Footnote 26 — The Atlantic: Amazon’s new deal with the U.S. Postal Service

Megan Garber, “Amazon’s New Deal with the U.S. Postal Service: The Unlikely Alliance That Ended Sunday Mail Delivery … in 1912” (The Atlantic, November 12, 2013)

A newspaper delivery vehicle for the Sunday Mail in Brisbane, Australia (Wikimedia Commons)

With the help of an extremely 21st-century company, the Postal Service is going back—in a small way—to its 17th-century roots. When the U.S. Postal Service teams up with Amazon to offer Sunday package delivery, the move will mark the first regular Sunday delivery the U.S. has seen, with a few exceptions, for a century.

The USPS … has long been an early adopter. The system that laid, literally, the groundwork for a growing nation wasn’t just about mail; it was also about connection. It was “the sole communication lifeline of the newly formed nation.” The Founders and their followers recognized this. Until the USPS was reorganized in the 1970s, the final position in the presidential line of succession was, yep, the Postmaster General. And in 1810, Congress passed a law requiring that local post offices be open for at least an hour on Sundays; most were open for much longer.

Despite and because of all that, the Postal Service was also … a party. As the historian Claude Fischer puts it, “post offices themselves were important community centers, where townsfolk met, heard the latest news read aloud, and just lounged about.” (The offices played that role, in part, because the Postal Service didn’t offer home delivery, even in large cities, until after 1860.) On Sundays, that town-center role was magnified. When everything else was closed but the local church, post offices were places you could go not just to pick up your mail, but also to hang out. They were taverns for the week’s tavern-less day. “Men would rush there as soon as the mail had arrived,” Fischer writes, “staying on to drink and play cards.”

Post offices, as a result, were also sources of controversy. In the 1820s, leaders from a variety of Protestant denominations campaigned to end Sunday delivery on religious grounds. Similar movements would arise over the course of the 19th century. And the objection wasn’t just to the Sunday-ness of Sunday delivery, to the fact that mail delivery on Sunday was a violation of the Sabbath. It was also to the social-ness of Sunday delivery. The six-day-delivery campaigns, Fischer writes, were “part of the churches’ wider efforts to enforce a ‘Puritan Sabbath’ against the demands of Mammon and against worldly temptations like those card games.” Exacerbating the problem, from the Puritanical perspective, was the rise in immigration among Catholics, “many of whom,” Fischer notes, “celebrated ‘Continental’ Sundays which included all sorts of secular pleasures—picnics, even beer halls—after (or instead of) church.”

The Ellisville, Illinois, Post Office, photographed on July 30, 1891 (USPS)
By the early 20th century, new technologies—the telegraph, the telephone, the train—had reduced people’s urgent reliance on the Postal Service. They could then, better than they could have before, do without Sunday deliveries. In 1912, without any debate on the matter, Congress added a rider to a funding bill. It ordered that “hereafter post offices … shall not be opened on Sundays for the purpose of delivering mail to the public.” On August 24, President Taft signed the bill into law. On September 1, it took effect.

And for just over a century, that law was, with its few exceptions, obeyed. As a result, we’ve all grown up in a United States that translates the logic of the Bible—Sunday, the day of rest—to the commercial and communicational lives of its citizens. In a small way, thanks to a company that is also an early adopter—and that is also, in its way, reorganizing the nation—that is now changing. The day of rest need no longer be fully restful. If you are, that is, a member of Amazon Prime.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Amazon’s new deal with the U.S. Postal Service will reverse a century-old approach to mail.
  NOV 12 2013, 11:12 AM ET

MEGAN GARBER is a staff writer at The Atlantic. She was formerly an assistant editor at the Nieman Journalism Lab, where she wrote about innovations in the media.

The Atlantic: When Is a Royal Baby a Fetus?


When Is a Royal Baby a Fetus?

Technically, right up until the moment he’s born. And yet we’ve called him a baby the whole time. What media coverage of the recent pregnancy and birth has to do with abortion politics.
JULY 24 2013, 9:12 AM ET

An excerpt:


Lefteris Pitarakis/AP Images

Moral philosopher James Q. Wilson wrote that humanity “has a moral sense.” Whether that moral sense is grounded in evolution, the image of God, or some other foundation, it sometimes leads us to act better than we speak. There are surprising moments, in other words, when our pre-conscious emotional and moral wiring responds to a situation in a way our more studied judgments would not permit.  A usually callous employee comforts a just-fired coworker in genuine sympathy. A man who hasn’t acted chivalrously in all his days instinctively holds a door open for a pregnant woman. A teenager roaming in one of those teenage-mall herds apologizes to a passer-by whom her friends have just mocked.

This week, as the U.K.’s Prince William and Kate Middleton were expecting their child at any moment, the impending birth received a galaxy’s worth of media coverage. That the child would be heir to the throne was a motivating factor in all this attention, to be sure. I was interested not only for this reason but for a less-noticed one: Countless media reports bore news about the “royal baby.”

Why was this noteworthy? Because this term, to get exegetical for a moment, was not used to describe the future state of the child—once born and outside of the womb, that is. No, the American media used this phrase “royal baby” to describe the pre-born infant. It’s not strange for leading pro-life thinkers like Eric Metaxas and Denny Burk to refer to a fetus as a “baby.” It’s not strange, either, for people to refer to a child they’re expecting as a “baby,” regardless of where they stand on the issue of abortion. It is strange, though, for outlets like the New York Times, the Washington Post, and the Boston Globe, which purport to be neutral on the issue, to use this seemingly explosive phrase without so much as a qualification. And why is this strange? Because it codes a pro-life position into their description of the unborn child.

I am a Christian who believes deeply in the sanctity of life, so for me, this language choice is revealing. The two most common arguments made today by thoughtful pro-choicers are as follows: a) the being in the womb has no distinct personhood when in the mother’s body, as it is only a fetus and not yet a person (as seen in this ruling of a 2004 Houston court), or b) the fetus has some hard-to-define measure of personhood, yes, but a sufficient degree less personhood than the mother such that the mother may conscionably, though sometimes painfully, terminate it (as in this New York Times essay). The linchpin of both of these arguments is location, closely related to dependence. If the fetus is born, it is outside the womb and relatively independent of the mother. If the fetus is unborn, it is inside the womb, part of the mother’s body, and therefore dependent on the mother and subject to her decisions.

These arguments—which really are basically one and the same—have persuaded many people. The result, virtually enshrined into media law, is this: Pre-born beings are to be called fetuses, and post-birth beings are to be called babies. Here’s the New York Times referring to aborted babies as fetuses in the Kermit Gosnell trial, for example; NPR follows the same logic, as does CNN. Fetuses, it seems, are essentially subhuman. Outside of the mainstream media, the rhetoric builds from this impersonal foundation. Not only are pre-born children subhuman; they are considered “clumps of cells,” in fact, or pre-human “seeds.” In both the mainstream media and the pro-abortion movement, fetuses are future humans being knit together in a woman’s body. They are not humans while in the womb. To kill them is not to kill a human, but something not-yet human.

How strange was it, then, that leading news sources referred to the fetus of William and Kate as the “royal baby.” There were no pre-birth headlines from serious journalistic sources like “Royal Clump of Cells Eagerly Anticipated” or “Imperial Seed Soon to Sprout.” None of the web’s traffic-hoarding empires ran “Subhuman Royal Fetus Soon to Become Human!” No, over and over again, one after another, from the top of the media food chain to the bottom, Kate’s “fetus” was called, simply and pre-committedly, a baby. Why was this? Because, as I see it, the royal baby was a baby before birth. The media was right; gloriously, happily right.

Like all babies-in-womb, in the months before Kate gave birth, the royal heir was spinning around, jabbing mom at inopportune moments, reacting in sheer physical bliss to the soothing sounds of dad’s voice, getting hungry, becoming sad and even agitated when voices were raised in marital conflict, sleeping, sucking its thumb, enjoying certain kinds of music, waking mom up in the night in order to do more spinning around/kicking, and eating hungrily what mom ate.

Thoughts for Both Sides of the Abortion Debate

How, if at all, does this “royal baby” phenomenon impact the current cultural debate over abortion? Here are a few thoughts for both sides of the abortion debate, pro-life and pro-choice alike.

+++++++++++++++++++++++++++++++++++++++++++

Read more at: http://www.theatlantic.com/sexes/archive/2013/07/when-is-a-royal-baby-a-fetus/278056/

The Atlantic: Listening to Young Atheists – Lessons for a Stronger Christianity


Listening to Young Atheists: Lessons for a Stronger Christianity

When a Christian foundation interviewed college nonbelievers about how and why they left religion, surprising themes emerged.
 JUNE 6 2013, 8:07 AM ET


Left, the pastor George Whitefield; right, the philosopher David Hume (Wikimedia Commons)

“Church became all about ceremony, handholding, and kumbaya,” Phil said with a look of disgust. “I missed my old youth pastor. He actually knew the Bible.”

I have known a lot of atheists. The late Christopher Hitchens was a friend with whom I debated, road tripped, and even had a lengthy private Bible study. I have moderated Richard Dawkins and, on occasion, clashed with him. And I have listened for hours to the (often unsettling) arguments of Peter Singer and a whole host of others like him. These men are some of the public faces of the so-called “New Atheism,” and when Christians think about the subject — if they think about it at all — it is this sort of atheist who comes to mind: men whose unbelief is, as Dawkins once proudly put it, “militant.” But Phil, the atheist college student who had come to my office to share his story, was of an altogether different sort.

Phil was in my office as part of a project that began last year. Over the course of my career, I have met many students like Phil. It has been my privilege to address college students all over the world, usually as one defending the Christian worldview. These events typically attract large numbers of atheists. I like that. I find talking to people who disagree with me much more stimulating than those gatherings that feel a bit too much like a political party convention, and the exchanges with these students are mostly thoughtful and respectful. At some point, I like to ask them a sincere question:

What led you to become an atheist?

Given that the New Atheism fashions itself as a movement that is ruthlessly scientific, it should come as no surprise that those answering my question usually attribute the decision to the purely rational and objective: one invokes his understanding of science; another says it was her exploration of the claims of this or that religion; and still others will say that religious beliefs are illogical, and so on. To hear them tell it, the choice was made from a philosophically neutral position that was void of emotion.

Christianity, when it is taken seriously, compels its adherents to engage the world, not retreat from it. There are a multitude of reasons for this mandate, ranging from care for the poor, orphaned, and widowed to offering hope to the hopeless. This means that Christians must be willing to listen to other perspectives while testing their own beliefs against them — above all, as the apostle Peter tells us, “with gentleness and respect.” The non-profit I direct, Fixed Point Foundation, endeavors to bridge the gaps between various factions (both religious and irreligious) as gently and respectfully as possible. Atheists particularly fascinate me. Perhaps it’s because I consider their philosophy — if the absence of belief may be called a philosophy — historically naive and potentially dangerous. Or maybe it’s because they, like any good Christian, take the Big Questions seriously. But it was how they processed those questions that intrigued me.

To gain some insight, we launched a nationwide campaign to interview college students who are members of Secular Student Alliances (SSA) or Freethought Societies (FS). These college groups are the atheist equivalents to Campus Crusade: They meet regularly for fellowship, encourage one another in their (un)belief, and even proselytize. They are people who are not merely irreligious; they are actively, determinedly irreligious.

Using the Fixed Point Foundation website, email, my Twitter, and my Facebook page, we contacted the leaders of these groups and asked if they and their fellow members would participate in our study. To our surprise, we received a flood of enquiries. Students ranging from Stanford University to the University of Alabama-Birmingham, from Northwestern to Portland State volunteered to talk to us. The rules were simple: Tell us your journey to unbelief. It was not our purpose to dispute their stories or to debate the merits of their views. Not then, anyway. We just wanted to listen to what they had to say. And what they had to say startled us.

This brings me back to Phil.

A smart, likable young man, he sat down nervously as my staff put a plate of food before him. Like others after him, he suspected a trap. Was he being punk’d? Talking to us required courage of all of these students, Phil most of all since he was the first to do so. Once he realized, however, that we truly meant him no harm, he started talking — and for three hours we listened.

Now the president of his campus’s SSA, Phil was once the president of his Methodist church’s youth group. He loved his church (“they weren’t just going through the motions”), his pastor (“a rock star trapped in a pastor’s body”), and, most of all, his youth leader, Jim (“a passionate man”). Jim’s Bible studies were particularly meaningful to him. He admired the fact that Jim didn’t dodge the tough chapters or the tough questions: “He didn’t always have satisfying answers or answers at all, but he didn’t run away from the questions either. The way he taught the Bible made me feel smart.”

Listening to his story I had to remind myself that Phil was an atheist, not a seminary student recalling those who had inspired him to enter the pastorate. As the narrative developed, however, it became clear where things came apart for Phil. During his junior year of high school, the church, in an effort to attract more young people, wanted Jim to teach less and play more. Difference of opinion over this new strategy led to Jim’s dismissal. He was replaced by Savannah, an attractive twenty-something who, according to Phil, “didn’t know a thing about the Bible.” The church got what it wanted: the youth group grew. But it lost Phil.

An hour deeper into our conversation I asked, “When did you begin to think of yourself as an atheist?”  He thought for a moment. “I would say by the end of my junior year.”  I checked my notes. “Wasn’t that about the time that your church fired Jim?”  He seemed surprised by the connection. “Yeah, I guess it was.”

Phil’s story, while unique in its parts, was on the whole typical of the stories we would hear from students across the country. Slowly, a composite sketch of American college-aged atheists began to emerge and it would challenge all that we thought we knew about this demographic. Here is what we learned:

They had attended church

Most of our participants had not chosen their worldview from ideologically neutral positions at all, but in reaction to Christianity. Not Islam. Not Buddhism. Christianity.

The mission and message of their churches was vague

These students heard plenty of messages encouraging “social justice,” community involvement, and “being good,” but they seldom saw the relationship between that message, Jesus Christ, and the Bible. Listen to Stephanie, a student at Northwestern: “The connection between Jesus and a person’s life was not clear.” This is an incisive critique. She seems to have intuitively understood that the church does not exist simply to address social ills, but to proclaim the teachings of its founder, Jesus Christ, and their relevance to the world. Since Stephanie did not see that connection, she saw little incentive to stay. We would hear this again.

They felt their churches offered superficial answers to life’s difficult questions

When our participants were asked what they found unconvincing about the Christian faith, they spoke of evolution vs. creation, sexuality, the reliability of the biblical text, Jesus as the only way, etc. Some had gone to church hoping to find answers to these questions. Others hoped to find answers to questions of personal significance, purpose, and ethics. Serious-minded, they often concluded that church services were largely shallow, harmless, and ultimately irrelevant. As Ben, an engineering major at the University of Texas, so bluntly put it: “I really started to get bored with church.”

They expressed their respect for those ministers who took the Bible seriously

Following our 2010 debate in Billings, Montana, I asked Christopher Hitchens why he didn’t try to savage me on stage the way he had so many others. His reply was immediate and emphatic: “Because you believe it.” Without fail, our former church-attending students expressed similar feelings for those Christians who unashamedly embraced biblical teaching. Michael, a political science major at Dartmouth, told us that he is drawn to Christians like that, adding: “I really can’t consider a Christian a good, moral person if he isn’t trying to convert me.” As surprising as it may seem, this sentiment is not as unusual as you might think. It finds resonance in the well-publicized comments of Penn Jillette, the atheist illusionist and comedian: “I don’t respect people who don’t proselytize. I don’t respect that at all. If you believe that there’s a heaven and hell and people could be going to hell or not getting eternal life or whatever, and you think that it’s not really worth telling them this because it would make it socially awkward…. How much do you have to hate somebody to believe that everlasting life is possible and not tell them that?” Comments like these should cause every Christian to examine his conscience to see if he truly believes that Jesus is, as he claimed, “the way, the truth, and the life.”

Ages 14-17 were decisive

One participant told us that she considered herself to be an atheist by the age of eight while another said that it was during his sophomore year of college that he de-converted, but these were the outliers. For most, the high school years were the time when they embraced unbelief.

The decision to embrace unbelief was often an emotional one

With few exceptions, students would begin by telling us that they had become atheists for exclusively rational reasons. But as we listened it became clear that, for most, this was a deeply emotional transition as well. This phenomenon was most powerfully exhibited in Meredith. She explained in detail how her study of anthropology had led her to atheism. When the conversation turned to her family, however, she spoke of an emotionally abusive father:

“It was when he died that I became an atheist,” she said.

I could see no obvious connection between her father’s death and her unbelief. Was it because she loved her abusive father — abused children often do love their parents — and she was angry with God for his death? “No,” Meredith explained. “I was terrified by the thought that he could still be alive somewhere.”

Rebecca, now a student at Clark University in Worcester, Massachusetts, bore similar childhood scars. When the state intervened and removed her from her home (her mother had attempted suicide), Rebecca prayed that God would let her return to her family. “He didn’t answer,” she said. “So I figured he must not be real.” After a moment’s reflection, she appended her remarks: “Either that, or maybe he is [real] and he’s just trying to teach me something.”

The internet factored heavily into their conversion to atheism

When our participants were asked to cite key influences in their conversion to atheism (people, books, seminars, etc.), we expected to hear frequent references to the names of the “New Atheists.” We did not. Not once. Instead, we heard vague references to videos they had watched on YouTube or website forums.

***

Religion is a sensitive topic, and a study like this is bound to draw critics. To begin with, there is, of course, another side to this story. Some Christians will object that our study was tilted against churches because they were given no chance to defend themselves. They might justifiably ask to what extent these students really engaged with their Bibles, their churches, and the Christians around them. But that is beside the point. If churches are to reach this growing element of American collegiate life, they must first understand who these people are, and that means listening to them.

Perhaps the most surprising aspect of this whole study was the lasting impression many of these discussions made upon us.

These students were, above all else, idealists who longed for authenticity, and having failed to find it in their churches, they settled for a non-belief that, while less grand in its promises, felt more genuine and attainable. I again quote Michael: “Christianity is something that if you really believed it, it would change your life and you would want to change [the lives] of others. I haven’t seen too much of that.”

Sincerity does not trump truth. After all, one can be sincerely wrong. But sincerity is indispensable to any truth we wish others to believe. There is something winsome, even irresistible, about a life lived with conviction. I am reminded of the Scottish philosopher and skeptic, David Hume, who was recognized among a crowd of those listening to the preaching of George Whitefield, the famed evangelist of the First Great Awakening:

“I thought you didn’t believe in the Gospel,” someone said to him. “I do not,” Hume replied. Then, with a nod toward Whitefield, he added, “But he does.”

The 20th Century’s Greatest 19th-century Statesman – Robert D. Kaplan in The Atlantic


Whatever one may think of Kissinger, anyone who pretends to an understanding of global geopolitical developments should ponder this article from The Atlantic, May 2013.

In Defense of Henry Kissinger

HE WAS THE 20TH CENTURY’S GREATEST 19TH-CENTURY STATESMAN.

By Robert D. Kaplan

APRIL 24 2013, 9:58 PM ET

In the summer of 2002, during the initial buildup to the invasion of Iraq, which he supported, Henry Kissinger told me he was nevertheless concerned about the lack of critical thinking and planning for the occupation of a Middle Eastern country where, as he put it, “normal politics have not been practiced for decades, and where new power struggles would therefore have to be very violent.” Thus is pessimism morally superior to misplaced optimism.

I have been a close friend of Henry Kissinger’s for some time, but my relationship with him as a historical figure began decades ago. When I was growing up, the received wisdom painted him as the ogre of Vietnam. Later, as I experienced firsthand the stubborn realities of the developing world, and came to understand the task that a liberal polity like the United States faced in protecting its interests, Kissinger took his place among the other political philosophers whose books I consulted to make sense of it all. In the 1980s, when I was traveling through Central Europe and the Balkans, I encountered A World Restored, Kissinger’s first book, published in 1957, about the diplomatic aftermath of the Napoleonic Wars. In that book, he laid out the significance of Austria as a “polyglot Empire [that] could never be part of a structure legitimized by nationalism,” and he offered a telling truth about Greece, where I had been living for most of the decade: whatever attraction the war for Greek independence had held for the literati of the 1820s, it was not born of “a revolution of middle-class origin to achieve political liberty,” he cautioned, “but a national movement with a religious basis.”

When policy makers disparage Kissinger in private, they tend to do so in a manner that reveals how much they measure themselves against him. The former secretary of state turns 90 this month. To mark his legacy, we need to begin in the 19th century.

In August of 1822, Britain’s radical intelligentsia openly rejoiced upon hearing the news of Robert Stewart’s suicide. Lord Byron, the Romantic poet and heroic adventurer, described Stewart, better known as Viscount Castlereagh, as a “cold-blooded, … placid miscreant.” Castlereagh, the British foreign secretary from 1812 to 1822, had helped organize the military coalition that defeated Napoleon and afterward helped negotiate a peace settlement that kept Europe free of large-scale violence for decades. But because the settlement restored the Bourbon dynasty in France, while providing the forces of Liberalism little reward for their efforts, Castlereagh’s accomplishment lacked any idealistic element, without which the radicals could not be mollified. Of course, this very lack of idealism, by safeguarding the aristocratic order, provided various sovereigns with the only point on which they could unite against Napoleon and establish a continent-wide peace—a peace, it should be noted, that helped Britain emerge as the dominant world power before the close of the 19th century.

One person who did not rejoice at Castlereagh’s death was Henry John Temple, the future British foreign secretary, better known as Lord Palmerston. “There could not have been a greater loss to the Government,” Palmerston declared, “and few greater to the country.” Palmerston himself would soon join the battle against the U.K.’s radical intellectuals, who in the early 1820s demanded that Britain go to war to help democracy take root in Spain, even though no vital British interest had been threatened—and even though this same intellectual class had at times shown only limited enthusiasm for the war against Napoleon, during which Britain’s very survival seemed at stake.

In a career spanning more than two decades in the Foreign Office, Palmerston was fated on occasion to be just as hated as Castlereagh. Like Castlereagh, Palmerston had only one immutable principle in foreign policy: British self-interest, synonymous with the preservation of the worldwide balance of power. But Palmerston also had clear liberal instincts. Because Britain’s was a constitutional government, he knew that the country’s self-interest lay in promoting constitutional governments abroad. He showed sympathy for the 1848 revolutions on the Continent, and consequently was beloved by the liberals. Still, Palmerston understood that his liberal internationalism, if one could call it that, was only a general principle—a principle that, given the variety of situations around the world, required constant bending. Thus, Palmerston encouraged liberalism in Germany in the 1830s but thwarted it there in the 1840s. He supported constitutionalism in Portugal, but opposed it in Serbia and Mexico. He supported any tribal chieftain who extended British India’s sphere of influence northwest into Afghanistan, toward Russia, and opposed any who extended Russia’s sphere of influence southeast, toward India—even as he cooperated with Russia in Persia.

Realizing that many people—and radicals in particular—tended to confuse foreign policy with their own private theology, Palmerston may have considered the moral condemnation that greeted him in some quarters as natural. (John Bright, the Liberal statesman, would later describe Palmerston’s tenure as “one long crime.”)

Yet without his flexible approach to the world, Palmerston could never have navigated the shoals of one foreign-policy crisis after another, helping Britain—despite the catastrophe of the Indian Mutiny in 1857—manage the transition from its ad hoc imperialism of the first half of the 19th century to the formal, steam-driven empire built on science and trade of the second half.

Decades passed before Palmerston’s accomplishments as arguably Britain’s greatest diplomat became fully apparent. In his own day, Palmerston labored hard to preserve the status quo, even as he sincerely desired a better world. “He wanted to prevent any power from becoming so strong that it might threaten Britain,” one of his biographers, Jasper Ridley, wrote. “To prevent the outbreak of major wars in which Britain might be involved and weakened,” Palmerston’s foreign policy “was therefore a series of tactical improvisations, which he carried out with great skill.”

Like Palmerston, Henry Kissinger believes that in difficult, uncertain times—times like the 1960s and ’70s in America, when the nation’s vulnerabilities appeared to outweigh its opportunities—the preservation of the status quo should constitute the highest morality. Other, luckier political leaders might later discover opportunities to encourage liberalism where before there had been none. The trick is to maintain one’s power undiminished until that moment.

Ensuring a nation’s survival sometimes leaves tragically little room for private morality. Discovering the inapplicability of Judeo-Christian morality in certain circumstances involving affairs of state can be searing. The rare individuals who have recognized the necessity of violating such morality, acted accordingly, and taken responsibility for their actions are among the most necessary leaders for their countries, even as they have caused great unease among generations of well-meaning intellectuals who, free of the burden of real-world bureaucratic responsibility, make choices in the abstract and treat morality as an inflexible absolute.

Fernando Pessoa, the early-20th-century Portuguese poet and existentialist writer, observed that if the strategist “thought of the darkness he cast on a thousand homes and the pain he caused in three thousand hearts,” he would be “unable to act,” and then there would be no one to save civilization from its enemies. Because many artists and intellectuals cannot accept this horrible but necessary truth, their work, Pessoa said, “serves as an outlet for the sensitivity [that] action had to leave behind.” That is ultimately why Henry Kissinger is despised in some quarters, much as Castlereagh and Palmerston were.

To be uncomfortable with Kissinger is, as Palmerston might say, only natural. But to condemn him outright verges on sanctimony, if not delusion. Kissinger has, in fact, been quite moral—provided, of course, that you accept the Cold War assumptions of the age in which he operated.

Because of the triumphalist manner in which the Cold War suddenly and unexpectedly ended, many have since viewed the West’s victory as a foregone conclusion, and therefore have tended to see the tough measures that Kissinger and others occasionally took as unwarranted. But for those in the midst of fighting the Cold War—who worked in the national-security apparatus during the long, dreary decades when nuclear confrontation seemed abundantly possible—its end was hardly foreseeable.

People forget what Eastern Europe was like during the Cold War, especially prior to the 1980s: the combination of secret-police terror and regime-induced poverty gave the impression of a vast, dimly lit prison yard. What kept that prison yard from expanding was mainly the projection of American power, in the form of military divisions armed with nuclear weapons. That such weapons were never used did not mean they were unnecessary. Quite the opposite, in fact: the men who planned Armageddon, far from being the Dr. Strangeloves satirized by Hollywood, were precisely the people who kept the peace.

Many Baby Boomers, who lived through the Cold War but who have no personal memory of World War II, artificially separate these two conflicts. But for Kissinger, a Holocaust refugee and U.S. Army intelligence officer in occupied Germany; for General Creighton Abrams, a tank commander under George Patton in World War II and the commander of American forces in Vietnam from 1968 onward; and for General Maxwell Taylor, who parachuted into Nazi-occupied France and was later the U.S. ambassador to South Vietnam, the Cold War was a continuation of the Second World War.

Beyond Eastern Europe, revolutionary nihilists were attempting to make more Cubas in Latin America, while a Communist regime in China killed at least 20 million of its own citizens through the collectivization program known as the Great Leap Forward. Meanwhile, the North Vietnamese Communists—as ruthless a group of people as the 20th century produced—murdered perhaps tens of thousands of their own citizens before the first American troops arrived in Vietnam. People forget that it was, in part, an idealistic sense of mission that helped draw us into that conflict—the same well of idealism that helped us fight World War II and that motivated our interventions in the Balkans in the 1990s. Those who fervently supported intervention in Rwanda and the former Yugoslavia yet fail to comprehend the similar logic that led us into Vietnam are bereft of historical memory.

In Vietnam, America’s idealism collided head-on with the military limitations imposed by a difficult geography. This destroyed the political consensus in the United States about how the Cold War should be waged. Reviewing Kissinger’s book Ending the Vietnam War (2003), the historian and journalist Evan Thomas implied that the essence of Kissinger’s tragedy was that he was perennially trying to gain membership in a club that no longer existed. That club was “the Establishment,” a term that began to go out of fashion during the nation’s Vietnam trauma. The Establishment comprised all the great and prestigious personages of business and foreign policy—all male, all Protestant, men like John J. McCloy and Charles Bohlen—whose influence and pragmatism bridged the gap between the Republican and Democratic Parties at a time when Communism was the enemy, just as Fascism had recently been. Kissinger, a Jew who had escaped the Holocaust, was perhaps the club’s most brilliant protégé. His fate was to step into the vortex of foreign policy just as the Establishment was breaking up over how to extricate the country from a war that the Establishment itself had helped lead the country into.

Kissinger became President Richard Nixon’s national-security adviser in January of 1969, and his secretary of state in 1973. As a Harvard professor and “Rockefeller Republican,” Kissinger was distrusted by the anti-intellectual Republican right wing. (Meanwhile, the Democratic Party was slipping into the de facto quasi-isolationism that would soon be associated with George McGovern’s “Come Home, America” slogan.) Nixon and Kissinger inherited from President Lyndon Johnson a situation in which almost 550,000 American troops, as well as their South Vietnamese allies (at least 1 million soldiers all told), were fighting a similar number of North Vietnamese troops and guerrillas. On the home front, demonstrators—drawn in large part from the nation’s economic and educational elite—were demanding that the United States withdraw all its troops virtually immediately.

Some prominent American protesters even visited North Vietnam to publicly express solidarity with the enemy. The Communists, in turn, seduced foreign supporters with soothing assurances of Hanoi’s willingness to compromise. When Charles de Gaulle was negotiating a withdrawal of French troops from Algeria in the late 1950s and early 1960s (as Kissinger records in Ending the Vietnam War), the Algerians knew that if they did not strike a deal with him, his replacement would certainly be more hard-line. But the North Vietnamese probably figured the opposite—that because of the rise of McGovernism in the Democratic Party, Nixon and Kissinger were all that stood in the way of American surrender. Thus, Nixon and Kissinger’s negotiating position was infinitely more difficult than de Gaulle’s had been.

Kissinger found himself caught between liberals who essentially wanted to capitulate rather than negotiate, and conservatives ambivalent about the war who believed that serious negotiations with China and the Soviet Union were tantamount to selling out. Both positions were fantasies that only those out of power could indulge.

Further complicating Kissinger’s problem was the paramount assumption of the age—that the Cold War would have no end, and therefore regimes like those in China and the Soviet Union would have to be dealt with indefinitely. Hitler, a fiery revolutionary, had expended himself after 12 bloody years. But Mao Zedong and Leonid Brezhnev oversaw dull, plodding machines of repression that were in power for decades—a quarter century in Mao’s case, and more than half a century in Brezhnev’s. Neither regime showed any sign of collapse. Treating Communist China and the Soviet Union as legitimate states, even while Kissinger played China off against the Soviet Union and negotiated nuclear-arms agreements with the latter, did not constitute a sellout, as some conservatives alleged. It was, rather, a recognition of America’s “eternal and perpetual interests,” to quote Palmerston, refitted to an age threatened by thermonuclear war.

In the face of liberal capitulation, a conservative flight from reality, and North Vietnam’s relentlessness, Kissinger’s task was to withdraw from the region in a way that did not betray America’s South Vietnamese allies. In doing so, he sought to preserve America’s powerful reputation, which was crucial for dealing with China and the Soviet Union, as well as the nations of the Middle East and Latin America. Sir Michael Howard, the eminent British war historian, notes that the balance-of-power ethos to which Kissinger subscribes represents the middle ground between “optimistic American ecumenicism” (the basis for many global-disarmament movements) and the “war culture” of the American Wild West (in recent times associated with President George W. Bush). This ethos was never cynical or amoral, as the post–Cold War generation has tended to assert. Rather, it evinced a timeless and enlightened principle of statesmanship.

Kissinger confers with President Lyndon Johnson not long after being appointed to Richard Nixon’s national-security team. December 5, 1968. (Associated Press)


Within two years, Nixon and Kissinger reduced the number of American troops in Vietnam to 156,800; the last ground combat forces left three and a half years after Nixon took office. It had taken Charles de Gaulle longer than that to end France’s involvement in Algeria. (Frustration over the failure to withdraw even more quickly rests on two difficult assumptions: that the impossibility of preserving South Vietnam in any form was accepted in 1969, and that the North Vietnamese had always been negotiating in good faith. Still, the continuation of the war past 1969 will forever be Nixon’s and Kissinger’s original sin.)

That successful troop withdrawal was facilitated by a bombing incursion into Cambodia—primarily into areas replete with North Vietnamese military redoubts and small civilian populations, over which the Cambodian government had little control. The bombing, called “secret” by the media, was public knowledge during 90 percent of the time it was carried out, wrote Samuel Huntington, the late Harvard professor who served on President Jimmy Carter’s National Security Council. The early secrecy, he noted, was to avoid embarrassing Cambodia’s Prince Norodom Sihanouk and complicating peace talks with the North Vietnamese.

The troop withdrawals were also facilitated by aerial bombardments of North Vietnam. Victor Davis Hanson, the neoconservative historian, writes that, “far from being ineffective and indiscriminate,” as many critics of the Nixon-Kissinger war effort later claimed, the Christmas bombings of December 1972 in particular “brought the communists back to the peace table through its destruction of just a few key installations.” Hanson may be a neoconservative, but his view is hardly a radical reinterpretation of history; in fact, he is simply reading the news accounts of the era. Soon after the Christmas bombings, Malcolm W. Browne of The New York Times found the damage to have been “grossly overstated by North Vietnamese propaganda.” Peter Ward, a reporter for The Baltimore Sun, wrote, “Evidence on the ground disproves charges of indiscriminate bombing. Several bomb loads obviously went astray into civilian residential areas, but damage there is minor, compared to the total destruction of selected targets.”

The ritualistic vehemence with which many have condemned the bombings of North Vietnam, the incursion into Cambodia, and other events betrays, in certain cases, an ignorance of the facts and of the context that informed America’s difficult decisions during Vietnam.

The troop withdrawals that Nixon and Kissinger engineered, while faster than de Gaulle’s had been from Algeria, were gradual enough to prevent complete American humiliation. This preservation of America’s global standing enabled the president and the secretary of state to manage a historic reconciliation with China, which helped provide the requisite leverage for a landmark strategic arms pact with the Soviet Union—even as, in 1970, Nixon and Kissinger’s threats to Moscow helped stop Syrian tanks from crossing farther into Jordan and toppling King Hussein. At a time when defeatism reigned, Kissinger improvised in a way that would have impressed Palmerston.

Yes, Kissinger’s record is marked by nasty tactical miscalculations—mistakes that have spawned whole libraries of books. But the notion that the Nixon administration might have withdrawn more than 500,000 American troops from Vietnam within a few months in 1969 is problematic, especially when one considers the complexities that smaller and more gradual withdrawals in Bosnia, Iraq, and Afghanistan later imposed on military planners. (And that’s leaving aside the diplomatic and strategic fallout beyond Southeast Asia that America’s sudden and complete betrayal of a longtime ally would have generated.)

Despite the North Vietnamese invasion of eastern Cambodia in 1970, the U.S. Congress substantially cut aid between 1971 and 1974 to the Lon Nol regime, which had replaced Prince Sihanouk’s, and also barred the U.S. Air Force from helping Lon Nol fight against the Khmer Rouge. Future historians will consider those actions more instrumental in the 1975 Khmer Rouge takeover of Cambodia than Nixon’s bombing of sparsely populated regions of Cambodia six years earlier.

When Saigon fell to the Communists, in April of 1975, it was after a heavily Democratic Congress drastically cut aid to the South Vietnamese. The regime might not have survived even if Congress had not cut aid so severely. But that cutoff, one should recall, was not merely a statement about South Vietnam’s hopelessness; it was a consequence of Watergate, in which Nixon eviscerated his own influence in the capital, and seriously undermined Gerald Ford’s incoming administration. Kissinger’s own words in Ending the Vietnam War deserve to echo through the ages:

None of us could imagine that a collapse of presidential authority would follow the expected sweeping electoral victory [of Nixon in 1972]. We were convinced that we were working on an agreement that could be sustained by our South Vietnamese allies with American help against an all-­out invasion. Protesters could speak of Vietnam in terms of the excesses of an aberrant society, but when my colleagues and I thought of Vietnam, it was in terms of dedicated men and women—soldiers and Foreign Service officers—who had struggled and suffered there and of our Vietnamese associates now condemned to face an uncertain but surely painful fate. These Americans had honestly believed that they were defending the cause of freedom against a brutal enemy in treacherous jungles and distant rice paddies. Vilified by the media, assailed in Congress, and ridiculed by the protest movement, they had sustained America’s idealistic tradition, risking their lives and expending their youth on a struggle that American leadership groups had initiated, then abandoned, and finally disdained.

Kissinger’s diplomatic achievements reached far beyond Southeast Asia. Between 1973 and 1975, Kissinger, serving Nixon and then Gerald Ford, steered the Yom Kippur War toward a stalemate that was convenient for American interests, and then brokered agreements between Israel and its Arab adversaries for a separation of forces. Those deals allowed Washington to reestablish diplomatic relations with Egypt and Syria for the first time since their rupture following the Six-Day War in 1967. The agreements also established the context for the Egyptian-Israeli peace treaty of 1979, and helped stabilize a modus vivendi between Israel and Syria that has lasted well past the turn of the 21st century.

In the fall of 1973, with Chile dissolving into chaos and open to the Soviet bloc’s infiltration as a result of Salvador Allende’s anarchic and incompetent rule, Nixon and Kissinger encouraged a military coup led by General Augusto Pinochet, during which thousands of innocent people were killed. Their cold moral logic was that a right-wing regime of any kind would ultimately be better for Chile and for Latin America than a leftist regime of any kind—and would also be in the best interests of the United States. They were right—though at a perhaps intolerable cost.

While much of the rest of Latin America dithered with socialist experiments, in the first seven years of Pinochet’s regime, the number of state companies in Chile went from 500 to 25—a shift that helped lead to the creation of more than 1 million jobs and the reduction of the poverty rate from roughly one-third of the population to as low as one-tenth. The infant mortality rate also shrank, from 78 deaths per 1,000 births to 18. The Chilean social and economic miracle has become a paradigm throughout the developing world, and in the ex-Communist world in particular. Still, no amount of economic and social gain justifies almost two decades of systematic torture perpetrated against tens of thousands of victims in more than 1,000 detention centers.

But real history is not the trumpeting of ugly facts untempered by historical and philosophical context—the stuff of much investigative journalism. Real history is built on constant comparison with other epochs and other parts of the world. It is particularly useful, therefore, to compare the records of the Ford and Carter administrations in the Horn of Africa, and especially in Ethiopia—a country that in the 1970s was more than three times as populous as Pinochet’s Chile.

In his later years, Kissinger has not been able to travel to a number of countries where legal threats regarding his actions in the 1970s in Latin America hang over his head. Yet in those same countries, Jimmy Carter is regarded almost as a saint. Let’s consider how Carter’s morality stacks up against Kissinger’s in the case of Ethiopia, which, like Angola, Nicaragua, and Afghanistan, was among the dominoes that became increasingly unstable and then fell in the months and years following Saigon’s collapse, partly disproving another myth of the Vietnam antiwar protest movement—that the domino theory was wrong.

As I’ve written elsewhere, including in my 1988 book, Surrender or Starve, the left-leaning Ethiopian Dergue and its ascetic, pitiless new leader, Mengistu Haile Mariam, had risen to power while the U.S. was preoccupied with Watergate and the fall of South Vietnam. Kissinger, now President Ford’s secretary of state, tried to retain influence in Ethiopia by continuing to provide some military assistance to Addis Ababa. Had the United States given up all its leverage in Ethiopia, the country might have moved to the next stage and become a Soviet satellite, with disastrous human-­rights consequences for its entire population.

Ford and Kissinger were replaced in January of 1977 by Jimmy Carter and his secretary of state, Cyrus Vance, who wanted a policy that was both more attuned to and less heavy-handed toward sub-Saharan Africa. In the Horn of Africa, this translated immediately into a Cold War disadvantage for America, because the Soviets—spurred on by the fall of South Vietnam—were becoming more belligerent, and more willing to expend resources, than ever.

With Ethiopia torn apart by revolutionary turmoil, the Soviets used their Somali clients as a lever against Addis Ababa. Somalia then was a country of only 3 million nomads, but Ethiopia had an urbanized population 10 times that size: excellent provender for the mechanized African satellite that became Leonid Brezhnev’s supreme objective. The Soviets, while threatening Ethiopia by supplying its rival with weapons, were also offering it military aid—the classic carrot-and-stick strategy. Yet partly because of the M-60 tanks and F-5 warplanes that Mengistu was still—largely thanks to Kissinger—receiving from the United States, the Ethiopian leader was hesitant about undertaking the disruptive task of switching munitions suppliers for an entire army.

In the spring of 1977, Carter cut off arms deliveries to Ethiopia because of its human-rights record. The Soviets dispatched East German security police to Addis Ababa to help Mengistu consolidate his regime, and invited the Ethiopian ruler to Moscow for a week-long state visit. Then Cuban advisers visited Ethiopia, even while tanks and other equipment arrived from pro-Soviet South Yemen. In the following months, with the help of the East Germans, the Dergue gunned down hundreds of Ethiopian teenagers in the streets in what came to be known as the “Red Terror.”

Still, all was not lost—at least not yet. The Ethiopian Revolution, leftist as it was, showed relatively few overt signs of anti-­Americanism. Israel’s new prime minister, Menachem Begin, in an attempt to save Ethiopian Jews, beseeched Carter not to close the door completely on Ethiopia and to give Mengistu some military assistance against the Somali advance.

But Begin’s plea went unheeded. The partial result of Carter’s inaction was that Ethiopia went from being yet another left-leaning regime to a full-fledged Marxist state, in which hundreds of thousands of people died in collectivization and “villagization” schemes—to say nothing of the hundreds of thousands who died in famines that were as much a consequence of made-in-Moscow agricultural policies as they were of drought.

Ethiopians should have been so lucky as to have had a Pinochet.

The link between Carter’s decision not to play Kissingerian power politics in the Horn of Africa and the mass deaths that followed in Ethiopia is more direct than the link between Nixon’s incursion into a rural area of Cambodia and the Khmer Rouge takeover six years later.

In the late 19th century, Lord Palmerston was still a controversial figure. By the 20th, he was considered by many to have been one of Britain’s greatest foreign ministers. Kissinger’s reputation will follow a similar path. Of all the memoirs written by former American secretaries of state and national-security advisers during the past few decades, his are certainly the most vast and the most intellectually stimulating, revealing the elaborate historical and philosophical milieu that surrounds difficult foreign-policy decisions. Kissinger will have the final say precisely because he writes so much better for a general audience than do most of his critics. Mere exposé often has a shorter shelf life than the work of a statesman aware of his own tragic circumstances and able to connect them to a larger pattern of events. A colleague of mine with experience in government once noted that, as a European-style realist, Kissinger has thought more about morality and ethics than most self-styled moralists. Realism is about the ultimate moral ambition in foreign policy: the avoidance of war through a favorable balance of power.

Aside from the successful interventions in the Balkans, the greatest humanitarian gesture in my own lifetime was President Richard Nixon’s trip to the People’s Republic of China in 1972, engineered by Kissinger. By dropping the notion that Taiwan was the real China, by giving China protection against the Soviet Union, and by providing assurances against an economically resurgent Japan, the two men helped place China in a position to devote itself to peaceful economic development; China’s economic rise, facilitated by Deng Xiaoping, would lift much of Asia out of poverty. And as more than 1 billion people in the Far East saw a dramatic improvement in living standards, personal freedom effloresced.

Pundits chastised Kissinger for saying, in 1973, that Jewish emigration from the Soviet Union was “not an American concern.” But as J. J. Goldberg of The Jewish Daily Forward was careful to note (even while being very critical of Kissinger’s cynicism on the subject), “Emigration rose dramatically under Kissinger’s detente policy”—but “plummeted” after the 1974 passage of the Jackson-Vanik amendment, which made an open emigration policy a precondition for normal U.S.-Soviet trade relations; aggrieved that the Americans would presume to dictate their emigration policies, the Soviets began authorizing fewer exit visas. In other words, Kissinger’s realism was more effective than the humanitarianism of Jewish groups in addressing a human-rights concern.

Kissinger is a Jewish intellectual who recognizes a singular unappealing truth: that the Republican Party, its strains of anti-Semitism in certain periods notwithstanding, was better able to protect America than the Democratic Party of his era, because the Republicans better understood and, in fact, relished the projection of American power at a juncture in the Cold War when the Democrats were undermined by defeatism and quasi-isolationism. (That Kissinger-style realism is now more popular in Barack Obama’s White House than among the GOP indicates how far today’s Republicans have drifted from their core values.)

But unlike his fellow Republicans of the Cold War era—dull and practical men of business, blissfully unaware of what the prestigious intellectual journals of opinion had to say about them—Kissinger has always been painfully conscious of the degree to which he is loathed. He made life-and-death decisions that affected millions, entailing many messy moral compromises. Had it not been for the tough decisions Nixon, Ford, and Kissinger made, the United States might not have withstood the damage caused by Carter’s bouts of moralistic ineptitude; nor would Ronald Reagan have had the luxury of his successfully executed Wilsonianism. Henry Kissinger’s classical realism—as expressed in both his books and his statecraft—is emotionally unsatisfying but analytically timeless. The degree to which Republicans can recover his sensibility in foreign policy will help determine their own prospects for regaining power.

This article available online at:

http://www.theatlantic.com/magazine/archive/2013/05/the-statesman/309283/

Media Bias

Why the lack of coverage by national media?

(Props to Atlantic and Conor Friedersdorf)

Why Dr. Kermit Gosnell’s Trial Should Be a Front-Page Story

The dead babies. The exploited women. The racism. The numerous governmental failures. It is thoroughly newsworthy.
APR 12 2013, 10:14 AM ET

Please note: This post contains graphic descriptions and imagery.

The grand jury report in the case of Kermit Gosnell, 72, is among the most horrifying I’ve read. “This case is about a doctor who killed babies and endangered women. What we mean is that he regularly and illegally delivered live, viable babies in the third trimester of pregnancy – and then murdered these newborns by severing their spinal cords with scissors,” it states. “The medical practice by which he carried out this business was a filthy fraud in which he overdosed his patients with dangerous drugs, spread venereal disease among them with infected instruments, perforated their wombs and bowels – and, on at least two occasions, caused their deaths.”

Charged with seven counts of first-degree murder, Gosnell is now standing trial in a Philadelphia courtroom. An NBC affiliate’s coverage includes testimony as grisly as you’d expect. “An unlicensed medical school graduate delivered graphic testimony about the chaos at a Philadelphia clinic where he helped perform late-term abortions,” the channel reports. “Stephen Massof described how he snipped the spinal cords of babies, calling it, ‘literally a beheading. It is separating the brain from the body.’ He testified that at times, when women were given medicine to speed up their deliveries, ‘it would rain fetuses. Fetuses and blood all over the place.'”

One former employee described hearing a baby screaming after it was delivered during an abortion procedure. “I can’t describe it. It sounded like a little alien,” she testified. Said the Philadelphia Inquirer in its coverage, “Prosecutors have cited the dozens of jars of severed baby feet as an example of Gosnell’s idiosyncratic and illegal practice of providing abortions for cash to poor women pregnant longer than the 24-week cutoff for legal abortions in Pennsylvania.”

Until Thursday, I wasn’t aware of this story. It has generated sparse coverage in the national media, and while it’s been mentioned in RSS feeds to which I subscribe, I skip past most news items, even though I still consume a tremendous amount of journalism. Yet had I been asked at a trivia night about the identity of Kermit Gosnell, I would’ve been stumped and helplessly guessed a green Muppet. Then I saw Kirsten Powers’s USA Today column. She makes a powerful, persuasive case that the Gosnell trial ought to be getting a lot more attention in the national press than it is.

The media criticism angle interests me. But I agree that the story has been undercovered, and I happen to be a working journalist, so I’ll begin by telling the rest of the story for its own sake. Only then will I explain why I think it deserves more coverage than it has gotten, although it ought to be self-evident by the time I’m done distilling the grand jury’s allegations. Grand juries aren’t infallible. This version of events hasn’t been proven in a court of law. But journalists routinely treat accounts given by police, prosecutors, and grand juries as at least plausible if not proven. Try to decide, as you hear the state’s side of the case, whether you think it is credible, and if so, whether the possibility that some or all of this happened demands massive journalistic scrutiny.

* * *

On February 18, 2010, the FBI raided the “Women’s Medical Society,” entering its offices about 8:30 p.m. Agents expected to find evidence that it was illegally selling prescription drugs. On entering, they quickly realized something else was amiss. In the grand jury report’s telling, “There was blood on the floor. A stench of urine filled the air. A flea-infested cat was wandering through the facility, and there were cat feces on the stairs. Semi-conscious women scheduled for abortions were moaning in the waiting room or the recovery room, where they sat on dirty recliners covered with blood-stained blankets. All the women had been sedated by unlicensed staff.” Authorities had also learned about the patient who died at the facility several months prior.

Public health officials inspected the surgery rooms. “Instruments were not sterile,” the grand jury states. “Equipment was rusty and outdated. Oxygen equipment was covered with dust, and had not been inspected. The same corroded suction tubing used for abortions was the only tubing available for oral airways if assistance for breathing was needed. There was no functioning resuscitation or even monitoring equipment, except for a single blood pressure cuff.” Upon further inspection, “the search team discovered fetal remains haphazardly stored throughout the clinic – in bags, milk jugs, orange juice cartons, and even in cat-food containers.” And “Gosnell admitted to Detective Wood that at least 10 to 20 percent of the fetuses were probably older than 24 weeks in gestation – even though Pennsylvania law prohibits abortions after 24 weeks. In some instances, surgical incisions had been made at the base of the fetal skulls.”

Gosnell’s medical license was quickly suspended. Eighteen days later, the Department of Health filed papers to start the process of closing the clinic. The district attorney submitted the case to the grand jury on May 4, 2010. Testimony was taken from 58 witnesses. Evidence was examined.

In Pennsylvania, most doctors won’t perform abortions after the 20th week, many for health reasons, others for moral reasons. Abortions after 24 weeks are illegal. Until 2009, Gosnell reportedly performed mostly first and second trimester abortions. But his clinic had come to develop a bad reputation, and could attract only women who couldn’t get an abortion elsewhere, former employees have said. “Steven Massof estimated that in 40 percent of the second-trimester abortions performed by Gosnell, the fetuses were beyond 24 weeks gestational age,” the grand jury states. “Latosha Lewis testified that Gosnell performed procedures over 24 weeks ‘too much to count,’ and ones up to 26 weeks ‘very often.’ …in the last few years, she testified, Gosnell increasingly saw out-of-state referrals, which were all second-trimester, or beyond. By these estimates, Gosnell performed at least four or five illegal abortions every week.”

The grand jury report includes an image of a particularly extreme case (the caption is theirs, not mine):

[Image from the grand jury report]
That photo pertains to an unusual case, in that the mother had to seek help at a hospital after the abortion she sought at Gosnell’s office went awry. The grand jury report summarizes a more typical late-term abortion, as conducted at the clinic, concluding with the following passage:

When you perform late-term “abortions” by inducing labor, you get babies. Live, breathing, squirming babies. By 24 weeks, most babies born prematurely will survive if they receive appropriate medical care. But that was not what the Women’s Medical Society was about. Gosnell had a simple solution for the unwanted babies he delivered: he killed them. He didn’t call it that. He called it “ensuring fetal demise.” The way he ensured fetal demise was by sticking scissors into the back of the baby’s neck and cutting the spinal cord. He called that “snipping.”

Over the years, there were hundreds of “snippings.” Sometimes, if Gosnell was unavailable, the “snipping” was done by one of his fake doctors, or even by one of the administrative staff.

But all the employees of the Women’s Medical Society knew. Everyone there acted as if it wasn’t murder at all. Most of these acts cannot be prosecuted, because Gosnell destroyed the files. Among the relatively few cases that could be specifically documented, one was Baby Boy A. His 17-year-old mother was almost 30 weeks pregnant — seven and a half months — when labor was induced. An employee estimated his birth weight as approaching six pounds. He was breathing and moving when Gosnell severed his spine and put the body in a plastic shoebox for disposal. The doctor joked that this baby was so big he could “walk me to the bus stop.” Another, Baby Boy B, whose body was found at the clinic frozen in a one-gallon spring-water bottle, was at least 28 weeks of gestational age when he was killed. Baby C was moving and breathing for 20 minutes before an assistant came in and cut the spinal cord, just the way she had seen Gosnell do it so many times. And these were not even the worst cases.

Abuse of Women Patients
What little media coverage there’s been of the case has understandably focused on the murder allegations. The grand jury report also makes clear how horrific the Women’s Medical Society was for its patients.

The unsanitary conditions were just the beginning.

One woman “was left lying in place for hours after Gosnell tore her cervix and colon while trying, unsuccessfully, to extract the fetus,” the report states. Another patient, 19, “was held for several hours after Gosnell punctured her uterus. As a result of the delay, she fell into shock from blood loss, and had to undergo a hysterectomy.” A third patient “went into convulsions during an abortion, fell off the procedure table, and hit her head on the floor. Gosnell wouldn’t call an ambulance, and wouldn’t let the woman’s companion leave the building so that he could call an ambulance.”

Oftentimes, women given drugs to induce labor delivered before the doctor even arrived at work.

Said one former employee:

If… a baby was about to come out, I would take the woman to the bathroom, they would sit on the toilet and basically the baby would fall out and it would be in the toilet and I would be rubbing her back and trying to calm her down for two, three, four hours until Dr. Gosnell comes.

She would not move.

One patient died:

She was a 41-year-old refugee who had recently come to the United States from a resettlement camp in Nepal. When she arrived at the clinic, Gosnell, as usual, was not there. Office workers had her sign various forms that she could not read, and then began doping her up. She received repeated unmonitored, unrecorded intravenous injections of Demerol, a sedative seldom used in recent years because of its dangers. Gosnell liked it because it was cheap. After several hours, Mrs. Mongar simply stopped breathing. When employees finally noticed, Gosnell was called in and briefly attempted to give CPR. He couldn’t use the defibrillator (it was broken); nor did he administer emergency medications that might have restarted her heart. After further crucial delay, paramedics finally arrived, but Mrs. Mongar was probably brain dead before they were even called. In the meantime, the clinic staff hooked up machinery and rearranged her body to make it look like they had been in the midst of a routine, safe abortion procedure.

Even then, there might have been some slim hope of reviving Mrs. Mongar. The paramedics were able to generate a weak pulse. But, because of the cluttered hallways and the padlocked emergency door, it took them over twenty minutes just to find a way to get her out of the building. Doctors at the hospital managed to keep her heart beating, but they never knew what they were trying to treat, because Gosnell and his staff lied about how much anesthesia they had given, and who had given it. By that point, there was no way to restore any neurological activity. Life support was removed the next day. Karnamaya Mongar was pronounced dead.

Another provocative detail: A former employee testified “that white patients often did not have to wait in the same dirty rooms as black and Asian clients. Instead, Gosnell would escort them up the back steps to the only clean office — O’Neill’s — and he would turn on the TV for them. Mrs. Mongar, she said, would have been treated ‘no different from the rest of the Africans and Asians.'”

Said the employee:

Like if a girl — the black population was — African population was big here. So he didn’t mind you medicating your African American girls, your Indian girl, but if you had a white girl from the suburbs, oh, you better not medicate her. You better wait until he go in and talk to her first. And one day I said something to him and he was like, that’s the way of the world. Huh?

And he brushed it off and that was it.

Anesthesia was frequently dispensed by employees who were neither legally permitted nor trained to do it, including a 15-year-old high school student who worked at the clinic, the report states.

Most employees did as they were told, but one objected:

Marcella Stanley Choung, who told us that her “training” for anesthesia consisted of a 15-minute description by Gosnell and reading a chart he had posted in a cabinet. She was so uncomfortable medicating patients, she said, that she “didn’t sleep at night.” She knew that if she made even a small error, “I can kill this lady, and I’m not jail material.” One night in 2002, when she found herself alone with 15 patients, she refused Gosnell’s directives to medicate them. She made an excuse, went to her car, and drove away, never to return. Choung immediately filed a complaint with the Department of State, but the department never acted on it.

The Failure to Stop It
That brings us to a subject you’ve perhaps been wondering about: How on earth did this go on for so long without anyone stopping it? The grand jury delved into that very question in its report. I’m going to excerpt it at length, because it bears directly on the question that will concern us afterward: has this story gotten an appropriate amount of attention from the news media?

Here is the grand jury on oversight failures:

Pennsylvania is not a third-world country. There were several oversight agencies that stumbled upon and should have shut down Kermit Gosnell long ago. But none of them did…

The first line of defense was the Pennsylvania Department of Health. The department’s job is to audit hospitals and outpatient medical facilities, like Gosnell’s, to make sure that they follow the rules and provide safe care. The department had contact with the Women’s Medical Society dating back to 1979, when it first issued approval to open an abortion clinic. It did not conduct another site review until 1989, ten years later. Numerous violations were already apparent, but Gosnell got a pass when he promised to fix them. Site reviews in 1992 and 1993 also noted various violations, but again failed to ensure they were corrected.

But at least the department had been doing something up to that point, however ineffectual. After 1993, even that pro forma effort came to an end. Not because of administrative ennui, although there had been plenty. Instead, the Pennsylvania Department of Health abruptly decided, for political reasons, to stop inspecting abortion clinics at all… The only exception to this live-and-let-die policy was supposed to be for complaints dumped directly on the department’s doorstep. Those, at least, would be investigated. Except that there were complaints about Gosnell, repeatedly. Several different attorneys, representing women injured by Gosnell, contacted the department. A doctor from Children’s Hospital of Philadelphia hand-delivered a complaint, advising the department that numerous patients he had referred for abortions came back from Gosnell with the same venereal disease. The medical examiner of Delaware County informed the department that Gosnell had performed an illegal abortion on a 14-year-old girl carrying a 30-week-old baby. And the department received official notice that a woman named Karnamaya Mongar had died at Gosnell’s hands.

Yet not one of these alarm bells — not even Mrs. Mongar’s death — prompted the department to look at Gosnell or the Women’s Medical Society… But even this total abdication by the Department of Health might not have been fatal. Another agency with authority in the health field, the Pennsylvania Department of State, could have stopped Gosnell single-handedly.

The Department of State, through its Board of Medicine, licenses and oversees individual physicians… Almost a decade ago, a former employee of Gosnell presented the Board of Medicine with a complaint that laid out the whole scope of his operation: the unclean, unsterile conditions; the unlicensed workers; the unsupervised sedation; the underage abortion patients; even the over-prescribing of pain pills with high resale value on the street. The department assigned an investigator, whose investigation consisted primarily of an offsite interview with Gosnell. The investigator never inspected the facility, questioned other employees, or reviewed any records. Department attorneys chose to accept this incomplete investigation, and dismissed the complaint as unconfirmed.

Shortly thereafter the department received an even more disturbing report — about a woman, years before Karnamaya Mongar, who died of sepsis after Gosnell perforated her uterus. The woman was 22 years old. A civil suit against Gosnell was settled for almost a million dollars, and the insurance company forwarded the information to the department. That report should have been all the confirmation needed for the complaint from the former employee that was already in the department’s possession. Instead, the department attorneys dismissed this complaint too… The same thing happened at least twice more: the department received complaints about lawsuits against Gosnell, but dismissed them as meaningless…

Philadelphia health department employees regularly visited the Women’s Medical Society to retrieve blood samples for testing purposes, but never noticed, or more likely never bothered to report, that anything was amiss. Another employee inspected the clinic in response to a complaint that dead fetuses were being stored in paper bags in the employees’ lunch refrigerator. The inspection confirmed numerous violations… But no follow-up was ever done… A health department representative also came to the clinic as part of a citywide vaccination program. She promptly discovered that Gosnell was scamming the program; she was the only employee, city or state, who actually tried to do something about the appalling things she saw there. By asking questions and poking around, she was able to file detailed reports identifying many of the most egregious elements of Gosnell’s practice. It should have been enough to stop him. But instead her reports went into a black hole, weeks before Karnamaya Mongar walked into the Women’s Medical Society.

…And it wasn’t just government agencies that did nothing. The Hospital of the University of Pennsylvania and its subsidiary, Penn Presbyterian Medical Center, are in the same neighborhood as Gosnell’s office. State law requires hospitals to report complications from abortions. A decade ago, a Gosnell patient died at HUP after a botched abortion, and the hospital apparently filed the necessary report. But the victims kept coming in. At least three other Gosnell patients were brought to Penn facilities for emergency surgery; emergency room personnel said they have treated many others as well. And at least one additional woman was hospitalized there after Gosnell had begun a flagrantly illegal abortion of a 29-week-old fetus. Yet, other than the one initial report, Penn could find not a single case in which it complied with its legal duty to alert authorities to the danger. Not even when a second woman turned up virtually dead…

So too with the National Abortion Federation.

NAF is an association of abortion providers that upholds the strictest health and legal standards for its members. Gosnell, bizarrely, applied for admission shortly after Karnamaya Mongar’s death. Despite his various efforts to fool her, the evaluator from NAF readily noted that records were not properly kept, that risks were not explained, that patients were not monitored, that equipment was not available, that anesthesia was misused. It was the worst abortion clinic she had ever inspected. Of course, she rejected Gosnell’s application. She just never told anyone in authority about all the horrible, dangerous things she had seen.

The conclusion drawn at the end of the section is provocative. “Bureaucratic inertia is not exactly news. We understand that,” it states. “But we think this was something more. We think the reason no one acted is because the women in question were poor and of color, because the victims were infants without identities, and because the subject was the political football of abortion.”

A Front-Page Story
Says Kirsten Powers in her USA Today op-ed, “Let me state the obvious. This should be front page news. When Rush Limbaugh attacked Sandra Fluke, there was non-stop media hysteria. The venerable NBC Nightly News’ Brian Williams intoned, ‘A firestorm of outrage from women after a crude tirade from Rush Limbaugh,’ as he teased a segment on the brouhaha. Yet, accusations of babies having their heads severed — a major human rights story if there ever was one — doesn’t make the cut.”

Inducing live births and subsequently severing the heads of the babies is indeed a horrific story that merits significant attention. Strange as it seems to say it, however, that understates the case.

For this isn’t solely a story about babies having their heads severed, though it is that. It is also a story about a place where, according to the grand jury, women were sent to give birth into toilets; where a doctor casually spread gonorrhea and chlamydia to unsuspecting women through the reuse of cheap, disposable instruments; an office where a 15-year-old administered anesthesia; an office where former workers admit to playing games when giving patients powerful narcotics; an office where white women were attended to by a doctor and black women were pawned off on clueless, untrained staffers. Any single one of those things would itself make for a blockbuster news story. Is it even conceivable that an optometrist who attended to his white patients in a clean office while an intern took care of the black patients in a filthy room wouldn’t make national headlines?

But it isn’t even solely the story of a rogue clinic that’s awful in all sorts of sensational ways. Multiple local and state agencies are implicated in an oversight failure of epic proportions. If I were a city editor at any Philadelphia newspaper, the grand jury report would suggest a dozen major investigative projects I could undertake if I had the staff to support them. And I probably wouldn’t have the staff. But there is so much fodder for additional reporting.

There is, finally, the fact that abortion, one of the most hotly contested, polarizing debates in the country, is at the center of this case. It arguably informs the abortion debate in any number of ways, and has numerous plausible implications for abortion policy, including the oversight and regulation of clinics, the appropriateness of late-term abortions, the penalties for failing to report abuses, the statute of limitations for killings like those with which Gosnell is charged, whether staff should be legally culpable for the bad behavior of doctors under whom they work…

There’s just no end to it.

To sum up, this story has numerous elements, any one of which would normally make it a major story. And setting aside conventions, which are flawed, this ought to be a big story on the merits.

The news value is undeniable.

Why isn’t it being covered more? I’ve got my theories. But rather than offer them at the end of an already lengthy item, I’d like to survey some of the editors and writers making coverage decisions.

Cracker Barrel’s Version of American History

One blog I follow is Hankering for History (http://www.hankeringforhistory.com/), which recently included a post about an article from The Atlantic. The article, “Cracker Barrel’s Oddly Authentic Version of American History,” is an informative piece about Cracker Barrel, the institution of the general store, and the importance of Cracker Barrel’s acquisition of antiques. When you stop in your local Cracker Barrel, it is impossible to miss the large collection of apparent knickknacks. To my disbelief, however, these knickknacks are authentic antiques. Here is an excerpt from the article; I suggest reading it in its entirety.

[Image: Cracker Barrel antiques]

The antiques, according to [Cracker Barrel], are real ones. They come from across the U.S. to the Cracker Barrel Decor Warehouse in Lebanon, Tennessee. The company has a mock restaurant that it uses to plan the decor of every single location; designers arrange the elements for each new store in a way that looks right, make a plan (with photographs) for where the objects should go, and send it off with those objects to the new location.

The New York Times reported in 2002 that the restaurants’ demand for old objects had grown so much that American antique dealers were struggling to source them.

So maybe next time you are in a Cracker Barrel, take the opportunity to look around and check out the antiques that adorn the restaurant’s walls.

Read more: http://www.hankeringforhistory.com/