Guardian: Hanging Gardens of…

Babylon’s hanging garden: ancient scripts give clue to missing wonder

A British academic has gathered evidence suggesting the garden was created at Nineveh, 300 miles from Babylon

Stephanie Dalley pieced together ancient texts to reveal a garden that recreated a mountain landscape. Photograph: Bettmann/Corbis

The whereabouts of one of the seven wonders of the ancient world – the fabled Hanging Garden of Babylon – has been one of the great mysteries from antiquity. The inability of archaeologists to find traces of it among Babylon’s ancient remains led some even to doubt its existence.

Now a British academic has amassed a wealth of textual evidence to show that the garden was instead created at Nineveh, 300 miles from Babylon, in the early 7th century BC.

After 18 years of study, Stephanie Dalley of Oxford University has concluded that the garden was built by the Assyrians in the north of Mesopotamia – in modern Iraq – rather than by their great enemies the Babylonians in the south.

She believes her research shows that the feat of engineering and artistry was achieved by the Assyrian king, Sennacherib, rather than the Babylonian king, Nebuchadnezzar.

The evidence presented by Dalley, an expert in ancient Middle Eastern languages, emerged from deciphering Babylonian and Assyrian cuneiform scripts and reinterpreting later Greek and Roman texts. They included a 7th-century BC Assyrian inscription that, she discovered, had been mistranslated in the 1920s, reducing passages to “absolute nonsense”.

She was astonished to find Sennacherib’s own description of an “unrivalled palace” and a “wonder for all peoples”. He describes the marvel of a water-raising screw made using a new method of casting bronze – and predating the invention of Archimedes’ screw by some four centuries.

Dalley said this was part of a complex system of canals, dams and aqueducts to bring mountain water from streams 50 miles away to the citadel of Nineveh and the hanging garden. The script records water being drawn up “all day”.

Location of the 'Hanging Gardens'

Recent excavations have found traces of aqueducts. One near Nineveh was so vast that Dalley said its remains looked like a stretch of motorway from the air, and it bore a crucial inscription: “Sennacherib king of the world … Over a great distance I had a watercourse directed to the environs of Nineveh …”

Having first broached her theory in 1992, Dalley is now presenting a mass of evidence in a book, The Mystery of the Hanging Garden of Babylon, which Oxford University Press publishes on 23 May. She expects to divide academic opinion, but the evidence convinces her that Sennacherib’s garden fulfils the criteria for a wonder of the world – “magnificent in conception, spectacular in engineering, and brilliant in artistry”.

Dalley said: “That the Hanging Garden was built in Babylon by Nebuchadnezzar the Great is a fact learned at school and … ‘verified’ in encyclopaedias … To challenge such a universally accepted truth might seem the height of arrogance, revisionist scholarship … But Assyriology is a relatively recent discipline … Facts that once seemed secure become redundant.”

Sennacherib’s palace, with steps of semi-precious stone and an entrance guarded by colossal copper lions, was magnificent. Dalley pieced together ancient texts to reveal a garden that recreated a mountain landscape. It boasted terraces, pillared walkways, exotic plants and trees, and rippling streams.

The seven wonders appear in classical texts written centuries after the garden was created, but the 1st-century historian Josephus was the only author to name Nebuchadnezzar as creator of the Hanging Garden, Dalley said. She found extensive confusion over names and places in ancient texts, including the Book of Judith, muddling the two kings.

Little of Nineveh – near present-day Mosul – has so far been explored, because it has been judged too dangerous until now to conduct excavations.

New Testament Documents – Authorship and Dates

Expository Files 20.5 – May 2013

New Testament Documents – Date & Authorship — By Steve Wolfgang

“In my opinion, every book of the New Testament was written by a baptized Jew between the forties and the eighties of the first century (very probably sometime between about A.D. 50 and 75).” – William F. Albright, Johns Hopkins University (1963, p.4)

Almost a half-century ago, when I first began to think seriously about various controversies over the dating and authorship of New Testament documents, one of the first things I encountered was this then-newly-minted comment by one of the world’s leading archaeologists, William F. Albright. While that comment was made a few years before his death in an interview in the evangelical magazine, Christianity Today, it was by no means a spur-of-the moment interjection common in interviews. Albright had previously written, in light of archaeological discoveries (his area of scholarly expertise), that “[t]hanks to the Qumran discoveries [the Dead Sea Scrolls], the New Testament proves to be in fact what it was formerly believed to be: the teaching of Christ and his immediate followers between circa 25 and circa 80 A.D” (Albright, 1957, p. 23).

What I have learned since encountering Albright’s comment has only caused me to see more clearly why this accomplished archaeologist said what he did. Interestingly, Albright’s assessment is not unique among unlikely sources of such assessments. Possibly the most unlikely source is the staunch atheist and eugenics advocate H.G. Wells (unfortunately much more widely known and read than Albright), who also acknowledged that the four gospels “were certainly in existence a few decades after [Christ’s] death” (498). Unless one reads the documents through the lens of a priori assumptions, the evidence supports the conclusion that the historical accounts, letters, biographies, and other genres found in the New Testament were written by eyewitnesses and other persons living in that historical period with access to written sources and to persons knowledgeable about the events described. The New Testament is not the stuff of mythology or fiction, as the early and wide accessibility of the documents attests.

Background: Various Theories and Proposals

Obviously, the dates and time frames for the authorship of the various documents are significant issues in an apologetic argument for Christianity. Confidence in the historical accuracy of these documents depends partly on whether they were written by eyewitnesses and contemporaries of the events described, as many New Testament texts claim. Some critical scholars have attempted to strengthen their contentions by separating the actual events from the writings by as much time as possible. For this reason, radical scholars (for example, the “Jesus Seminar”) argue for late first-century or even second-century dates for the original manuscripts. Invoking these dates barely opens the door to argue that the New Testament documents, especially the Gospels, are “mythological” and that the writers created the events contained in them rather than simply reporting them. As the Oxford historian A.N. Sherwin-White has demonstrated, using documents from antiquity even less well attested and with much wider composition-to-earliest-copy spans than the New Testament documents, “even two generations are too short a span to allow the mythical tendency to prevail over the hard historic core of the oral tradition” (Sherwin-White, 190).

In the 19th century, Ferdinand Christian Baur (1792–1860), founder of the “Tübingen School” of theology, maintained that the majority of the New Testament documents were pseudonymous works and gave little weight to the evidence of numerous citations provided by the early Christian writers (commonly known as “church fathers”). Proposing that the New Testament documents were written within a frame of perhaps 120 years, his suggested dates ranged from ca. 50–60AD for Paul’s genuine letters (i.e., Romans, 1–2 Corinthians, Galatians) to about 170AD for the Gospel and letters of John. Baur’s proposal remained influential for later attempts to date and identify authorship of the New Testament documents (Harris, 237, 248–62; Ellis, Appendix VI).

More recent dating proposals have reflected the impact, among both “liberal” and “conservative” scholars, of various lines of evidence which indicate earlier dates for the New Testament documents. For example, the notorious “death-of-God” proponent John A.T. Robinson (1976) contended that all 27 documents were composed prior to 70AD. He proposed a compositional span of approximately 20 years: from about 47–48AD (Galatians) to late 68–70AD (Revelation; Redating, 352). He mainly based his argument on the fact that the New Testament documents do not reference the fall of Jerusalem (70AD; Redating, 13–30).

More recently, influential Roman Catholic scholar Raymond E. Brown (1997) proposed a date range for the New Testament documents that spanned approximately 80 years: from 50–51AD for 1 Thessalonians, to 130AD for 2 Peter – although favoring a first-century date for almost all documents other than 2 Peter and 2 John (Introduction, 396, 457, 762).

Evangelical scholar E. Earle Ellis (1995), reflecting views accepted and espoused by many “conservative” Biblical writers, has proposed that the New Testament documents were the result of four streams of apostolic sources: Peter, James, John, and Paul. He dated all the New Testament documents within the first century: 49AD (for Galatians) to 85–95AD (Gospel of John), with the majority of the documents dated to the 50s and 60s (Making, 319), and considered 70AD the key upper limit for dating the majority of the New Testament documents.

Outer Limits – Manuscript Evidence and Quotations in Early Christian Writers

The speculative efforts of various negative critical scholars to “late-date” various New Testament documents are confronted by some “stubborn facts.” For example, every New Testament book is quoted by the “Apostolic Fathers” (as the early Christian writers down to 150AD are commonly known). Almost every book of the New Testament is explicitly cited as Scripture by these early writers. By around 300, nearly every verse in the New Testament had been cited in one or more of over 36,000 citations found in the writings of the Church Fathers (Geisler and Nix 108, 155). The distribution of those writings is important evidence because of their early date, the wide geographic spread of the authors and their recipients, and the large number of New Testament references they contain. Evidence from these early Christian writers is explored in greater detail in other articles in this series, providing external evidence that from the beginning, churches and Christians recognized the authority of the apostolic writings, which were soon disseminated and widely known.

Given the amount and early dates of these extensive quotations of the New Testament documents, it is impossible to argue seriously for the sort of “late-dating” and alleged pseudonymous composition of the documents composing the corpus of the New Testament. This stream of evidence is, of course, in addition to the various manuscript copies in Greek (to say nothing of early translations) of the New Testament documents.

Among these is the John Rylands papyrus (P52), the earliest undisputed manuscript of a New Testament book, dated from 117 to 138AD. This fragment of John’s Gospel survives from within a generation of composition. Furthermore, inasmuch as the book was composed in Asia Minor while this fragment was found in Egypt, some circulation time is demanded, which surely places the composition of John within the first century. Entire books (Bodmer Papyri) are available from about 200AD. The Chester Beatty Papyri, from 150 years after the New Testament was finished (ca. 250), include all the Gospels and most of the New Testament. It is beyond dispute that no other book from the ancient world has as small a time span between composition and earliest manuscript copies as does the New Testament.

Indeed, as has often been noted by many who have spent their lives pondering ancient evidence pertaining to the Scripture, “No work from Graeco-Roman antiquity is so well attested by manuscript tradition as the New Testament. There are many more manuscripts of the New Testament than there are of any classical author, and the oldest extensive remains of it date only about two centuries after their original composition” (Albright 1971, 238). Those who would question the integrity of the New Testament texts by the same token destroy confidence in the integrity of any ancient document which has been handed down through the copying process.

Specific Instances and Particulars

While it is not possible in this short article to include a detailed explication of the date and authorship of every New Testament book, some samples will have to suffice for the present. As these articles are expanded and collected for publication in book form, more details may be added to what originally appears here.

Luke and Acts. The Gospel of Luke was written by the same author as the Acts of the Apostles, which refers to the Gospel as the “former account” of “all that Jesus began to do and teach” (Acts 1:1). The style, vocabulary, and recipient (Theophilus) of the two books betray a common author. The Roman historian Colin Hemer has provided powerful evidence that Acts was written between 60AD and 62AD. This evidence includes these observations: There is no mention in Acts of the crucial event of the fall of Jerusalem in 70, or of the outbreak of the Jewish War in 66, and no hint of serious deterioration of relations between Romans and Jews before that time, nor of the deterioration of Christian relations with Rome during the Neronian persecution of the late 60s. There is no mention of the death of James at the hands of the Sanhedrin in ca. 62, as recorded by Josephus in Antiquities of the Jews (20.9.1.200). Controversies described in Acts presume that the Temple was still standing, and the relative prominence and authority of the Sadducees in Acts reflects a pre-70 date, before the collapse of their political cooperation with Rome. Likewise, the prominence of “God-fearers” in the synagogues may point to a pre-70 date, after which there were few Gentile inquirers and converts to Judaism. Additionally, the confident “tone” of Acts seems unlikely during the Neronian persecution of Christians and the Jewish War with Rome during the late 60s.

If Acts was written in 62 or before, and the gospel according to Luke was written before Acts (possibly 60AD or even earlier), then Luke was written only about thirty years after the death of Jesus. This is obviously contemporary with the generation that witnessed the events of Jesus’ life, death, and resurrection – which is precisely what Luke claims in the prologue to his Gospel:

Many have undertaken to draw up an account of the things that have been fulfilled among us, just as they were handed down to us by those who from the first were eye-witnesses and servants of the word. Therefore, since I myself have carefully investigated everything from the beginning, it seemed good also to me to write an orderly account for you, most excellent Theophilus, so that you may know the certainty of the things you have been taught. [Luke 1:1–4]

While only Luke of the four gospels has been considered so far (because of its common authorship with Acts), it is commonly accepted as factual by the preponderance of those who have examined the evidence in detail – whether “conservative” or “liberal” scholars – that other gospels, particularly Mark, were committed to writing even earlier. In all such deliberations, it is good to remember that, for believers, there is in reality one gospel, recalled and recorded by four different evangelists (“according to Matthew” etc.) as each was empowered by the Spirit to remember and reveal what God wishes for us to know, expressed as the Spirit moved them to do so.

First Corinthians. It is widely accepted by “critical” and “conservative” scholars alike that 1 Corinthians was written by 55 or 56 – less than a quarter century after the crucifixion. Further, Paul speaks of more than 500 eyewitnesses to the resurrection, most of whom were still alive when he wrote (15:6) – as well as the apostles and James the brother of Jesus. Internal evidence is strong for this early date: the book repeatedly claims to be written by Paul (1:1, 12–17; 3:4, 6, 22; 16:21); there are significant parallels with the book of Acts; and the contents harmonize with what has been learned about Corinth during that era.

There also is external evidence: Clement of Rome refers to it in his own Epistle to the Corinthians (chap. 47), as does The Epistle of Barnabas (allusion, chapter 4) and the Shepherd of Hermas (chapter 4). Furthermore, there are nearly 600 quotations of 1 Corinthians in Irenaeus, Clement of Alexandria, and Tertullian alone. It is one of the best attested books of any kind from the ancient world.

2 Corinthians and Galatians, along with 1 Corinthians, are well attested and early. All three reveal a historical interest in the events of Jesus’ life and give facts that agree with the Gospels. Paul speaks of Jesus’ virgin birth (Gal. 4:4), sinless life (2 Cor. 5:21), death on the cross (1 Cor. 15:3; Gal. 3:13), resurrection on the third day (1 Cor. 15:4), and post-resurrection appearances (1 Cor. 15:5–8). He mentions the hundreds of eyewitnesses who could verify the resurrection (1 Cor. 15:6), grounding the truth of Christianity on the historicity of the resurrection (1 Cor. 15:12–19). Paul also gives historical details about Jesus’ contemporaries, the apostles (1 Cor. 15:5–8), including his private encounters with Peter and the apostles (Gal. 1:18–2:14). Persons, places, and events relating to Christ’s birth are described as historical: Luke goes to great pains to note that Jesus was born during the days of Caesar Augustus (Luke 2:1) and was baptized in the fifteenth year of Tiberius, when Pontius Pilate was governor of Judea, Herod was tetrarch of Galilee, and Annas and Caiaphas were high priests (Luke 3:1–2).

New Testament authors write with a clear sense of historical perspective (see Gal 4:4; Heb 1:1–2). They wrote against the historical backdrop of a Mediterranean world immersed in Greco-Roman culture and ruled by Rome and Roman officials known from non-Biblical sources (though those sources are significantly less well attested than the New Testament documents). While the authors of the New Testament documents do include important figures, places, and events, they do not demonstrate an interest in precise chronological detail. As a result, many of their references to historical realities were more incidental in nature. And, as is common in historical writing, they use various sources, make various choices about what evidence to incorporate or omit, and arrange their evidence to tell the story they wish to record. That is what historians do, after all.

Antilegomena: Disputed Documents

The basic principle of whether a document was recognized as legitimately belonging to the New Covenant scriptures was its apostolic “pedigree” – was it of apostolic (or prophetic) origin, and thus revelation from God? Because of questions about their authorship or apostolic origin, seven documents (Hebrews, James, 2 Peter, 2–3 John, Jude, and Revelation) were sometimes challenged by various early Christians. Sometimes referred to as antilegomena (“spoken against”), the very challenges these documents faced demonstrate all the more strongly that the ultimate test for whether they were recognized as divine revelation was: are they apostolic? Since Hebrews and 2–3 John are without authorial attribution, it is quite understandable that some might at first question their apostolic origin. Given the early martyrdom of James, the brother of John, understandable questions arose regarding the authorship of the epistle of James. The Apocalypse (Revelation) came under question later due to its wide usage by numerous heretics.

Even 2 Peter, the most contested of the New Testament epistles, provides a benchmark of sorts for the standards necessary for a document to be recognized as the word of God. 2 Peter was questioned due to stylistic and vocabulary differences (it has the largest number of hapax legomena, or unique words, of any New Testament document) as well as parallels with the epistle of Jude. But as E.M.B. Green points out, arguing on the basis of Westcott’s work, 2 Peter “has incomparably better support for its inclusion than the best attested of the rejected books” (p. 5). Kostenberger and Kruger (73, 153–155) challenge modern authors who, pursuing their own agendas, unfairly group early and later documents together as though both are of equal legitimacy.

Kruger (645), among other conservative scholars, challenges the common notion that 2 Peter is non-apostolic, contending that “the case for its pseudonymity is simply too incomplete and insufficient to warrant the dogmatic conclusions issued by much of modern scholarship. Although 2 Peter has various difficulties that are still being explored, we have no reason to doubt the epistle’s own claims in regard to authorship.” A good discussion of many of these disputations is in Harrison (416-428).

Conclusion

Jesus Christ himself is obviously the center and circumference of the New Testament documents which record his life and works. The gospels present themselves to readers as calm and rational expositors of historical facts. Nearly all we know about Jesus comes from these source materials, written by those who had personal knowledge of the events they describe or who drew on sources with such firsthand, eyewitness knowledge. They record the claims of Jesus, but also indicate that he intended for this knowledge to be disseminated not by himself, but rather by men he selected and approved to carry his message to the world (John 16:13-14, 20:21-23; Matthew 10:20, 16:19, 18:18; Luke 22:30). That these appointed messengers did so effectively is attested by the widespread documentation, within a generation or two of the events themselves, of that proclamation, written down for succeeding generations to read and receive with confidence in its accuracy and veracity.

Sources:

Albright, William F. From the Stone Age to Christianity. 2nd ed. New York: Anchor Books, 1957.

__________. The Archaeology of Palestine. Reprint. Gloucester, MA: Peter Smith, 1971.

__________. “Toward a More Conservative View.” Christianity Today, January 18, 1963, p.4.

Brown, Raymond E. An Introduction to the New Testament. New York: Doubleday, 1997.

Bruce, F.F., J.I. Packer, Philip Comfort, and Carl F.H. Henry, eds. The Origin of the Bible. Carol Stream, IL: Tyndale House, 2003.

Carson, D.A., and Douglas Moo. An Introduction to the New Testament. 2nd ed. Grand Rapids, MI: Zondervan, 2005.

Ellis, E. Earle. The Making of the New Testament Documents. Leiden: Brill, 1999.

Geisler, Norman L., and William E. Nix. From God To Us, Revised and Expanded: How We Got Our Bible. 2nd ed. Chicago: Moody Press, 2012.

Green, E.M.B. 2 Peter Reconsidered. London: Tyndale, 1961.

Harris, Horton. The Tübingen School: A Historical and Theological Investigation of the School of F.C. Baur. Oxford: Clarendon Press, 1975.

Harrison, Everett F. Introduction to the New Testament. Grand Rapids, MI: Zondervan, 1971.

Hemer, Colin J. The Book of Acts in the Setting of Hellenistic History. Ed. Conrad H. Gempf. Winona Lake, IN: Eisenbrauns, 1990.

Kostenberger, Andreas J., and Michael J. Kruger. The Heresy of Orthodoxy. Wheaton, IL: Crossway, 2010.

Kruger, Michael J. “The Authenticity of 2 Peter,” Journal of the Evangelical Theological Society 42.4 (1999), 645-671.

Longenecker, Richard N. “On the Form, Function, and Authority of the New Testament Letters.” In D.A. Carson and John D. Woodbridge, Scripture and Truth. Grand Rapids, MI: Zondervan, 1983, 101-114.

Robinson, John A.T. Redating the New Testament. Philadelphia: Westminster, 1976.

Sherwin-White, Adrian Nicholas. Roman Society and Roman Law in the New Testament. Oxford: Oxford University Press, 1963.

Wells, H.G. The Outline of History. Garden City, NY: Garden City Publishing, 1921.

Recent Addendum to “Innocent Man” – Texas Monthly

Judge: Prosecutor in Morton Case Deliberately Concealed Evidence

ARREST WARRANT IS ISSUED FOR FORMER WILLIAMSON COUNTY DISTRICT ATTORNEY KEN ANDERSON, THE MAN WHO PROSECUTED MICHAEL MORTON AND HELPED PUT HIM IN PRISON FOR NEARLY 25 YEARS FOR A CRIME HE DIDN’T COMMIT.
by PAMELA COLLOFF  —  FRI APRIL 19, 2013 2:15 PM
Surrounded by his five attorneys, Judge Ken Anderson leaves the court of inquiry after Judge Louis Sturns issued a warrant for his arrest on April 19.
Bob Daemmrich

This afternoon, Michael Morton received a long-awaited measure of justice when the inquiry into alleged misconduct in the 1987 trial that resulted in his wrongful conviction ended with a stinging rebuke to the man who prosecuted him. State district Judge Louis Sturns, who presided over the court of inquiry, ruled that Ken Anderson—the former D.A. of Williamson County who prosecuted Michael—should face criminal charges for his conduct. Though Anderson has denied any wrongdoing, Sturns found that Anderson lied when he assured the trial judge that he had no evidence in his possession that was favorable to the accused. This was a deliberate “concealment of evidence,” Sturns said, which was “intended to defraud the court” and win a conviction. Sturns stated his belief that Anderson committed a felony by doing so.

In the end, Sturns found that Anderson committed criminal contempt of court, and he issued a warrant for his arrest. Anderson—a sitting district judge—left the courtroom with his lawyers, walking past the courtroom where he currently presides, to be booked into the Williamson County jail. (He will be released this afternoon on $2,500 bond.) The case will now be referred to a grand jury, which can issue an indictment. Anderson, who faces reelection next year, could face jail time if he is found guilty.

As long as 26 years ago, Michael’s lead trial attorney, Bill Allison, suspected that Anderson had not turned over all of the investigators’ reports in the case to the defense. As I wrote in the second half of my two-part series on the case, Anderson and Allison had repeatedly battled over this issue:

During two pretrial hearings, the lawyers had clashed over what evidence the state should, or should not, have to turn over. As Allison remembered it, state district judge William Lott had ordered Anderson to provide him with all of Wood’s reports and notes before the trial so he could determine whether they contained any “Brady material.” (The term refers to the landmark 1963 U.S. Supreme Court ruling in Brady v. Maryland, which holds that prosecutors are required to turn over any evidence that is favorable to the accused. Failure to do so is considered to be a “Brady violation,” or a breach of a defendant’s constitutional right to due process.)

Judge Lott had examined everything Anderson had given him and ruled that no Brady material was present. Afterward, as is the protocol in such a situation, the judge had placed the papers in a sealed file that could be opened only by the appellate courts to review at a later date. Thinking back on that series of events, Allison had a terrible thought: What if Anderson had not, in fact, given Lott all of [Sgt. Don] Wood’s reports and notes?

Allison’s suspicion was based on two peculiarities of the trial. First, Anderson had not called Wood to the stand to testify. This was highly unusual, given that Wood was the case’s lead investigator. Allison had also overheard a conversation at the end of the trial, which I described in the first half of my Morton story, when he lingered in the courtroom after the verdict was read:

Both he and prosecutor Mike Davis, who had assisted Anderson during the trial, stayed behind to ask the jurors about their views of the case. It was during their discussions in the jury room that Allison says he overheard Davis make an astonishing statement, telling several jurors that if Michael’s attorneys had been able to obtain Wood’s reports, they could have raised more doubt than they did. (Davis has said under oath that he has no recollection of making such a statement.) What, Allison wondered, was in Wood’s reports?

In fact, many details in Wood’s reports supported the idea that Christine Morton had been killed by an unknown intruder. The most significant document that the defense never saw was an eight-page transcript of a phone call that had taken place between Wood and Christine’s mother, Rita Kirkpatrick, less than two weeks after Christine’s murder. During this phone conversation, Kirkpatrick told Wood that the Mortons’ three-year-old son, Eric—who was home at the time of the murder—had reported seeing a “monster” kill his mother. He also said that his father had not been home when the crime occurred. Many of the details that Eric gave to his grandmother—such as the fact that the perpetrator threw a blue suitcase on the bed after he killed Christine—dovetailed perfectly with the crime scene.

That a former D.A.—much less a sitting district judge—will be held to account for alleged prosecutorial misconduct is extraordinarily unusual, if not unprecedented. Now, based on Judge Sturns’s ruling, Anderson is being held accountable for the decisions he made more than a quarter-century ago which sent Michael to prison for a crime he did not commit.

Read This. You will want to read Part 2.

The Innocent Man, Part One

ON AUGUST 13, 1986, MICHAEL MORTON CAME HOME FROM WORK TO DISCOVER THAT HIS WIFE HAD BEEN BRUTALLY MURDERED IN THEIR BED. HIS NIGHTMARE HAD ONLY BEGUN.
Photograph by Williamson County Sun

Editors’ note: This is part one of a two-part story. The second half can be found here.

Lincoln: Dark Lord of the Sith?

Defending Lincoln against Misguided Libertarians

By  — May 2, 2013

I’m usually pretty open to the kind of work that the libertarian Mises Institute puts out (and I strongly recommend the pdfs of classic books by famous Austrian economists that they make available for free). But yesterday, I stumbled across an article featured on their homepage that struck me as truly outrageous and draws attention to the dangerous contradictions within radical libertarianism.

Professor Thomas DiLorenzo, who will apparently be offering an online Mises Academy course titled “Lincoln: Founding Father of the American Leviathan State” later this spring, argues that Lincoln misunderstood the Declaration of Independence and is largely responsible for our obsession with revolutions on behalf of equality. DiLorenzo favorably cites an essay by Mel Bradford, who blames Lincoln for America’s more interventionist foreign policy after 1861 and takes issue with Lincoln’s “rhetoric for continuing revolution.”

As DiLorenzo explains:

Professor Bradford was referring to the way in which Lincoln used the “all men are created equal” phrase from the Declaration and reinterpreted it to have meant that it was somehow the duty of Americans to stamp out all sin in the world, wherever it may be found, so that ALL MEN EVERYWHERE could supposedly share in equal freedom. Hence the “rhetoric of continuing revolution” is a rhetorical recipe for perpetual war for perpetual “freedom” everywhere in the world. It was cemented into place as the new cornerstone of American policy thanks to the deification of Lincoln after his death, which in turn led to the virtual deification of the presidency, and of government in general. The modern-day rhetoric of “American exceptionalism” is just the latest expression of Lincoln’s rhetoric of continuing revolution.

No, using the language of the Declaration does not necessitate stamping out all sin in the world. Indeed, using politics on behalf of a perfectionist worldview is deeply contrary to the conservative interpretation of human nature. We, like many moderate libertarians and classical liberals, believe that mankind is fallen and imperfect and that political mechanisms should be put in place to restrict leaders whose natural impulse is to amass more power. We pursue government by and for the people because the history of man shows that ordered liberty tends to be the best model for good governance and morals. The Civil War wasn’t waged to eradicate all sin. Just one specific sin that was utterly contrary to the Declaration and Constitution—namely, slavery.

Now, on one hand, I can see where DiLorenzo is coming from when he criticizes American neoconservatives who have manipulated the language of Lincoln on behalf of “spreading democracy” across the globe or some other Wilsonian vision that gets us into military trouble. On the other hand, that’s hardly Lincoln’s fault.

I would make the case that the rise of American military intervention after the Civil War was the product of American industrialization and, if anything, Progressive rhetoric by later presidents like Woodrow Wilson and Teddy Roosevelt. Blaming Lincoln for these later historical developments is rather preposterous, since, unlike subsequent progressive leaders, Lincoln took America’s founding documents quite seriously.

The Gettysburg Address is not, as DiLorenzo characterizes it, a rallying cry for ongoing revolution. Anyone who has studied an ounce of 19th-century American history knows how deeply conflicted Lincoln was about the Civil War. Lincoln rightly justified the War not because he secretly harbored a revolutionary zeal but because America needed to return to the ideals of the Founding and “to the unfinished work which they who fought here have thus far so nobly advanced.”

Lincoln was not appropriating the Declaration for his own purposes. On the contrary, he was reminding his fellow Americans of the principles on which the United States was founded and arguing that slavery was incompatible with those guiding principles. Both the Declaration and the Gettysburg Address were aimed at the same audience: Americans who care about their shared and unique history, respect ordered liberty, and wish to uphold those values. Lincoln wasn’t using these values to argue for foreign military interventionism, since, as perhaps DiLorenzo needs reminding, the Civil War was a “war between the states.” The Gettysburg Address was not some generic speech that could be delivered anywhere in the world to inspire conflict. Rather, it is a deeply and distinctly American document that reminds us of our historical obligations. And as Edmund Burke teaches us, a regard for the wisdom of history and an effort to preserve the teachings of our ancestors is exactly the opposite of an irreverent relish for revolution.

DiLorenzo goes on to misalign himself with one of my favorite Southern authors, Robert Penn Warren. (For my previous post on Warren, see here.) It’s true that Warren had something of an antipathy for self-righteous Northerners, but, in my reading of Warren, he was worried that non-Southerners after the Civil War were estranged from history. He was no apologist for slavery. Rather, Warren feared that without a visceral understanding of the War’s tragedy, which Southerners maintained, Americans would become overly utopian in their political aims. Misreading the capacity of human sin and history’s hold on human communities, Northeasterners like Ralph Waldo Emerson and his post-War descendants could float into a “total abstraction, in the pure blinding light of total isolation.” American idealists attempt to overcome human pain yet end up completely abstracted from real human life and its demands. As a result, Emerson has no practical relevance to the physical, historical reality of life on the ground, where Americans are forced to confront their own lust, dreams, sins, and family past. Warren opens a nuanced conversation over whether or not American exceptionalism is valid and what it means for our relationship with history—but that’s an entirely different conversation from the anti-Lincoln debate DiLorenzo tries to inspire.

DiLorenzo cheekily refers to the Civil War as the “War to Prevent Southern Independence” because, like Murray Rothbard, he is suspicious of any and all uses of government force that impede liberty. War, says DiLorenzo, “is invariably waged over some hidden economic agenda for the benefit of the politically-connected class.”

But do you know what else was a racket to benefit a politically-connected class? Slavery. Do you know what else was an impediment to liberty and the spirit of the Declaration? Again, slavery. Ignoring this historical reality in the name of libertarian purity is precisely the kind of abstract idealism that Robert Penn Warren abhorred.

Scientific American – 20th Anniversary of Free www

The World Wide Web Became Free 20 Years Ago Today

By Mark Fischetti | April 30, 2013

You and I can access billions of Web pages, post blogs, write code for our own killer apps—in short, do anything we want on the Web—all for free! And we’ve enjoyed free rein because 20 years ago today, Web inventor Tim Berners-Lee and his employer, the CERN physics lab in Geneva, published a statement that made the nascent “World Wide Web” technology available to every person, company and institution with no royalty or restriction.

Berners-Lee proposed the Web in 1989 and had a working version in Dec 1990. But by 1993 certain user groups were positioning themselves to try to monopolize the Web as a commercial product. Chief among them was the National Center for Supercomputing Applications at the University of Illinois, which had developed a browser called Mosaic that would later become Netscape. So Berners-Lee and CERN decided to release the code for the Web, believing that software development by hundreds of Web enthusiasts at the time, and millions of people in the future, would always stay one step ahead of any company that tried to control the Web or force people to pay to use it. The decision came at a very tense time that could have ruined the Web’s primary goal as a ubiquitous, open communications platform.

You can read the full back-story in a book that Berners-Lee and I wrote in 1999, Weaving the Web. As Tim explains in the book, when early Web enthusiasts gathered at technical meetings in 1993, “I was accosted in the corridors…. I listened carefully to people’s concerns. I also sweated anxiously behind my calm exterior…. On April 30, Robert [Cailliau] and I received a declaration, with a CERN stamp on it, signed by one of the directors, saying that CERN agreed to allow anybody to use the Web protocol and code free of charge, to create a server or browser, to give it away or sell it, without any royalty or other constraint. Whew!”

With that single step, the Web exploded across the universe. Other information systems that used the Internet, such as Gopher and WAIS, soon faded into the Web’s wake. And no company, not even Microsoft, has ever been able to out-develop the masses.

To celebrate the anniversary, CERN has posted the declaration it sent to Berners-Lee. It is also showing off original Web technology, on a page that has a photo of Berners-Lee from the early days and the NeXT computer he programmed the Web on. The site links to the written pitch Berners-Lee made to CERN, simply titled “Information Management: A Proposal,” for internal funding so he could develop a “wide-area hypermedia information retrieval” system (I’ve shown a small image of the cover, left). Another CERN page shows a copy of the world’s first Web site, which was about the WWW project itself. Berners-Lee also wrote a treatise for Scientific American in 2010 explaining why the Web must forever remain free and how to make sure it stays that way.

Image of the original Web proposal, courtesy of CERN

About the Author: Mark Fischetti is a senior editor at Scientific American who covers energy, environment and sustainability issues. Follow on Twitter @markfischetti. The views expressed are those of the author and are not necessarily those of Scientific American.

Battlefields of the Civil War – An awesome interactive map tool

Very cool Civil War map app!

Civil War History (Daniel Sauerwein)

Hat tip to my good friend Dr. Laura Munski, who shared this interesting site, created by ESRI, which produces the software ArcGIS, used for GIS, cartography, and many other applications. They also have a series of sites, called Story Maps, which all look interesting (yes, I am into geography as well as history).

The Story Map on the Civil War is quite interesting, as it highlights battles in chronological order, offers the user the chance to narrow the date range, and animates the battle sites on the base map. One great feature is the linking of the battle sites to the Civil War Trust, which links back to this site. The Civil War Trust is a pretty cool site for learning about the war and battlefield preservation. It also has a page for smartphone apps (if you are able to enjoy that technology).

If you have…

The Revolutionary Effect of the Paperback Book – Smithsonian

The Revolutionary Effect of the Paperback Book

This simple innovation transformed the reading habits of an entire nation

By Clive Thompson

  • Illustration by Alanna Cavanagh
  • Smithsonian magazine, May 2013

30 is the number of trees, in millions, cut down annually to produce books in the U.S. (Alanna Cavanagh)

The iPhone became the world’s best-selling smartphone partly because Steve Jobs was obsessed with the ergonomics of everyday life. If you wanted people to carry a computer, it had to hit the “sweet spot” where it was big enough to display “detailed, legible graphics, but small enough to fit comfortably in the hand and pocket.”

Seventy-five years ago, another American innovator had the same epiphany: Robert Fair de Graff realized he could change the way people read by making books radically smaller. Back then, it was surprisingly hard for ordinary Americans to get good novels and nonfiction. The country only had about 500 bookstores, all clustered in the biggest 12 cities, and hardcovers cost $2.50 (about $40 in today’s currency).

De Graff revolutionized that market when he got backing from Simon & Schuster to launch Pocket Books in May 1939. A petite 4 by 6 inches and priced at a mere 25 cents, the Pocket Book changed everything about who could read and where. Suddenly people read all the time, much as we now peek at e-mail and Twitter on our phones. And by working with the often gangster-riddled magazine-distribution industry, De Graff sold books where they had never been available before—grocery stores, drugstores and airport terminals. Within two years he’d sold 17 million.

“They literally couldn’t keep up with demand,” says historian Kenneth C. Davis, who documented De Graff’s triumph in his book Two-Bit Culture. “They tapped into a huge reservoir of Americans who nobody realized wanted to read.”

Other publishers rushed into the business. And, like all forms of new media, pocket-size books panicked the elites. Sure, some books were quality literature, but the biggest sellers were mysteries, westerns, thinly veiled smut—a potential “flood of trash” that threatened to “debase farther the popular taste,” as the social critic Harvey Swados worried. But the tumult also gave birth to new and distinctly American literary genres, from Mickey Spillane’s gritty detective stories to Ray Bradbury’s cerebral science fiction.

The financial success of the paperback became its cultural downfall. Media conglomerates bought the upstart pocket-book firms and began hiking prices and chasing after quick-money best-sellers, including jokey fare like 101 Uses for a Dead Cat. And while paperbacks remain commonplace, they’re no longer dizzyingly cheaper than hardcovers.

Instead, there’s a new reading format that’s shifting the terrain. Mini-tablets and e-readers not only fit in your pocket; they allow your entire library to fit in your pocket. And, as with De Graff’s invention, e-readers are producing new forms, prices and publishers.

The upshot, says Mike Shatzkin—CEO of the Idea Logical Company, a consultancy for publishers—is that “more reading is taking place,” as we tuck it into ever more stray moments. But he also worries that as e-book consumers shift more to multifunctional tablets, reading might take a back seat to other portable entertainment: more “Angry Birds,” less Jennifer Egan. Still, whatever the outcome, the true revolution in portable publishing began not with e-books but with De Graff, whose paperback made reading into an activity that travels everywhere.

Read more: http://www.smithsonianmag.com/arts-culture/The-Revolutionary-Effect-of-the-Paperback-Book-204113211.html#ixzz2Rwkcwfvm

The 20th Century’s Greatest 19th-century Statesman – Robert D. Kaplan in The Atlantic

Whatever one may think of Kissinger, anyone who pretends to an understanding of global geo-political developments should ponder this article from The Atlantic, May 2013.

In Defense of Henry Kissinger

HE WAS THE 20TH CENTURY’S GREATEST 19TH-CENTURY STATESMAN.

By Robert D. Kaplan

—  APRIL 24, 2013, 9:58 PM ET

In the summer of 2002, during the initial buildup to the invasion of Iraq, which he supported, Henry Kissinger told me he was nevertheless concerned about the lack of critical thinking and planning for the occupation of a Middle Eastern country where, as he put it, “normal politics have not been practiced for decades, and where new power struggles would therefore have to be very violent.” Thus is pessimism morally superior to misplaced optimism.

I have been a close friend of Henry Kissinger’s for some time, but my relationship with him as a historical figure began decades ago. When I was growing up, the received wisdom painted him as the ogre of Vietnam. Later, as I experienced firsthand the stubborn realities of the developing world, and came to understand the task that a liberal polity like the United States faced in protecting its interests, Kissinger took his place among the other political philosophers whose books I consulted to make sense of it all. In the 1980s, when I was traveling through Central Europe and the Balkans, I encountered A World Restored, Kissinger’s first book, published in 1957, about the diplomatic aftermath of the Napoleonic Wars. In that book, he laid out the significance of Austria as a “polyglot Empire [that] could never be part of a structure legitimized by nationalism,” and he offered a telling truth about Greece, where I had been living for most of the decade: whatever attraction the war for Greek independence had held for the literati of the 1820s, it was not born of “a revolution of middle-class origin to achieve political liberty,” he cautioned, “but a national movement with a religious basis.”

When policy makers disparage Kissinger in private, they tend to do so in a manner that reveals how much they measure themselves against him. The former secretary of state turns 90 this month. To mark his legacy, we need to begin in the 19th century.

In August of 1822, Britain’s radical intelligentsia openly rejoiced upon hearing the news of Robert Stewart’s suicide. Lord Byron, the Romantic poet and heroic adventurer, described Stewart, better known as Viscount Castlereagh, as a “cold-blooded, … placid miscreant.” Castlereagh, the British foreign secretary from 1812 to 1822, had helped organize the military coalition that defeated Napoleon and afterward helped negotiate a peace settlement that kept Europe free of large-scale violence for decades. But because the settlement restored the Bourbon dynasty in France, while providing the forces of Liberalism little reward for their efforts, Castlereagh’s accomplishment lacked any idealistic element, without which the radicals could not be mollified. Of course, this very lack of idealism, by safeguarding the aristocratic order, provided various sovereigns with the only point on which they could unite against Napoleon and establish a continent-wide peace—a peace, it should be noted, that helped Britain emerge as the dominant world power before the close of the 19th century.

One person who did not rejoice at Castlereagh’s death was Henry John Temple, the future British foreign secretary, better known as Lord Palmerston. “There could not have been a greater loss to the Government,” Palmerston declared, “and few greater to the country.” Palmerston himself would soon join the battle against the U.K.’s radical intellectuals, who in the early 1820s demanded that Britain go to war to help democracy take root in Spain, even though no vital British interest had been threatened—and even though this same intellectual class had at times shown only limited enthusiasm for the war against Napoleon, during which Britain’s very survival seemed at stake.

In a career spanning more than two decades in the Foreign Office, Palmerston was fated on occasion to be just as hated as Castlereagh. Like Castlereagh, Palmerston had only one immutable principle in foreign policy: British self-interest, synonymous with the preservation of the worldwide balance of power. But Palmerston also had clear liberal instincts. Because Britain’s was a constitutional government, he knew that the country’s self-interest lay in promoting constitutional governments abroad. He showed sympathy for the 1848 revolutions on the Continent, and consequently was beloved by the liberals. Still, Palmerston understood that his liberal internationalism, if one could call it that, was only a general principle—a principle that, given the variety of situations around the world, required constant bending. Thus, Palmerston encouraged liberalism in Germany in the 1830s but thwarted it there in the 1840s. He supported constitutionalism in Portugal, but opposed it in Serbia and Mexico. He supported any tribal chieftain who extended British India’s sphere of influence northwest into Afghanistan, toward Russia, and opposed any who extended Russia’s sphere of influence southeast, toward India—even as he cooperated with Russia in Persia.

Realizing that many people—and radicals in particular—tended to confuse foreign policy with their own private theology, Palmerston may have considered the moral condemnation that greeted him in some quarters as natural. (John Bright, the Liberal statesman, would later describe Palmerston’s tenure as “one long crime.”)

Yet without his flexible approach to the world, Palmerston could never have navigated the shoals of one foreign-policy crisis after another, helping Britain—despite the catastrophe of the Indian Mutiny in 1857—manage the transition from its ad hoc imperialism of the first half of the 19th century to the formal, steam-driven empire built on science and trade of the second half.

Decades passed before Palmerston’s accomplishments as arguably Britain’s greatest diplomat became fully apparent. In his own day, Palmerston labored hard to preserve the status quo, even as he sincerely desired a better world. “He wanted to prevent any power from becoming so strong that it might threaten Britain,” one of his biographers, Jasper Ridley, wrote. “To prevent the outbreak of major wars in which Britain might be involved and weakened,” Palmerston’s foreign policy “was therefore a series of tactical improvisations, which he carried out with great skill.”

Like Palmerston, Henry Kissinger believes that in difficult, uncertain times—times like the 1960s and ’70s in America, when the nation’s vulnerabilities appeared to outweigh its opportunities—the preservation of the status quo should constitute the highest morality. Other, luckier political leaders might later discover opportunities to encourage liberalism where before there had been none. The trick is to maintain one’s power undiminished until that moment.

Ensuring a nation’s survival sometimes leaves tragically little room for private morality. Discovering the inapplicability of Judeo-Christian morality in certain circumstances involving affairs of state can be searing. The rare individuals who have recognized the necessity of violating such morality, acted accordingly, and taken responsibility for their actions are among the most necessary leaders for their countries, even as they have caused great unease among generations of well-meaning intellectuals who, free of the burden of real-world bureaucratic responsibility, make choices in the abstract and treat morality as an inflexible absolute.

Fernando Pessoa, the early-20th-century Portuguese poet and existentialist writer, observed that if the strategist “thought of the darkness he cast on a thousand homes and the pain he caused in three thousand hearts,” he would be “unable to act,” and then there would be no one to save civilization from its enemies. Because many artists and intellectuals cannot accept this horrible but necessary truth, their work, Pessoa said, “serves as an outlet for the sensitivity [that] action had to leave behind.” That is ultimately why Henry Kissinger is despised in some quarters, much as Castlereagh and Palmerston were.

To be uncomfortable with Kissinger is, as Palmerston might say, only natural. But to condemn him outright verges on sanctimony, if not delusion. Kissinger has, in fact, been quite moral—provided, of course, that you accept the Cold War assumptions of the age in which he operated.

Because of the triumphalist manner in which the Cold War suddenly and unexpectedly ended, many have since viewed the West’s victory as a foregone conclusion, and therefore have tended to see the tough measures that Kissinger and others occasionally took as unwarranted. But for those in the midst of fighting the Cold War—who worked in the national-security apparatus during the long, dreary decades when nuclear confrontation seemed abundantly possible—its end was hardly foreseeable.

People forget what Eastern Europe was like during the Cold War, especially prior to the 1980s: the combination of secret-police terror and regime-induced poverty gave the impression of a vast, dimly lit prison yard. What kept that prison yard from expanding was mainly the projection of American power, in the form of military divisions armed with nuclear weapons. That such weapons were never used did not mean they were unnecessary. Quite the opposite, in fact: the men who planned Armageddon, far from being the Dr. Strangeloves satirized by Hollywood, were precisely the people who kept the peace.

Many Baby Boomers, who lived through the Cold War but who have no personal memory of World War II, artificially separate these two conflicts. But for Kissinger, a Holocaust refugee and U.S. Army intelligence officer in occupied Germany; for General Creighton Abrams, a tank commander under George Patton in World War II and the commander of American forces in Vietnam from 1968 onward; and for General Maxwell Taylor, who parachuted into Nazi-occupied France and was later the U.S. ambassador to South Vietnam, the Cold War was a continuation of the Second World War.

Beyond Eastern Europe, revolutionary nihilists were attempting to make more Cubas in Latin America, while a Communist regime in China killed at least 20 million of its own citizens through the collectivization program known as the Great Leap Forward. Meanwhile, the North Vietnamese Communists—as ruthless a group of people as the 20th century produced—murdered perhaps tens of thousands of their own citizens before the first American troops arrived in Vietnam. People forget that it was, in part, an idealistic sense of mission that helped draw us into that conflict—the same well of idealism that helped us fight World War II and that motivated our interventions in the Balkans in the 1990s. Those who fervently supported intervention in Rwanda and the former Yugoslavia yet fail to comprehend the similar logic that led us into Vietnam are bereft of historical memory.

In Vietnam, America’s idealism collided head-on with the military limitations imposed by a difficult geography. This destroyed the political consensus in the United States about how the Cold War should be waged. Reviewing Kissinger’s book Ending the Vietnam War (2003), the historian and journalist Evan Thomas implied that the essence of Kissinger’s tragedy was that he was perennially trying to gain membership in a club that no longer existed. That club was “the Establishment,” a term that began to go out of fashion during the nation’s Vietnam trauma. The Establishment comprised all the great and prestigious personages of business and foreign policy—all male, all Protestant, men like John J. McCloy and Charles Bohlen—whose influence and pragmatism bridged the gap between the Republican and Democratic Parties at a time when Communism was the enemy, just as Fascism had recently been. Kissinger, a Jew who had escaped the Holocaust, was perhaps the club’s most brilliant protégé. His fate was to step into the vortex of foreign policy just as the Establishment was breaking up over how to extricate the country from a war that the Establishment itself had helped lead the country into.

Kissinger became President Richard Nixon’s national-security adviser in January of 1969, and his secretary of state in 1973. As a Harvard professor and “Rockefeller Republican,” Kissinger was distrusted by the anti-intellectual Republican right wing. (Meanwhile, the Democratic Party was slipping into the de facto quasi-isolationism that would soon be associated with George McGovern’s “Come Home, America” slogan.) Nixon and Kissinger inherited from President Lyndon Johnson a situation in which almost 550,000 American troops, as well as their South Vietnamese allies (at least 1 million soldiers all told), were fighting a similar number of North Vietnamese troops and guerrillas. On the home front, demonstrators—drawn in large part from the nation’s economic and educational elite—were demanding that the United States withdraw all its troops virtually immediately.

Some prominent American protesters even visited North Vietnam to publicly express solidarity with the enemy. The Communists, in turn, seduced foreign supporters with soothing assurances of Hanoi’s willingness to compromise. When Charles de Gaulle was negotiating a withdrawal of French troops from Algeria in the late 1950s and early 1960s (as Kissinger records in Ending the Vietnam War), the Algerians knew that if they did not strike a deal with him, his replacement would certainly be more hard-line. But the North Vietnamese probably figured the opposite—that because of the rise of McGovernism in the Democratic Party, Nixon and Kissinger were all that stood in the way of American surrender. Thus, Nixon and Kissinger’s negotiating position was infinitely more difficult than de Gaulle’s had been.

Kissinger found himself caught between liberals who essentially wanted to capitulate rather than negotiate, and conservatives ambivalent about the war who believed that serious negotiations with China and the Soviet Union were tantamount to selling out. Both positions were fantasies that only those out of power could indulge.

Further complicating Kissinger’s problem was the paramount assumption of the age—that the Cold War would have no end, and therefore regimes like those in China and the Soviet Union would have to be dealt with indefinitely. Hitler, a fiery revolutionary, had expended himself after 12 bloody years. But Mao Zedong and Leonid Brezhnev oversaw dull, plodding machines of repression that were in power for decades—a quarter century in Mao’s case, and more than half a century in Brezhnev’s. Neither regime showed any sign of collapse. Treating Communist China and the Soviet Union as legitimate states, even while Kissinger played China off against the Soviet Union and negotiated nuclear-arms agreements with the latter, did not constitute a sellout, as some conservatives alleged. It was, rather, a recognition of America’s “eternal and perpetual interests,” to quote Palmerston, refitted to an age threatened by thermonuclear war.

In the face of liberal capitulation, a conservative flight from reality, and North Vietnam’s relentlessness, Kissinger’s task was to withdraw from the region in a way that did not betray America’s South Vietnamese allies. In doing so, he sought to preserve America’s powerful reputation, which was crucial for dealing with China and the Soviet Union, as well as the nations of the Middle East and Latin America. Sir Michael Howard, the eminent British war historian, notes that the balance-of-power ethos to which Kissinger subscribes represents the middle ground between “optimistic American ecumenicism” (the basis for many global-disarmament movements) and the “war culture” of the American Wild West (in recent times associated with President George W. Bush). This ethos was never cynical or amoral, as the post–Cold War generation has tended to assert. Rather, it evinced a timeless and enlightened principle of statesmanship.

Kissinger confers with President Lyndon Johnson not long after being appointed to Richard Nixon’s national-security team. December 5, 1968. (Associated Press)

Within two years, Nixon and Kissinger reduced the number of American troops in Vietnam to 156,800; the last ground-combat forces left three and a half years after Nixon took office. It had taken Charles de Gaulle longer than that to end France’s involvement in Algeria. (Frustration over the failure to withdraw even more quickly rests on two difficult assumptions: that the impossibility of preserving South Vietnam in any form was accepted in 1969, and that the North Vietnamese had always been negotiating in good faith. Still, the continuation of the war past 1969 will forever be Nixon’s and Kissinger’s original sin.)

That successful troop withdrawal was facilitated by a bombing incursion into Cambodia—primarily into areas replete with North Vietnamese military redoubts and small civilian populations, over which the Cambodian government had little control. The bombing, called “secret” by the media, was public knowledge during 90 percent of the time it was carried out, wrote Samuel Huntington, the late Harvard professor who served on President Jimmy Carter’s National Security Council. The early secrecy, he noted, was to avoid embarrassing Cambodia’s Prince Norodom Sihanouk and complicating peace talks with the North Vietnamese.

The troop withdrawals were also facilitated by aerial bombardments of North Vietnam. Victor Davis Hanson, the neoconservative historian, writes that, “far from being ineffective and indiscriminate,” as many critics of the Nixon-Kissinger war effort later claimed, the Christmas bombings of December 1972 in particular “brought the communists back to the peace table through its destruction of just a few key installations.” Hanson may be a neoconservative, but his view is hardly a radical reinterpretation of history; in fact, he is simply reading the news accounts of the era. Soon after the Christmas bombings, Malcolm W. Browne of The New York Times found the damage to have been “grossly overstated by North Vietnamese propaganda.” Peter Ward, a reporter for The Baltimore Sun, wrote, “Evidence on the ground disproves charges of indiscriminate bombing. Several bomb loads obviously went astray into civilian residential areas, but damage there is minor, compared to the total destruction of selected targets.”

The ritualistic vehemence with which many have condemned the bombings of North Vietnam, the incursion into Cambodia, and other events betrays, in certain cases, an ignorance of the facts and of the context that informed America’s difficult decisions during Vietnam.

The troop withdrawals that Nixon and Kissinger engineered, while faster than de Gaulle’s had been from Algeria, were gradual enough to prevent complete American humiliation. This preservation of America’s global standing enabled the president and the secretary of state to manage a historic reconciliation with China, which helped provide the requisite leverage for a landmark strategic arms pact with the Soviet Union—even as, in 1970, Nixon and Kissinger’s threats to Moscow helped stop Syrian tanks from crossing farther into Jordan and toppling King Hussein. At a time when defeatism reigned, Kissinger improvised in a way that would have impressed Palmerston.

Yes, Kissinger’s record is marked by nasty tactical miscalculations—mistakes that have spawned whole libraries of books. But the notion that the Nixon administration might have withdrawn more than 500,000 American troops from Vietnam within a few months in 1969 is problematic, especially when one considers the complexities that smaller and more gradual withdrawals in Bosnia, Iraq, and Afghanistan later imposed on military planners. (And that’s leaving aside the diplomatic and strategic fallout beyond Southeast Asia that America’s sudden and complete betrayal of a longtime ally would have generated.)

Despite the North Vietnamese invasion of eastern Cambodia in 1970, the U.S. Congress substantially cut aid between 1971 and 1974 to the Lon Nol regime, which had replaced Prince Sihanouk’s, and also barred the U.S. Air Force from helping Lon Nol fight against the Khmer Rouge. Future historians will consider those actions more instrumental in the 1975 Khmer Rouge takeover of Cambodia than Nixon’s bombing of sparsely populated regions of Cambodia six years earlier.

When Saigon fell to the Communists, in April of 1975, it was after a heavily Democratic Congress drastically cut aid to the South Vietnamese. The regime might not have survived even if Congress had not cut aid so severely. But that cutoff, one should recall, was not merely a statement about South Vietnam’s hopelessness; it was a consequence of Watergate, in which Nixon eviscerated his own influence in the capital, and seriously undermined Gerald Ford’s incoming administration. Kissinger’s own words in Ending the Vietnam War deserve to echo through the ages:

None of us could imagine that a collapse of presidential authority would follow the expected sweeping electoral victory [of Nixon in 1972]. We were convinced that we were working on an agreement that could be sustained by our South Vietnamese allies with American help against an all-out invasion. Protesters could speak of Vietnam in terms of the excesses of an aberrant society, but when my colleagues and I thought of Vietnam, it was in terms of dedicated men and women—soldiers and Foreign Service officers—who had struggled and suffered there and of our Vietnamese associates now condemned to face an uncertain but surely painful fate. These Americans had honestly believed that they were defending the cause of freedom against a brutal enemy in treacherous jungles and distant rice paddies. Vilified by the media, assailed in Congress, and ridiculed by the protest movement, they had sustained America’s idealistic tradition, risking their lives and expending their youth on a struggle that American leadership groups had initiated, then abandoned, and finally disdained.

Kissinger’s diplomatic achievements reached far beyond Southeast Asia. Between 1973 and 1975, Kissinger, serving Nixon and then Gerald Ford, steered the Yom Kippur War toward a stalemate that was convenient for American interests, and then brokered agreements between Israel and its Arab adversaries for a separation of forces. Those deals allowed Washington to reestablish diplomatic relations with Egypt and Syria for the first time since their rupture following the Six-Day War in 1967. The agreements also established the context for the Egyptian-Israeli peace treaty of 1979, and helped stabilize a modus vivendi between Israel and Syria that has lasted well past the turn of the 21st century.

In the fall of 1973, with Chile dissolving into chaos and open to the Soviet bloc’s infiltration as a result of Salvador Allende’s anarchic and incompetent rule, Nixon and Kissinger encouraged a military coup led by General Augusto Pinochet, during which thousands of innocent people were killed. Their cold moral logic was that a right-wing regime of any kind would ultimately be better for Chile and for Latin America than a leftist regime of any kind—and would also be in the best interests of the United States. They were right—though at a perhaps intolerable cost.

While much of the rest of Latin America dithered with socialist experiments, in the first seven years of Pinochet’s regime, the number of state companies in Chile went from 500 to 25—a shift that helped lead to the creation of more than 1 million jobs and the reduction of the poverty rate from roughly one-third of the population to as low as one-tenth. The infant mortality rate also shrank, from 78 deaths per 1,000 births to 18. The Chilean social and economic miracle has become a paradigm throughout the developing world, and in the ex-Communist world in particular. Still, no amount of economic and social gain justifies almost two decades of systematic torture perpetrated against tens of thousands of victims in more than 1,000 detention centers.

But real history is not the trumpeting of ugly facts untempered by historical and philosophical context—the stuff of much investigative journalism. Real history is built on constant comparison with other epochs and other parts of the world. It is particularly useful, therefore, to compare the records of the Ford and Carter administrations in the Horn of Africa, and especially in Ethiopia—a country that in the 1970s was more than three times as populous as Pinochet’s Chile.

In his later years, Kissinger has not been able to travel to a number of countries where legal threats regarding his actions in the 1970s in Latin America hang over his head. Yet in those same countries, Jimmy Carter is regarded almost as a saint. Let’s consider how Carter’s morality stacks up against Kissinger’s in the case of Ethiopia, which, like Angola, Nicaragua, and Afghanistan, was among the dominoes that became increasingly unstable and then fell in the months and years following Saigon’s collapse, partly disproving another myth of the Vietnam antiwar protest movement—that the domino theory was wrong.

As I’ve written elsewhere, including in my 1988 book, Surrender or Starve, the left-leaning Ethiopian Dergue and its ascetic, pitiless new leader, Mengistu Haile Mariam, had risen to power while the U.S. was preoccupied with Watergate and the fall of South Vietnam. Kissinger, now President Ford’s secretary of state, tried to retain influence in Ethiopia by continuing to provide some military assistance to Addis Ababa. Had the United States given up all its leverage in Ethiopia, the country might have moved to the next stage and become a Soviet satellite, with disastrous human-rights consequences for its entire population.

Ford and Kissinger were replaced in January of 1977 by Jimmy Carter and his secretary of state, Cyrus Vance, who wanted a policy that was both more attuned to and less heavy-handed toward sub-Saharan Africa. In the Horn of Africa, this translated immediately into a Cold War disadvantage for America, because the Soviets—spurred on by the fall of South Vietnam—were becoming more belligerent, and more willing to expend resources, than ever.

With Ethiopia torn apart by revolutionary turmoil, the Soviets used their Somali clients as a lever against Addis Ababa. Somalia then was a country of only 3 million nomads, but Ethiopia had an urbanized population 10 times that size: excellent provender for the mechanized African satellite that became Leonid Brezhnev’s supreme objective. The Soviets, while threatening Ethiopia by supplying its rival with weapons, were also offering it military aid—the classic carrot-and-stick strategy. Yet partly because of the M-60 tanks and F-5 warplanes that Mengistu was still—largely thanks to Kissinger—receiving from the United States, the Ethiopian leader was hesitant about undertaking the disruptive task of switching munitions suppliers for an entire army.

In the spring of 1977, Carter cut off arms deliveries to Ethiopia because of its human-rights record. The Soviets dispatched East German security police to Addis Ababa to help Mengistu consolidate his regime, and invited the Ethiopian ruler to Moscow for a week-long state visit. Then Cuban advisers visited Ethiopia, even while tanks and other equipment arrived from pro-Soviet South Yemen. In the following months, with the help of the East Germans, the Dergue gunned down hundreds of Ethiopian teenagers in the streets in what came to be known as the “Red Terror.”

Still, all was not lost—at least not yet. The Ethiopian Revolution, leftist as it was, showed relatively few overt signs of anti-Americanism. Israel’s new prime minister, Menachem Begin, in an attempt to save Ethiopian Jews, beseeched Carter not to close the door completely on Ethiopia and to give Mengistu some military assistance against the Somali advance.

But Begin’s plea went unheeded. The partial result of Carter’s inaction was that Ethiopia went from being yet another left-leaning regime to a full-fledged Marxist state, in which hundreds of thousands of people died in collectivization and “villagization” schemes—to say nothing of the hundreds of thousands who died in famines that were as much a consequence of made-in-Moscow agricultural policies as they were of drought.

Ethiopians should have been so lucky as to have had a Pinochet.

The link between Carter’s decision not to play Kissingerian power politics in the Horn of Africa and the mass deaths that followed in Ethiopia is more direct than the link between Nixon’s incursion into a rural area of Cambodia and the Khmer Rouge takeover six years later.

In the late 19th century, Lord Palmerston was still a controversial figure. By the 20th, he was considered by many to have been one of Britain’s greatest foreign ministers. Kissinger’s reputation will follow a similar path. Of all the memoirs written by former American secretaries of state and national-security advisers during the past few decades, his are certainly the most vast and the most intellectually stimulating, revealing the elaborate historical and philosophical milieu that surrounds difficult foreign-policy decisions. Kissinger will have the final say precisely because he writes so much better for a general audience than do most of his critics. Mere exposé often has a shorter shelf life than the work of a statesman aware of his own tragic circumstances and able to connect them to a larger pattern of events. A colleague of mine with experience in government once noted that, as a European-style realist, Kissinger has thought more about morality and ethics than most self-styled moralists. Realism is about the ultimate moral ambition in foreign policy: the avoidance of war through a favorable balance of power.

Aside from the successful interventions in the Balkans, the greatest humanitarian gesture in my own lifetime was President Richard Nixon’s trip to the People’s Republic of China in 1972, engineered by Kissinger. By dropping the notion that Taiwan was the real China, by giving China protection against the Soviet Union, and by providing assurances against an economically resurgent Japan, the two men helped place China in a position to devote itself to peaceful economic development; China’s economic rise, facilitated by Deng Xiaoping, would lift much of Asia out of poverty. And as more than 1 billion people in the Far East saw a dramatic improvement in living standards, personal freedom effloresced.

Pundits chastised Kissinger for saying, in 1973, that Jewish emigration from the Soviet Union was “not an American concern.” But as J. J. Goldberg of The Jewish Daily Forward was careful to note (even while being very critical of Kissinger’s cynicism on the subject), “Emigration rose dramatically under Kissinger’s detente policy”—but “plummeted” after the 1974 passage of the Jackson-Vanik amendment, which made an open emigration policy a precondition for normal U.S.-Soviet trade relations; aggrieved that the Americans would presume to dictate their emigration policies, the Soviets began authorizing fewer exit visas. In other words, Kissinger’s realism was more effective than the humanitarianism of Jewish groups in addressing a human-rights concern.

Kissinger is a Jewish intellectual who recognizes a singular unappealing truth: that the Republican Party, its strains of anti-Semitism in certain periods notwithstanding, was better able to protect America than the Democratic Party of his era, because the Republicans better understood and, in fact, relished the projection of American power at a juncture in the Cold War when the Democrats were undermined by defeatism and quasi-isolationism. (That Kissinger-style realism is now more popular in Barack Obama’s White House than among the GOP indicates how far today’s Republicans have drifted from their core values.)

But unlike his fellow Republicans of the Cold War era—dull and practical men of business, blissfully unaware of what the prestigious intellectual journals of opinion had to say about them—Kissinger has always been painfully conscious of the degree to which he is loathed. He made life-and-death decisions that affected millions, entailing many messy moral compromises. Had it not been for the tough decisions Nixon, Ford, and Kissinger made, the United States might not have withstood the damage caused by Carter’s bouts of moralistic ineptitude; nor would Ronald Reagan have had the luxury of his successfully executed Wilsonianism. Henry Kissinger’s classical realism—as expressed in both his books and his statecraft—is emotionally unsatisfying but analytically timeless. The degree to which Republicans can recover his sensibility in foreign policy will help determine their own prospects for regaining power.

Human Events – Consistent Analysis of Islam?

WHY AREN’T LIBERALS MORE CRITICAL OF ISLAM?

Benjamin Wiker

4/25/2013 10:09 AM

We are now—like it or not—immersed in a real debate about the nature of Islam. The background of deceased Boston bomber Tamerlan Tsarnaev is forcing us into it. There is no doubt that Tamerlan, the elder of the two brothers who carried out the attack, was transformed by his relatively recent embrace of radical Islam.

And so, we have the very difficult question facing us in regard to Islam: Is the propensity to terror and jihad radical in the deepest sense of the word’s origin in Latin, radix, “root”? Is there something at the root of the Quran itself and the essential history of Islam that all too frequently creates the Tsarnaev brothers, Al-Qaeda, Osama bin Laden, Mohamed Atta, the Muslim Brotherhood, and Hamas, or is there some other source, quite accidental to Islam?

That question must be taken seriously, very seriously.

I am not going to answer that question, but rather pose another: Why do liberals have so much difficulty even allowing that very serious question to be raised?

The answer to this second question is important for the obvious reason that, if liberals won’t allow the first question to be asked, then it surely can’t be answered.

A lot hangs on not answering it, on pretending it is not a legitimate question to raise. If Islam has a significant tendency to breed domestic Islamism—not everywhere, not in every case, but in a significant number of cases—then the current administration’s obsession with, say, Tea Party terror cells is woefully misplaced.

So what is it about liberalism that makes it so difficult for it to take a clear, critical look at Islam, even while liberals have no problem excoriating Christians for every imaginable historical evil?

I believe I can give at least a partial answer, if we take a big step back from the present scene and view the history of Western liberalism on a larger scale.

Liberalism is an essentially secular movement that began within Christian culture. (In Worshipping the State, I trace it all the way back to Machiavelli in the early 1500s.) Note the two italicized aspects: secular and within.

As secular, liberalism understood itself as embracing this world as the highest good, advocating a self-conscious return to ancient pagan this-worldliness. But this embrace took place within a Christianized culture. Consequently liberalism tended to define itself directly against that which it was (in its own particular historical context) rejecting.

Modern liberalism thereby developed with a deep antagonism toward Christianity, rather than religion in general. It was culturally powerful Christianity that stood in the way of liberal secular progress in the West—not Islam, Buddhism, Hinduism, Shintoism, Druidism, etc.

And so, radical Enlightenment thinkers like Voltaire rallied their fellow secular soldiers with what would become the battle cry of the eighteenth-century Enlightenment: écrasez l’infâme, “destroy the infamous thing.” It was a cry directed, not against religion in general, but (as historian Peter Gay rightly notes) “against Christianity itself, against Christian dogma in all its forms, Christian institutions, Christian ethics, and the Christian view of man.”

Liberals therefore tended to approve of anything but Christianity. Deism was fine, or even pantheism. The eminent liberal Rousseau praised Islam and declared Christianity incompatible with good government. Hinduism and Buddhism were exotic and tantalizing among the cutting-edge intelligentsia of the 19th century. Christianity, by contrast, was the religion against which actual liberal progress had to be made.

So, other religions were whitewashed even while Christianity was continually tarred. The tarring was part of the liberal strategy aimed at unseating Christianity from its privileged cultural-legal-moral position in the West. The whitewashing of other religions was part of the strategy too, since elevating them helped deflate the privileged status of Christianity.

And so, for liberalism, nothing could be as bad as Christianity. If something goes wrong, blame Christianity first, along with all of the Western culture built upon it.

This view remains integral to liberalism today, and it affects how liberals treat Islam.

That’s why liberals are disposed to interpret the Crusades as the result of Christian aggression rather than, as they actually were, a response to Islamic aggression. That’s why Christian organizations are regularly maltreated on our liberal college campuses while the needs of Islamic student organizations are graciously met. And the liberal media: ever wonder why you didn’t hear last February about the imam of the Arlington, VA mosque calling for Muslims to wage war against the enemies of Allah? Nor should we wonder why, for liberals, contemporary jihadist movements in Islam must be seen as justified reactions to Western policies—chickens coming home to roost. And that’s why, when a bomb goes off, a liberal must hope that it was perpetrated by some fundamentalist patriotic Christian group.

What liberals do not want to do is take a deep, critical look at Islam. To do so just might call some of their most basic assumptions into question.

Author and speaker Benjamin Wiker, Ph.D., has published eleven books, his newest being Worshipping the State: How Liberalism Became Our State Religion. His website is www.benjaminwiker.com.