
Archive for October, 2013

Book Synopsis: Miller, William Lee. Arguing About Slavery: The Great Battle in the United States Congress. New York: Alfred A. Knopf, 1996.

In America, American History, Arts & Letters, Book Reviews, Books, Historicism, History, Humanities, Law, Nineteenth-Century America, Politics, Scholarship, Slavery, Southern History, The South on October 30, 2013 at 8:45 am


This is the story of America’s struggle to end slavery without destroying the union.  The book deliberately focuses on the rhetoric of white male politicians and thus does not purport to tell the “whole” story, but only that part of the story which is most recoverable and hence most knowable.  Many early nineteenth-century politicians averred that the Northern textile industry, which was roughly as powerful as today’s oil industry, depended on Southern slavery.  An industry with such power and control over the financial interests of the country can, Miller argues, cause social changes to come about more slowly.  When talking about slavery, Miller submits, American politicians of the time had to deal with inherent contradictions in the American tradition: a nation that celebrated equality and the virtues of the “common man” had to come to terms with the fact that African slaves, officially excluded from the citizenry, embodied the “common man” ideal but were not permitted to climb the social and economic ladder.  Most politicians did not believe slavery could end abruptly but assumed it would end gradually as economic dependence turned elsewhere.  Slavery went against all the principles and rhetoric of America’s founding documents, and yet there it was, a thriving and ubiquitous industry.

The book begins in 1835, when Congress deliberated over petitions to abolish slavery in the District of Columbia.  Congress took on these petitions reluctantly, unwilling to address a contentious and divisive issue that would disrupt congressional and governmental harmony.  Congress wished the issue would just go away—but realized that it could not.  During this congressional session, most of the speechmaking came from proslavery Southerners, since Northern politicians were, generally, too afraid to take a stand one way or the other.

Major figures from this session include the following:

President Andrew Jackson

John Fairfield: Congressman from Maine who introduces the petitions to abolish slavery in D.C.

Franklin Pierce: Eventually the fourteenth President, he is, at this time, serving in the U.S. House of Representatives.  He is a Northerner with Southern sympathies.

James Henry Hammond: Congressman from South Carolina who opposed Fairfield and Adams.

John Quincy Adams: A former president (the nation’s sixth), he is, at this time, a U.S. Representative from Massachusetts.

Henry Laurens Pinckney: A Congressman from South Carolina who opposed Fairfield and Adams but who also did not get along with John C. Calhoun.

John C. Calhoun: A U.S. Senator from South Carolina, having resigned from the Vice Presidency.

Martin Van Buren: Eventually a U.S. President (the nation’s eighth), he is, at this time, the Vice President under Andrew Jackson.

James K. Polk: Eventually a U.S. President (the nation’s eleventh), he is, at this time, a member of the U.S. House from the State of Tennessee.

The debates in Congress were fueled by abolitionist literature (written by people like John Greenleaf Whittier, William Lloyd Garrison, and Elizur Wright, Jr.) and oration that maintained not only that slavery was wrong (as people had maintained for decades) but also that its demise was the nation’s highest priority.  Congress could not “sit on its hands” while abolitionists protested and demanded change; it had to respond, albeit reluctantly, to an institution that many congressmen assumed was already doomed.  According to the common logic, the demise of slavery was inevitable, yet slavery persisted; the abolitionists therefore forced Congress to address an institution whose demise, they argued, was not as inevitable as people supposed.

The Senate also faced petitions.  Senator Calhoun became the most colorful and powerful figure opposing them.  Calhoun and his followers often employed “liberal” rhetoric on the Senate floor.  Henry Laurens Pinckney authored the gag rule, an attempt to stop citizens from submitting antislavery petitions.  (Calhoun despised Pinckney so much that he endorsed unionist candidates to take over Pinckney’s Congressional seat.)  The gag rule was adopted by a 117-68 vote, thus suggesting that the nation was more united on the issue of slavery than popular thought maintains.  The gag rule required congressmen to set aside slavery petitions immediately, without so much as printing them.  John Quincy Adams would spend the following years in Congress battling the so-called gag rule.

At this point in the book, Adams becomes the central figure.  Adams, then a distinguished ex-president, was in his 60s and 70s as he fought against the gag order.  He maintained that not only abolitionists but also slaves could petition.  Miller argues that this position shows the extent to which Adams was willing to risk his reputation and what was left of his career in order to stand up to the Southern gag order.  Other congressmen were slow to join Adams in his fight.  During these debates, very little was said of African Americans; the debates focused on the rights and roles of government and ignored the human persons whom that government was supposed to serve and protect.

After Martin Van Buren became president, succeeding Andrew Jackson, he announced that he would veto any bill involving the issue of slavery in D.C. or the slave states.  Nevertheless, the petitions continued to pour in.  Adams himself began submitting petitions.  The gag resolutions had to be passed anew each session, but a standing gag rule adopted in 1840, in essence, made the “gagging” permanent.  Adams led the effort to rescind this rule.  He grew closer and closer to the abolitionists as he precipitated disarray in the House.  He also made several speeches despite threats against his life.  Adams’s opponents tried to get the entire House to censure him, but they failed.  Adams used the censure trials as an occasion to bring slavery to the forefront of Congressional debate.  In 1844, Adams succeeded in having the gag rule abolished.

John William Corrington, A Literary Conservative

In American History, Arts & Letters, Conservatism, Creative Writing, Essays, Fiction, History, Humanities, John William Corrington, Joyce Corrington, Law, Literary Theory & Criticism, Literature, Modernism, Southern History, Southern Literature, Television, Television Writing, The Novel, The South, Western Philosophy, Writing on October 23, 2013 at 8:45 am

 


 

An earlier version of this essay appeared here at Front Porch Republic.

Remember the printed prose is always

half a lie: that fleas plagued patriots,

that greatness is an afterthought

affixed by gracious victors to their kin.

 

—John William Corrington

 

It was the spring of 2009.  I was in a class called Lawyers & Literature.  My professor, Jim Elkins, a short, thin man with long white hair, gained the podium.  Wearing what might be called a suit—with Elkins one never could tell—he recited lines from a novella, Decoration Day.  I had heard of the author, John William Corrington, but only in passing.

“Paneled walnut and thick carpets,” Elkins beamed, gesturing toward the blank white wall behind him, “row after row of uniform tan volumes containing between their buckram covers a serial dumb show of human folly and greed and cruelty.”  The students, uncomfortable, began to look at each other, registering doubt.  In law school, professors didn’t wax poetic.  But this Elkins—he was different.  With swelling confidence, he pressed on: “The Federal Reporter, Federal Supplement, Supreme Court Reports.  Two hundred years of our collective disagreements and wranglings from Jay and Marshall through Taney and Holmes and Black and Frankfurter—the pathetic often ill-conceived attempts to resolve what we have done to one another.”

Elkins paused.  The room went still.  Awkwardly profound, or else profoundly awkward, the silence was like an uninvited guest at a dinner party—intrusive, unexpected, and there, all too there.  No one knew how to respond.  Law students, most of them, can rattle off fact-patterns or black-letter-law whenever they’re called on.  But this?  What were we to do with this?

What I did was find out more about John William Corrington.  Having studied literature for two years in graduate school, I was surprised to hear this name—Corrington—in law school.  I booted up my laptop, right where I was sitting, and, thanks to Google, found a few biographical sketches of this man, who, it turned out, was perplexing, riddled with contradictions: a Southerner from the North, a philosopher in cowboy boots, a conservative literature professor, a lawyer poet.  This introduction to Corrington led to more books, more articles, more research.  Before long, I’d spent over $300 on Amazon.com.  And I’m not done yet.

***

Born in Cleveland, Ohio, on October 28, 1932, Corrington—or Bill, as his friends and family called him—passed as a born-and-bred Southerner all of his life.  As well he might, for he lived most of his life below the Mason-Dixon line, and his parents were from Memphis and had moved north for work during the Depression.  He moved to the South (to Shreveport, Louisiana) at the age of 10, although his academic CV put out that he was, like his parents, born in Memphis, Tennessee.  Raised Catholic, he attended a Jesuit high school in Louisiana but was expelled for “having the wrong attitude.”  The Jesuit influence, however, would remain with him always.  At the beginning of his books, he wrote, “AMDG,” which stands for Ad Majorem Dei Gloriam—“for the greater glory of God.”  “It’s just something that I was taught when I was just learning to write,” he explained in an interview in 1985, “taught by the Jesuits to put at the head of all my papers.”

Bill was, like the late Mark Royden Winchell, a Copperhead at heart, and during his career he authored or edited, or in some cases co-edited, twenty books of varying genres.  He earned a B.A. from Centenary College and an M.A. in Renaissance literature from Rice University, where he met his wife, Joyce, whom he married on February 6, 1960.  In September of that year, he and Joyce moved to Baton Rouge, where Bill became an instructor in the Department of English at Louisiana State University (LSU).  At that time, LSU’s English department was known above all for The Southern Review (TSR), the brainchild of Cleanth Brooks and Robert Penn Warren, but also for such literary luminaries as Robert Heilman, who would become Bill’s friend.

In the early 1960s, Bill pushed for TSR to feature fiction and poetry and not just literary criticism.  He butted heads with then-editors Donald E. Stanford and Lewis P. Simpson, who thought of the journal as scholarly, not creative, as if journals couldn’t be both scholarly and creative.  A year after joining the LSU faculty, Bill published his first book of poetry, Where We Are.  With only 18 poems and a first-edition print run of 225 copies, the book hardly established Bill’s reputation as a Southern man of letters.  But it invested his name with recognition and gave him confidence to complete his first novel, And Wait for the Night (1964).

Bill and Joyce spent the 1963-64 academic year in Sussex, England, where Bill took the D.Phil. from the University of Sussex in 1965.  In the summer of 1966, at a conference at Northwestern State College, Mel Bradford, that Dean of Southern Letters, pulled Bill aside and told him, enthusiastically, that And Wait for the Night (1964) shared some of the themes and approaches of William Faulkner’s The Unvanquished.  Bill agreed.  And happily.

***

Of Bill and Miller Williams, Bill’s colleague at LSU, Jo LeCoeur, poet and literature professor, once submitted, “Both men had run into a Northern bias against what was perceived as the culturally backward South.  While at LSU they fought back against this snub, editing two anthologies of Southern writing and lecturing on ‘The Dominance of Southern Writers.’  Controversial as a refutation of the anti-intellectual Southern stereotype, their joint lecture was so popular [that] the two took it on the road to area colleges.”

In this respect, Bill was something of a latter-day Southern Fugitive—a thinker in the tradition of Donald Davidson, Allen Tate, Andrew Nelson Lytle, and John Crowe Ransom.  Bill, too, took his stand.  And his feelings about the South were strong and passionate, as evidenced by his essay in The Southern Partisan, “Are Southerners Different?” (1984).  Bill’s feelings about the South, however, often seemed mixed.  “[T]he South was an enigma,” Bill wrote to poet Charles Bukowski, “a race of giants, individualists, deists, brainy and gutsy:  Washington, Jefferson, Madison, Jackson (Andy), Davis, Calhoun, Lee, and on and on.  And yet the stain of human slavery on them.”  As the epigraph (above) suggests, Bill was not interested in hagiographic renderings of Southern figures.  He was interested in the complexities of Southern people and experience.  In the end, though, there was no doubt where his allegiances lay.  “You strike me as the most unreconstructed of all the Southern novelists I know anything about,” said one interviewer to Bill.  “I consider that just about the greatest compliment anyone could give,” Bill responded.

While on tour with Williams, Bill declared, “We are told that the Southerner lives in the past.  He does not.  The past lives in him, and there is a difference.”  The Southerner, for Bill, “knows where he came from, and who his fathers were.”  The Southerner “knows still that he came from the soil, and that the soil and its people once had a name.”  The Southerner “knows that is true, and he knows it is a myth.”  And the Southerner “knows the soil belonged to the black hands that turned it as well as it ever could belong to any hand.”  In short, the Southerner knows that his history is tainted but that it retains virtues worth sustaining—that a fraught past is not reducible to sound bites or political abstractions but is vast and contains multitudes.

***

In 1966, Bill and Joyce moved to New Orleans, where the English Department at Loyola University, housed in a grand Victorian mansion on St. Charles Avenue, offered him a chairmanship.  Joyce earned the M.S. in chemistry from LSU that same year.  By this time, Bill had written four additional books of poetry, the last of which, Lines to the South and Other Poems (1965), benefited from Bukowski’s influence.  Bill’s poetry earned a few favorable reviews but not as much attention as his novels—And Wait for the Night (1964), The Upper Hand (1967), and The Bombardier (1970).  Writing in The Massachusetts Review, the poet and critic Josephine Miles approvingly noted two of Bill’s poems from Lines, “Lucifer Means Light” and “Algerien Reveur,” alongside poetry by James Dickey, but her comments were more in passing than in depth.  Dickey himself, it should be noted, admired Bill’s writing, saying, “A more forthright, bold, adventurous writer than John William Corrington would be very hard to find.”

Joyce earned her Ph.D. in chemistry from Tulane in 1968.  Her thesis, which she wrote under the direction of L. C. Cusachs, was titled “Effects of Neighboring Atoms in Molecular Orbital Theory.”  She began teaching chemistry at Xavier University, and her knowledge of the hard sciences brought about engaging conversations, between her and Bill, about the New Physics.  “Even though Bill only passed high school algebra,” Joyce would later say, “his grounding in Platonic idealism made him more capable of understanding the implications of quantum theory than many with more adequate educations.”

By the mid-70s, Bill had become fascinated by Eric Voegelin.  A German historian, philosopher, and émigré who had fled the Third Reich, Voegelin taught in LSU’s history department and lectured for the Hoover Institution at Stanford University, where he was a Salvatori Fellow.  Voegelin’s philosophy, which drew from Friedrich von Hayek and other conservative thinkers, inspired Bill.  In fact, Voegelin made such a lasting impression that, at the time of Bill’s death, Bill was working on an edition of Voegelin’s The Nature of the Law and Related Legal Writings.  (After Bill’s death, two men—Robert Anthony Pascal and James Lee Babin—finished what Bill had begun.  The completed edition appeared in 1991.)

By 1975, the year he earned his law degree from Tulane, Bill had penned three novels, a short story collection, two editions (anthologies), and four books of poetry.  But his writings earned little money.  He also had become increasingly disenchanted with the political correctness on campus:

By 1972, though I’d become chair of an English department and offered a full professorship, I’d had enough of academia. You may remember that in the late sixties and early seventies, the academic world was hysterically attempting to respond to student thugs who, in their wisdom, claimed that serious subjects seriously taught were “irrelevant.” The Ivy League gutted its curriculum, deans and faculty engaged in “teach-ins,” spouting Marxist-Leninist slogans, and sat quietly watching while half-witted draft-dodgers and degenerates of various sorts held them captive in their offices. Oddly enough, even as this was going on, there was a concerted effort to crush the academic freedom of almost anyone whose opinions differed from that of the mob or their college-administrator accessories. It seemed a good time to get out and leave the classroom to idiots who couldn’t learn and didn’t know better, and imbeciles who couldn’t teach and should have known better.

Bill joined the law firm of Plotkin & Bradley, a small personal injury practice in New Orleans, and continued to publish in such journals as The Sewanee Review and The Southern Review, and in such conservative periodicals as The Intercollegiate Review and Modern Age.  His stories took on a legal bent, peopled as they were with judges and attorneys.  But neither law nor legal fiction brought him fame or fortune.

So he turned to screenplays—and, at last, earned the profits he desired.  Viewers of the recent film I Am Legend (2007), starring Will Smith, might be surprised to learn that Bill and Joyce wrote the screenplay for an earlier version, The Omega Man (1971), starring Charlton Heston.  And viewers of Battle for the Planet of the Apes (1973) might be surprised to learn that Bill wrote the film’s screenplay while still a law student.  All told, Bill and Joyce wrote five screenplays and one television movie.  Free from the constraints of university bureaucracy, Bill collaborated with Joyce on various television daytime dramas, including Search for Tomorrow, Another World, Texas, Capitol, One Life to Live, Superior Court, and, most notably, General Hospital.  These ventures gained the favor of Hollywood stars, and Bill and Joyce eventually moved to Malibu.

Bill constantly molded and remolded his image, embracing Southern signifiers while altering their various expressions.  His early photos suggest a pensive, put-together gentleman wearing ties and sport coats and smoking pipes.  Later photos depict a rugged man clad in western wear.  Still later photos conjure up the likes of Roy Orbison, what with Bill’s greased hair, cigarettes, and dark sunglasses.

Whatever his looks, Bill was a stark, provocative, and profoundly sensitive writer.  His impressive oeuvre has yet to receive the critical attention it deserves.  That scholars of conservatism, to say nothing of scholars of Southern literature, have ignored this man is almost inconceivable.  There are no doubt many aspects of Bill’s life and literature left to be discovered.  As Bill’s friend William Mills put it, “I believe there is a critique of modernity throughout [Bill’s] writing that will continue to deserve serious attentiveness and response.”

On Thanksgiving Day, November 24, 1988, Bill suffered a heart attack and died.  He was 56.  His last words, echoing Stonewall Jackson, were, “It’s all right.”

 

Is Hacking the Future of Scholarship?

In Arts & Letters, Communication, Humanities, Information Design, Law, Legal Research & Writing, Scholarship, Writing on October 16, 2013 at 7:45 am


This article appeared here in Pacific Standard.

Most attorneys are familiar with e-discovery, a method for obtaining computer and electronic information during litigation. E-discovery has been around a long time. It has grown more complex and controversial, however, with the rise of new technologies and the growing awareness that just about anything you do online or with your devices can be made available to the public. Emails, search histories, voicemails, instant messages, text messages, call histories, music playlists, private Facebook conversations (not just wall posts)—if relevant to a lawsuit, these and other forms of latent evidence, for better or worse, can be exposed, even if you think they’ve been hidden or discarded.

Anyone who has conducted or been involved with e-discovery realizes how much personal, privileged, and confidential information is stored on our devices. When you “delete” files and documents from your computer, they do not go away. They remain on the hard drive until they are overwritten; they may become difficult to find, but they’re there. Odds are, someone can access them. Even encrypted files can be traced back to the very encryption keys used to create them.
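
To make the recovery point concrete, here is a minimal sketch—my illustration, not the article’s—of the simplest forensic technique, “file carving”: scanning the raw bytes of a disk image for recognizable signatures. It works because deleting a file typically removes only the filesystem’s pointer to the data, not the data itself. The image filename is hypothetical.

```python
# Naive file carving: scan a raw disk image for JPEG signatures.
# Illustrative only; real forensic tools handle fragmentation, validate
# candidates, and work from write-blocked, bit-for-bit copies of the drive.

JPEG_HEADER = b"\xff\xd8\xff"  # the bytes that begin every JPEG file
CHUNK = 1024 * 1024            # read the image one megabyte at a time

def find_signatures(image_path: str, signature: bytes) -> list[int]:
    """Return byte offsets in a raw disk image where `signature` occurs."""
    offsets = []
    overlap = len(signature) - 1  # carry a tail so matches spanning chunks aren't missed
    prev_tail = b""
    pos = 0  # total bytes read so far
    with open(image_path, "rb") as img:
        while True:
            chunk = img.read(CHUNK)
            if not chunk:
                break
            buf = prev_tail + chunk
            start = 0
            while (i := buf.find(signature, start)) != -1:
                # buf[0] sits at absolute offset pos - len(prev_tail)
                offsets.append(pos - len(prev_tail) + i)
                start = i + 1
            prev_tail = buf[-overlap:] if overlap else b""
            pos += len(chunk)
    return offsets

# Hypothetical usage:
# hits = find_signatures("drive.img", JPEG_HEADER)
# print(f"{len(hits)} possible JPEGs, first few at offsets {hits[:5]}")
```

Even this toy scanner will turn up “deleted” photos in unallocated space, which is all a recovery tool really exploits.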

E-discovery has been used to uncover registries and cache data showing that murderers had been planning their crimes, spouses had been cheating, perverts had been downloading illegal images, and employees had been stealing or compromising sensitive company data or destroying intellectual property. Computer forensics were even used to reveal medical documents from Dr. Conrad Murray’s computer during the so-called “Michael Jackson death trial.”

Computer forensics can teach you a lot about a person: the websites he visits, the people he chats with, the rough drafts he abandons, the videos he watches, the advertisements he clicks, the magazines he reads, the news networks he prefers, the places he shops, the profiles he views, the songs he listens to, and so on. It is fair to say that given a laptop hard drive, a forensic expert could nearly piece together an individual’s personality and perhaps come to know more about that person—secret fetishes, guilty pleasures, and criminal activities—than his friends and family do.

In light of this potential access to people’s most private activities, one wonders how long it will be until academics turn to computer forensics for research purposes. This is already being done in scientific and technology fields, which is not surprising because the subject matter is the machine and not the human—but imagine what it would mean for the humanities. If Jefferson had used a computer, perhaps we would know the details of his relationship with Sally Hemings. If we could get ahold of Shakespeare’s iPad, we could learn whether he wrote all those plays by himself. By analyzing da Vinci’s browsing history, we might know which images he studied and which people he interacted with before and during his work on the Mona Lisa—and thus might discover her identity.

There are, of course, government safeguards in place to prevent the abuse of, and unauthorized access to, computer and electronic data: the Wiretap Act, the Pen Registers and Trap and Trace Devices Statute, and the Stored Wire and Electronic Communications Act come to mind. Not just anyone can access everything on another person’s computer, at least not without some form of authorization. But what if researchers could obtain authorization to mine computer and electronic data for the personal and sensitive information of historical figures? What if computer forensics could be used in controlled settings and with the consent of the individual whose electronic data are being analyzed?

Consent, to me, is crucial: It is not controversial to turn up information on a person if he voluntarily authorized you to go snooping, never mind that you might turn up something he did not expect you to find. But under what circumstances could computer forensics be employed on a non-consensual basis? And what sort of integrity does computer or electronic information require and deserve? Is extracting data from a person’s laptop akin to drilling through a precious fresco to search for lost paintings, to excavating tombs for evidence that might challenge the foundations of organized religion and modern civilization, or to exhuming the bodies of dead presidents? Surely not. But why not?

We have been combing through letters by our dead predecessors for some time. Even these, however, were meant for transmission and had, to that end, intended audiences. E-discovery, by contrast, provides access to things never meant to be received, let alone preserved or recorded. It is the tool that comes closest to revealing what an individual actually thinks, not just what he says he thinks, or for that matter, how and why he says he thinks it. Imagine retracing the Internet browsing history of President Obama, Billy Graham, Kenneth Branagh, Martha Nussbaum, Salman Rushdie, Nancy Pelosi, Richard Dawkins, Toni Morrison, Ai Weiwei, or Harold Bloom. Imagine reading the private emails of Bruno Latour, Ron Paul, Pope Francis, Noam Chomsky, Lady Gaga, Roger Scruton, Paul Krugman, Justice Scalia, or Queen Elizabeth II. What would you find out about your favorite novelists, poets, musicians, politicians, theologians, academics, actors, pastors, judges, and playwrights if you could expose what they did when no one else was around, when no audience was anticipated, or when they believed that the details of their activity were limited to their person?

This is another reason why computer and electronic data mining is not like sifting through the notes and letters of a deceased person: having written the notes and letters, a person is aware of their content and can, before death, destroy or revise what might appear unseemly or counter to the legacy he wants to promote. Computer and electronic data, however, contain information that the person probably doesn’t know exists.

More information is good; it helps us to understand our universe and the people in it. The tracking and amassing of computer and electronic data are inevitable; the extent and details of their operation, however, cannot yet be known. We should embrace—although we don’t have to celebrate—the technologies that enable us to produce this wealth of knowledge previously unattainable to scholars, even if they mean, in the end, that our heroes, idols, and mentors are demystified, their flaws and prejudices and conceits brought to light.

The question is, when will we have crossed the line? How much snooping goes too far and breaches standards of decency and respect? It is one thing for a person to leave behind a will that says, in essence, “Here’s my computer. Do what you want with it. Find anything you can and tell your narrative however you wish.” It is quite another thing for a person never to consent to such a search and then to pass away and have his computer scanned for revealing or incriminating data.

It’s hard to say what crosses the line because it’s hard to know where the line should be drawn. As Justice Potter Stewart said of hard-core pornography, “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it.” Once scholars begin—and the day is coming—hacking devices to find out more about influential people, the courts and the academic community will be faced with privacy decisions to make. We will have to ask if computer and electronic data are substantially similar to private correspondence such as letters, to balance the need for information with the desire for privacy, to define what information is “private” or “public,” and to honor the requests of those savvy enough to anticipate the consequences of this coming age of research.

Amid this ambiguity, one thing will be certain: Soon we can all join with Princess Margaret in proclaiming, “I have as much privacy as a goldfish in a bowl.” That is good and bad news.

The Law Review Model as a Check against Bias?

In Academia, Arts & Letters, Essays, Humanities, Law, Scholarship, Writing on October 9, 2013 at 7:45 am


A version of this essay appeared in Academic Questions.

Could peer-reviewed humanities journals benefit by having student editors, as is the practice for law reviews? Are student editors valuable because they are less likely than peer reviewers to be biased against certain contributors and viewpoints?  I begin with a qualifier: What I am about to say is based on research, anecdotes, and experience rather than empirical data that I have compiled on my own. I do not know for sure whether student editors are more or less biased than professional academics, and I hesitate to displace concerns for expertise and experience with anxiety about editorial bias. There may be situations in which students can make meaningful contributions to reviewing and editing scholarship—and to scholarship itself—but to establish them as scholarly peers is, I think, a distortion and probably a disservice to them and their fields.

Student editors of and contributors to law reviews may seem to be the notable exception, but legal scholarship is different from humanities scholarship in ways I address below, and law reviews suffer from biases similar to those endemic to peer-reviewed journals. Nevertheless, law review submission and editing probably involve less systemic bias than peer review does, but not because students edit law reviews. Rather, the submission and editing process itself makes it more difficult for bias to occur. The system, not the students, facilitates editorial neutrality.

Several features of this system check bias. Because editors are students in their second and third years of law school, editorial turnover is rapid. Every year a law review has a new editorial team composed of students with varied interests and priorities. What interested a journal last year may not interest it this year. Therefore, law reviews are not likely to have uniform, long-lasting standards for what and whom to publish—at least not with regard to ideology, political persuasion, or worldview.

Law review editors are chosen based on grades and a write-on competition, not because they are likeminded or pursuing similar interests. Therefore, law reviews are bound to have more ideological and topical diversity than peer-reviewed journals, which are premised upon mutual interest, and many of which betray the academic side of cronyism: friends and friends of friends become editors of peer-reviewed journals regardless of their records of scholarship. The composition of law review editorial boards is, by contrast, based upon merit determined through heated competition.

Once on board, law review student editors continue to compete with one another, seeking higher ranks within editorial hierarchies.[1] Being the editor-in-chief or senior articles editor improves one’s résumé and looks better to potential employers than being, say, the notes editor. Voting or evaluations of academic performance establish the hierarchies. Moreover, each year only a few student articles are published, so editors are competing with one another to secure that special place for their writing.[2] Finally, student editors usually receive grades for their performance on law review. The result of all of this competition is that law review editors are less able than peer reviewers to facilitate ideological uniformity or to become complacent in their duties—and law reviews will exhibit greater ideological diversity and publish more quickly and efficiently than peer-reviewed journals.

Because of the ample funding available to law schools, scores of specialized journals have proliferated to rival the more traditional law reviews. Many specialized law reviews were designed to compensate for alleged bias. There are journals devoted to women’s issues, racial issues, law and literature, law and society, critical legal studies, and so on. There are also journals aimed principally at conservatives: Harvard Journal of Law and Public Policy, Texas Review of Law & Politics, and Georgetown Journal of Law & Public Policy, to name three. Specialized journals give students and scholars a forum for the likeminded. On the other hand, such journals call for specialization, which students are unlikely to possess.[3]

For these reasons, I believe that bias is less prevalent among law reviews than among peer-reviewed journals. Part of the difficulty in determining bias, however, is that data collection depends upon the compliance of law review editors, who receive and weed through thousands of submissions per submission period and have neither the time nor the energy to compile and report data about each submission. Moreover, these editors, perhaps in preparation for likely careers as attorneys, are often required to maintain strict confidentiality regarding authors and submissions, thereby making “outside” studies of law reviews extremely difficult to conduct.

And then there is the problem of writing about bias at all: everyone can find bias in the system. I suspect that institutionalized bias against conservative legal scholars exists, but nonconservatives also complain about bias. Minna J. Kotkin has suggested that law reviews are biased against female submitters.[4] Rachel J. Anderson has suggested that law reviews are biased against “dissent scholarship,” which, she says, includes “civil rights scholarship, critical legal studies, critical race theory, feminist theory, public choice theory, queer theory, various ‘law ands’ scholarship that employs quantitative or humanistic methodologies, and other scholarship that, at one point in time or another, is not aligned with ideologies or methodologies that the reader values or considers legitimate.”[5] Finally, Jordan Leibman and James White discovered bias favoring authors with credentials, publication records, or experience.[6]

Law student bias seems, from my perspective, more likely to be weighted toward credentials and reputation than toward political persuasion.[7] An established professor with an endowed chair is therefore more likely to receive a publication offer from a law review than an unknown, young, or adjunct professor; and the name recognition of an author—regardless of personal politics—is more likely to guarantee that author a publication slot in a law review. One downside is that student editors will accept half-written or ill-formed articles simply because the author is, for want of a better word, renowned. It is common in these situations for students to then ghostwrite vast portions of the article for the author. Another, more obvious downside is that professors from select institutions and with certain reputations will be published over authors who have submitted better scholarship. This is the primary reason why I advocate a hybrid law review/peer review approach to editing.[8]

I’ve mentioned that legal scholarship differs from humanities scholarship. What makes it different is its attention to doctrinal matters, i.e., to the application of law to facts or the clarifying of legal principles and canons. After their first year of law school, students are equipped to study these sorts of matters. They are not unlike lawyers who approach a legal issue for the first time and must learn to analyze the applicable law in light of the given facts. Although the breadth and scope of legal scholarship have changed to reduce the amount of doctrinal scholarship produced and to incorporate interdisciplinary studies, doctrinal scholarship remains the traditional standard and the conventional norm.

Law students have the facility to edit doctrinal scholarship, but not to edit interdisciplinary articles.[9] This point is not necessarily to advance my argument about bias being less inherent in law review editing; rather, it is to circle back to my initial position that inexperienced and inexpert students should not be empowered to make major editorial decisions or to control the editing. As I have suggested, student editors are biased, just as professional peer reviewers are biased—the problem is that students are less prepared and qualified to make sound editorial judgments. If what is needed is an editorial system that diminishes bias, then student editors are not the solution. Law review editing, however, provides a clarifying model for offsetting widespread bias.

It would be difficult if not impossible to implement law review editing among humanities peer-reviewed journals for the disappointing reason that law reviews enjoy ample funding from institutions, alumni, and the legal profession whereas humanities journals struggle to budget and fight for funding. Therefore, I will not venture to say that peer-reviewed journals ought to do something about their bias problems by mimicking law review editing. Such a solution would not be practical. But by pointing out the benefits of law review editing—i.e., the result of less bias due to such factors as competition and turnover in editorial positions—I hope that more creative minds than mine will discover ways to reform peer-reviewed journals to minimize bias.

 


[1]I consider editor selection flawed for some of the reasons Christian C. Day describes in “The Case for Professionally-Edited Law Reviews,” Ohio Northern University Law Review 33 (2007): 570–74.

[2]How this competition works differs from journal to journal. In some cases, the students select which student articles to publish based on an elaborate voting process supposedly tied to blind review and authorial anonymity.  In other cases, faculty decide.

[3]“Many scholars feel that student editors of law review articles, while they were perhaps once competent to evaluate the merit of scholarly articles owing to the much narrower range of topics, have for the last few decades had great difficulty grappling with nondoctrinal scholarship (that is, scholarship dealing with the intersection of law and other disciplines). The authors of law journal articles now increasingly draw from areas such as economics, gender studies, literary theory, sociology, mathematics, philosophy, political theory, and so on, making the enterprise much too difficult for a group of generally young people, who are not only not specialists, but have barely entered the field of law.” Nancy McCormack, “Peer Review and Legal Publishing: What Law Librarians Need to Know about Open, Single-Blind, and Double-Blind Reviewing,” Law Library Journal 101, no. 1 (Winter 2009): 61–62.

[4]Minna J. Kotkin, “Of Authorship and Audacity: An Empirical Study of Gender Disparity and Privilege in the ‘Top Ten’ Law Reviews,” Women’s Rights Law Reporter 35 (Spring 2009).

[5]Rachel J. Anderson, “From Imperial Scholar to Imperial Student: Minimizing Bias in Article Evaluation by Law Reviews,” Hastings Women’s Law Journal 20, no. 2 (2009): 206.

[6]Jordan H. Leibman and James P. White, “How the Student-Edited Law Journals Make Their Publication Decisions,” Journal of Legal Education 39, no. 3 (September 1989): 396, 404.

[7]Many others share this view: “It appears to be generally assumed that, to a significant degree, Articles Editors use an author’s credentials as a proxy for the quality of her scholarship.” Jason P. Nance and Dylan J. Steinberg, “The Law Review Article Selection Process: Results from a National Study,” Albany Law Review 71, no. 2 (2008): 571.

[8]See my Spring 2013 Academic Questions article, “The Law Review Approach: What the Humanities Can Learn.” I am not alone on this score. Day suggests that “this bias can be defeated by blind submissions or having faculty members read the abstracts and articles of blind-submitted articles where the quality is unknown. The names and other identifying information should be obscured, which is common in other disciplines. This is easy to do with electronic submissions. It should be the rule in law reviews, at least at the initial stage of article selection.” “Case for Law Reviews,” 577.

[9]Hence Richard Posner’s suggestion that law reviews “should give serious consideration to having every plausible submission of a nondoctrinal piece refereed anonymously by one or preferably two scholars who specialize in the field to which the submission purports to contribute.” “The Future of the Student-Edited Law Review,” Stanford Law Review 47 (Summer 1995): 1136.

Thoughts on ‘The Road to Serfdom’: Chapter 7, “Economic Control and Totalitarianism”

In Arts & Letters, Austrian Economics, Book Reviews, Books, Conservatism, Economics, Epistemology, Essays, History, Humane Economy, Humanities, Justice, Law, Libertarianism, Literature, Philosophy, Western Civilization, Western Philosophy on October 2, 2013 at 8:45 am

Slade Mendenhall

Slade Mendenhall is an M.Sc. candidate in Comparative Politics at the London School of Economics, with specializations in conflict and Middle Eastern affairs. He holds degrees in Economics and Mass Media Arts from the University of Georgia and writes for The Objective Standard and themendenhall.com, where he is also editor.

The following is part of a series of chapter-by-chapter analyses of Friedrich Hayek’s The Road to Serfdom, conducted as part of The Mendenhall’s expanding Capitalist Reader’s Guide project. Previous entries can be found here: Introduction, Chapter 1, 2, 3, 4, 5, and 6.

In “Economic Control and Totalitarianism”, the subject of Hayek’s seventh chapter, we find him at his best, writing with a clarity and reason that we have not seen since chapter two, “The Great Utopia.” In chapter seven, Hayek expounds upon numerous themes within the titular subject: the inextricability of dictatorial control and economic planning, the fallacy of believing that economic controls can be separated from broader political controls, the inevitability in a planned economy of controls extending to individuals’ choice of profession, and the interrelation of economic and political freedom. What aspects of the chapter we might find to criticize arise either from a desire to see him take his line of thinking a step further than he does or from mistakes already established in previous chapters. Despite a few minor missteps, however, Hayek’s chapter is, overall, an exceedingly positive contribution.

He begins by stating what is, to many self-deceiving advocates of socialism, a jarring observation: that planned economies, following their natural course, ultimately always require dictatorial rule. “Most planners who have seriously considered the practical aspects of their task,” Hayek writes, “have little doubt that a directed economy must be run on more or less dictatorial lines” (66). Without fully restating the argument here, Hayek implicitly rests upon the description of this tendency that he spelled out in chapter 5, “Planning and Democracy”: power in a planned system gradually consolidates into a central committee or single dictator as a matter of organizational efficiency, with a decisive central leadership winning out over the gridlock and inefficiencies of a democratic body. The point is as valid and well made here as it was then.

Where Hayek expounds upon this is in refuting one of the false promises often made by planners as they reach for the reins of a country’s economy: “the consolation… that this authoritarian direction will apply ‘only’ to economic matters” (66). Contrary to the suggestion that controls will be limited to economic affairs, Hayek asserts that economic controls in the absence of broader political controls are not simply unlikely, but impossible. Rather than simply detailing in a typical way the interrelationship of economic and other activities, Hayek acknowledges the inseparability of the two, writing, “It is largely a consequence of the erroneous belief that there are purely economic ends separate from the other ends of life” (66). He later elaborates:

“The authority directing all economic activity would control not merely the part of our lives which is concerned with inferior things; it would control the allocation of the limited means for all our ends. And whoever controls all economic activity controls the means for all our ends, and must therefore decide which are to be satisfied and which not. This is really the crux of the matter. Economic control is not merely control of a sector of human life which can be separated from the rest; it is the control of the means for all our ends” (68).

Hayek’s point is, in the context of modern economic education, a largely underappreciated and mishandled one. Economics instructors have, with time, lost the important skill of contextualizing economic interests within the broader scope of other human pursuits, either treating them as abstract ideas toyed with in a vacuum, without real-world ramifications, or preaching the ‘economics is everything’ doctrine to the exclusion of other analytical tools and frameworks.

Hayek, whether by virtue of writing at a time less bound by such false dichotomization of the field or simply due to his exceptional qualities as an economic thinker, successfully avoids both traps. “Strictly speaking,” he writes,

“there is no ‘economic motive’ but only economic factors conditioning our striving for other ends. What in ordinary language is misleadingly called the ‘economic motive’ means merely the desire for general opportunity, the desire for power to achieve unspecified ends. If we strive for money it is because it offers us the widest choice in enjoying the fruits of our efforts” (67).

Hayek rightly acknowledges money as a profoundly empowering economic good, calling it “one of the greatest instruments of freedom ever invented by man” that “opens an astounding range of choice to the poor man, a range greater than that which not many generations ago was open to the wealthy” (67).

Chapter seven goes on to briefly characterize the pervasiveness of central planning and its propensity to spread to all areas of a society. Hayek recognizes that the much-evaded question of socialism-versus-capitalism is not simply one of which decisions individuals are to make for their lives, but whether the decision is to be theirs at all:

“The question raised by economic planning is, therefore, not merely whether we shall be able to satisfy what we regard as our more or less important needs in the way we prefer. It is whether it shall be we who decide what is more, and what is less, important for us, or whether this is to be decided by the planner” (68).

Those on both sides of the aisle in the United States today, who fail in so many matters to appreciate the distinction between individuals choosing the right thing for their lives and a government official imposing their choice (be it right or wrong) upon them, would do well to heed Hayek’s warning. Modern American political thinking, caught between an increasingly authoritarian left (taken directly from Marx and Rousseau, or updated via modern incarnations like Krugman, Sunstein, and Stiglitz) and a right that has yet to extend its limited government spirit to all areas of economics—much less censorship and social issues—has a great deal to learn from an Austrian economist’s words written some seventy years ago.

One element of central planning that utopian-minded young socialist idealists evade is that labor, being an input, must, in a controlled economy, be as controlled as any other good—if not more so. This does not mean simply the control of wages or the maintenance of unions. Ultimately, it means government control over the quantity of individuals in a given profession, conducted in the interest of keeping wages in a given field high and ensuring that there is an adequate supply of expertise to meet all of the economy’s needs. This means, at some point, dictating who can and cannot enter a given field of work.

Hayek writes,

“Most planners, it is true, promise that in the new planned world free choice of occupation will be scrupulously preserved or even increased. But there they promise more than they can possibly fulfill. If they want to plan they must control the entry into the different trades and occupations, or the terms of remuneration, or both” (71).

How many young socialists on college campuses across the country would submit without protest to being torn from their chosen course of study and compelled to study for degrees in which they had no interest, to spend their lives in careers they did not love? That is the fate that they ask for, whether they recognize it as such or not. Would they accept it willingly? Would they “become a mere means, to be used by the authority in the service of such abstractions as the ‘social welfare’ or the ‘good of the community’” (72), bowing their heads subserviently to spend a life on a path that was chosen for them, for the good of society? Perhaps some. And perhaps others would recognize the nature of what they profess to believe in and renounce it. Either way, it is a reality that should be presented to them in those terms by those who see socialism for what it is.

Towards the end of the chapter, Hayek makes several key observations that would prove all the more true in the decades after his writing. He notes the declining frequency with which advocates of socialism appealed to its functional superiority. Gradually witnessing their system being discredited, but doubling down on their dogma, the socialists of the mid-20th century came to look less and less like those of the early 20th century, who believed in the system as a technically superior model for society. Instead, their arguments turned egalitarian in nature, “advocat[ing] planning no longer because of its superior productivity but because it will enable us to secure a more just and equitable distribution of wealth” (74). Little did Hayek know how far that trend would go with the rise of the New Left and its legacies, stretching up to the present and the current American administration.

Finally, in another point that has proven all the more true since the time of his writing, Hayek recognizes that the extent of planning proposed by socialism, empowered by modern modes of control, is that much greater than the control and subjugation that occurred in the days of monarchy and feudalism. In reading it, one is brought to wonder how much greater that mechanism of control is today, with NSA surveillance, a growing regulatory state, and ever more executive agencies maintaining armed units to impose their rules, than at Hayek’s writing in 1943.

Hayek’s seventh chapter is a valuable and, for the same reasons, saddening one for the way that it makes us reflect upon the applicability of his words and ideas to our current political environment. Though our current condition is far from totalitarian in nature, the same principles apply, to a lesser extent, in all areas where government intrudes to control markets, alter incentives, or provide special advantages to some at the expense of others.

Human beings are rational animals. We respond to the incentives around us. In the presence of a government that seems increasingly, explicitly willing to toy with those incentives to alter our behavior to suit models and ideals for our lives that are not our own, how much do we lose that we never knew we had? In what ways are our options limited? Need it be by a government edict that tells a young man who would study to be a doctor that doctors are no longer needed, and he should apply to be an engineer instead? No. It may be as subtle as inflating the price of his education through government loan programs, regulating the field he seeks to enter, and subjecting him to entitlement programs that tell him that his life’s work is not his own; that he works and exists in the service of society as a whole. And at that point, the difference between our condition and the ill fate that Hayek describes becomes one not of kind, but of degree.