May 2011

Description and Explanation of Law
By Anonymous on May 02, 2011

In 1927 the U.S. Supreme Court decided Buck v. Bell, a case which ranks among the most infamous in the Court’s history. In Buck the Court ruled that “feebleminded” persons could be sterilized against their will for the benefit of society. Writing for a nearly unanimous Court, Justice Holmes declared that “It is better for all the world if, instead of waiting to execute degenerate offspring for crime or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind . . . Three generations of imbeciles are enough.”

Descriptive social scientists are (allegedly) in the business of saying why certain social phenomena occur as they do. Why did the American Revolution succeed? Why do birthrates tend to fall in industrialized countries? Why do economic recessions happen? Why did the Court decide Buck as it did? The answers to these questions can take various forms, and in this post I’ll briefly defend the idea that articulating (what I will call) the “norms of reasonableness” of a society or institution can be particularly useful in answering the question of why judges decide cases as they do.

Judicial activity, as it is traditionally understood, is a reason-giving practice. Judges have to justify their actions with reasons which their audience finds satisfactory. However, what counts as a “satisfactory” reason can differ significantly between times, places, and institutions. Judges in a particular social context are constrained (and enabled) by the fact that other judges, the legal community, politicians, elites, and the public find a certain set of reasons persuasive and salient. Call the norms which people follow (as a matter of fact) in reasoning “norms of reasonableness,” a label I cannot defend in any great detail here. Looking for such norms does not commit us to relativism; it is perfectly clear that what people do find persuasive is not necessarily (and much of the time, is not) what they should find persuasive. As social facts, “norms of reasonableness” can be just about anything. History supplies many examples of social groups which accepted reasons that we have very good reasons to reject – antebellum views about slavery and black inferiority in the U.S., for example – and the list of such bad reasons is long. However, articulating the shared ground of reasoning – the norms of reasonableness which people accept as reasonable, even if they are not reasonable in fact – is an essential methodological approach for describing and explaining the activity of judges.

This is because, as I said above, judges are in the business of giving reasons. More than members of any other government branch, judges have to give reasons for what they do. Because reason-giving is so essential to the office of a judge, judicial decisions are shaped (framed, influenced, constrained, empowered) by what passes as a good reason according to judicial practice. It is my contention that looking closely at the reasons judges give, and how others respond to or criticize these reasons, grants us insight into the shared norms of reasonableness of both the legal profession and the society more broadly. Knowing these norms can aid us in understanding why judges decide as they do.

Which brings us back to Buck. Buck sounds appalling to contemporary ears and has been justly criticized for the past several decades. Perhaps for these reasons it comes as a surprise to learn just how “reasonable” the decision was in its own time. The New York Times celebrated the “Right to Protect Society” in its coverage of the case; other newspapers were generally supportive or did not criticize the decision. Virtually all of the law review articles which treated the case were neutral toward or in favor of the decision; a writer for the Lincoln Law Review dryly stated that the Court “approve[d] the procedure about which there was no very stout contention.” An article published in the Columbia Law Review argued that “the classification set up by the statute [under consideration] is not unreasonable, [and] evinces a liberal attitude.” The Court decided the case 8-1, and Justice Pierce Butler, the lone dissenter, kept his reasons to himself. Though not everyone thought eugenics was a great idea, much of the legal community and press thought that the decision was reasonable, or at least not grossly unreasonable.

Making sense of this support requires that we understand scientific views about “feebleminded” persons at the time, e.g., that they were inclined to sexual promiscuity, laziness, crime, alcoholism, and various other social ills. It is also important to recognize that feeblemindedness was considered by many to be heritable according to the laws of genetics (and therefore, the way to stop it from “spreading” was to stop certain people from reproducing). These ideas, combined with views about progressivism and the relative strength of the “police power” to legislate for the common good, made a case like Buck not only possible but, to many people, appropriate – it conformed to the standards of law, science, and reason which many people thought relevant to the question.

Caution is necessary in the search for norms of reasonableness, to be sure. We must not generalize too broadly, and must always remember that there are many different ways of thinking in any particular social context. Particular judges might be significant outliers compared to the rest of the population. However, reasons given by judges do constitute a certain kind of evidence – they are evidence of what a certain judge, occupying a legitimate position of public authority and trust, thought his audience would accept as good reasoning (further, unless the decision is overturned, these reasons become the reasons of the state). If we combine the reasoning of a particular judge with reasons given by other judges in similar cases, comments from other legal professionals, discussion of the reasons given in the popular press and by politicians (if available), and so forth, we can gain a fairly accurate picture of what “people around here” accepted as good reasons for the question at hand. And knowing this can help us better understand why judges decide as they do.

Why I Am Not Endorsing the Purple Health Plan
By Anonymous on May 03, 2011

A couple weeks ago I came across The Purple Health Plan, an attempt to bridge the gap between economists on the right and the left with regard to Obamacare and medical care in the United States. It is signed by many prominent economists including Nobel Laureates George Akerlof, Edmund Phelps, Thomas Schelling, William Sharpe, and Vernon Smith. Two of the economists, Edmund Phelps and Vernon Smith, tend to lean more to the right and are more market oriented than the others. Akerlof, of course, is famous for developing the “lemons” problem – market failure due to asymmetric information – which clearly does occur in the case of medical care. Most patients know very little and have to assume that their doctors actually know what they claim to know and are telling them everything they do know.

Before continuing, let me say that I wouldn't be so critical of Obamacare if it weren't so biased in favor of government intervention. Yes, the market has failed in many areas, but the market has also succeeded in developing many medical innovations and drugs that we export to the rest of the world. Even though government money pays for over fifty percent of medical expenses – a share that has been increasing over time – the private sector is blamed for all the failure in the healthcare market. I argue that a private healthcare market cannot be to blame, since the United States is far from having such a market. If the government were willing to make some simple reforms – such as removing the tax break for employers to provide health insurance, so that healthcare would be separated from employment, and allowing insurance companies to compete across state lines – many problems associated with our healthcare market would be avoided. If such reforms were to fail, then maybe I would be more open to government interventions. The problem with government interventions is that once they're in place they are virtually impossible to get rid of. In fact, if they fail, they will usually get bigger.

I have read the basic principles of the Purple Health Plan and find most of them reasonable. It suggests that health care should be privately provided and that consumers should be able to choose their doctors and the hospitals they visit. The government's healthcare costs must be strictly capped. There should be incentives to prevent overuse of healthcare. And malpractice law should be reformed to prevent frivolous and costly lawsuits.

There are things that don't convince me, but they are not deal breakers. There is one statement that prevents me from signing: "Health plans should be affordable regardless of one's pre-existing health conditions or risk." It is very unfortunate that some people develop expensive health conditions and do not have the means to pay for them, whether from lack of health insurance or because their insurance won't cover the treatment due to its costliness. And one could argue that the government has a moral obligation to cover these individuals. However, what if I changed the sentence as follows: "Life insurance plans should be affordable regardless of one's pre-existing health conditions or risks." Is it really that much different? This means that once the doctor tells me that I am going to die, I would be able to call a life insurance agent and he would have to charge me the same $30/month premium for a $750,000 policy that he would have charged me the day before, when the data suggested I was perfectly healthy. If the law were changed to allow consumers to force the hand of insurance agents in this manner, life insurance would cease to exist as we know it, and since life insurance is a "right," the government would step in to cover where the market "failed." Of course, this would lead to much less generous life insurance payouts and much higher taxes. The vast majority of individuals would be worse off in order to provide a benefit to a small few. The same dynamic plays out in healthcare markets. Doctors will provide worse service, the government will tell health insurance companies what they can and cannot cover, and the overall quality of health care would decrease.
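To make the arithmetic behind that thought experiment concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the 20-year term, the 1% healthy claim probability, the 5% terminal share) except the $750,000 payout and the roughly $30/month healthy-rate premium from the example above:

```python
# A minimal sketch of the adverse-selection arithmetic. All inputs are
# invented for illustration except the $750,000 payout and the ~$30/month
# healthy-rate premium taken from the example in the text.

POLICY_PAYOUT = 750_000   # face value of the policy
MONTHS = 12 * 20          # assume a 20-year term for the arithmetic

def break_even_premium(expected_payout: float) -> float:
    """Monthly premium at which collected premiums just cover expected payouts."""
    return expected_payout / MONTHS

# Healthy pool: assume each member has a 1% chance of a claim over the term.
healthy_expected = 0.01 * POLICY_PAYOUT
print(f"Healthy pool: ${break_even_premium(healthy_expected):,.2f}/month")

# Guaranteed issue: suppose 5% of the pool now buys only after a terminal
# diagnosis, so their claim probability is effectively 1.
share_terminal = 0.05
mixed_expected = ((1 - share_terminal) * healthy_expected
                  + share_terminal * POLICY_PAYOUT)
print(f"Mixed pool:   ${break_even_premium(mixed_expected):,.2f}/month")
```

Under these made-up numbers, the break-even premium jumps from about $31 to about $186 a month once a small group of near-certain claims joins the pool, which is the sense in which guaranteed issue would make life insurance "cease to exist as we know it."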

So here's the question. Do unhealthy individuals truly have a right to affordable health insurance even if it implies that everyone else will pay higher premiums and receive lower quality health insurance? Or does the right of doctors, individuals, and insurance companies to freely associate supersede such a right? Is there a middle ground between these two extremes? Your thoughts are welcome.

Atlas Shrugged on Film
By Dr. Terry Hunter Baker Jr. on May 05, 2011

Christians and many conservatives have a deep ambivalence about Ayn Rand that probably draws as deeply from the facts of her biography as from her famous novels. When the refugee from the old Soviet Union met the Catholic William F. Buckley, she said, "You are too intelligent to believe in God." Her atheism was militant. Rand's holy symbol was the dollar sign. Ultimately, Buckley gave Whittaker Chambers the job of writing the National Review essay on Rand's famous novel that effectively read her and the Objectivists out of the conservative movement. The review characterized Rand’s message as, "To a gas chamber, go!" Chambers thought Rand's philosophy led to the extinction of the less fit.

In truth, the great Chambers (his Witness is one of the five finest books I've ever read) probably treated Rand's work unfairly.


Expectations
By Anonymous on May 07, 2011


Annie stared at me nervously through her large wire-rimmed frames as I told her she could begin. Quickly, she glanced down at her cupped hands temporarily tattooed with scrawled notes in blue ink. I gave the fifth-grader a supportive smile and nod and she began her National History Day presentation in the gym of a local elementary school.

Started by Dr. David Van Tassel in 1990, National History Day was created with the hope of transforming students into historians—helping them understand that history is not something you read or memorize but something you do.  According to its website, close to half a million students across the United States participate each year in this program, which teaches “critical thinking, writing and research skills,” prepares “students for college, work and citizenship,” and inspires students “to do more than they ever thought they could.” (www.nhd.org)

As I tried to get comfortable in a chair that was clearly built for someone half my size, I wasn’t expecting much. After all, Annie’s teacher reminded me before I began my journey through the maze of poster board and construction paper that “they’re just fifth graders” and to “go easy on ‘em.”

Annie announced (while anxiously grinding one of her Chuck Taylors on top of the other) that she would be discussing the Civil Rights Movement. As she began, I glanced at her display, peppered with quotes from various activists, a map of important sites, and an endearing depiction of a Montgomery bus with cutout faces of Malcolm X, Rosa Parks, Martin Luther King, Jr., and Ella Baker peeking through the bus windows.

Students present their research, discuss pertinent primary sources, and answer questions posed by the judge.  I was impressed. While Annie’s confidence was lacking at first—she referred to her tattooed hands frequently and avoided eye contact at all costs—it was clear this young girl knew her stuff.  She mentioned how strong the fire hoses were in Birmingham (ripping the bark off trees), discussed the Woolworth’s sit-ins as well as the lesser-known wade-ins of Florida, summarized the “I Have a Dream” speech, and noted the successes of the Civil Rights Act of 1964 and the Voting Rights Act of 1965.

[“This is a fifth grader?”]

My approval must have become evident. Annie stopped looking at her hands.

The questioning began:

“I see that you only used King’s ‘I Have a Dream’ speech in your presentation. If you had more time, what other primary sources would you incorporate into this project?”

 “Well, I remember reading when King was in Birmingham’s jail that he wrote a letter to the local clergymen about how they could no longer wait for their constitutional and God-given rights. Probably that one.”

[“This is a fifth grader?”] 

Surprised (stunned) and encouraged, I continued with questions. This lasted several minutes, until the elementary-sized chair (and the agitated glares of several ten- and eleven-year-olds waiting) reminded me I had been sitting at this particular station too long. I decided to sum up with a “recommended” (and paraphrased) NHD question:

“What lesson did you learn from your project that you could apply to today?”

Raising her head (she was clearly ‘in the zone’), she responded, “I think this project tells us that regardless of race or anything like that, we should judge people not by the color of their skin but by the content of their character.”

With my head tilted and eyebrows furrowed (I may have also dropped my pen—but I don’t remember), I responded excitedly, “Did you just quote King to me?”

She just smiled. But it was a very big smile.

[“This is a fifth grader?”]

I shook her hand and congratulated her on her presentation.

I returned to Annie's teacher to ask where I should go next.

“Did she do OK?” she asked tentatively.
“She was marvelous.” 

But I wasn’t able to get across just how impressed I was by Annie’s performance. I was inspired. Did she realize Annie was capable of this? She needs to be reading more difficult text! More primary sources! She needs to be challenged!

“In fact,” I added, “she’s better than some of my students.”

I stopped. Was Annie better? Was she that exceptional? And if she was…was it my fault?

How often do I read freshman survey papers, particularly from non-majors, and tell myself, “they’re only college freshmen…go easy on ‘em”? Or spend an entire class breaking down one of the Federalist Papers because students enter class explaining that they didn’t finish the reading because it was too difficult? At what point (in fifth grade or college) do you raise the bar and expect more?  Regardless of age, the amount students learn decreases when teachers do not encourage and expect excellence. While Annie’s confidence was strengthened that day, mine was shaken. Am I demanding enough?  Do I position students to “do more than they ever thought they could”? Are my brightest students being challenged? How do we provide a stimulating and engaging learning environment for all of our students?

While I plan to overhaul my survey course (yet again) for the fall, I’ll be sure to include the “Letter from Birmingham Jail” and the “I Have a Dream” speech.  And, if I catch flak, I’ll tell them about Annie.

Evaluating Form and Content in Student Papers
By Joseph DiLuzio on May 09, 2011

With the end of the semester comes the grading of student papers, an onerous process that takes much of our time and energy.  I’m not talking about exams here: I mean term papers, many of them, several pages in length.  At some point in the process, I inevitably find myself asking why I bother to write comments and, in particular, why I bother to correct students’ grammar.  This post offers a justification of sorts.  In it, I wish to defend my belief in giving equal weight to form and content in the evaluation of student papers.  This entails scrutinizing a student’s ideas as well as his or her command of the English language.    

In the contemporary academy, there is little tension between philosophy and rhetoric.  Philosophy reigns.  As academics, we concern ourselves primarily with thought – the content of an argument – rather than the quality of its presentation.  As evidence, consider that most academic writing and most conference papers fail to attract a wider audience, and not just because our work requires a certain level of expertise to be understood.  How often do we attend a public lecture and find ourselves riveted by the speaker?  How often do we find ourselves nodding off?  This, I would argue, is a product of modern “philosophy,” by which I mean a focus on content to the exclusion of eloquence.  This problem is not new: Cicero traced its origin to fifth-century Athens, to the teachings of Socrates as presented in the works of his student, Plato.  In his dialogue On the Ideal Orator (de Oratore), Cicero – through the character of Crassus – accuses the Athenian philosopher of separating “wisdom” into its constituent parts – knowledge and expression.  In Plato’s Gorgias and Phaedrus, rhetoric (the art of persuasion) comes in for sustained criticism, while Socrates argues that rhetoric cannot function normatively in the absence of philosophy. The Phaedrus, in particular, suggests that orations and the written word are ultimately less effective at persuasion than (philosophical) dialectic.

Four centuries later, Cicero composed his response in two moves.  First, he splits the difference between philosophy and rhetoric – thought and expression – by saying that knowledge of both is essential for the ideal orator to practice “eloquence.”  Second, he posits that philosophy entails speaking about virtue and morality as well as the laws of nature and the state.  Crassus concludes, “the real power of eloquence is so enormous that its scope includes the origin, essence, and transformations of everything.”  Philosophy requires language and persuasion to be effective, just as rhetoric requires the true knowledge of philosophy.  In theory, both pursuits are of equal value, but as Crassus notes, philosophers tend to shun politics in order to pursue their studies in peace.  Having no use for it, they scorn the practice of speaking, thereby exchanging the true wisdom of old for knowledge alone.

A similar phenomenon plagues the modern academy.  Ironically, one of the justifications routinely offered for a humanities education is that, unlike the sciences, it promises to teach students to think critically and write well.  In other words, we claim to teach wisdom, but what we actually value is thought.  How can we claim to teach students to write well and yet fail to address poor punctuation, incorrect spelling, and improper usage?  A major contribution of twentieth-century thought was the realization that the meaning of a word is relative rather than essential.  Successful communication requires writer and reader to share a common system of signs and a context that includes the same physical, psychological, and social environment.  In other words, the sharing of ideas requires a common grammar, and apart from this grammar, ideas have no significance.  When evaluating student papers, our written comments should be intended to enhance the significance of students’ ideas by addressing (inasmuch as they can be distinguished) issues of both form and content.

In addition, the language an author uses says something about his “character.”  By “character” (ethos in Greek), I mean the image he conveys of himself to his audience.  The Greek theorists maintained that ethos was essential because of the correlation between the perceived credibility of the speaker and the receptiveness of the audience.  So it is with the written word: poor punctuation may be viewed as evidence of carelessness and imprecision; muddled syntax may be construed as muddled thought, and improper usage, ignorance.  Moreover, how we evaluate student papers says something about us as teachers.  It signals to the student what our standards are, whether attention to detail is important to us, and whether we ourselves take the assignment or our class seriously.  As such, how we evaluate a paper is as important as the final grade.

A Utilitarian's Indirect Plea for the Liberal Arts
By John von Heyking on May 12, 2011

Gwyn Morgan is the retired founding CEO of Encana Corporation, one of North America’s largest natural gas suppliers.  He has also established himself in the media as a gadfly critic of the inefficiencies of universities, especially their business of teaching the liberal arts.  His outstanding accomplishments in business and in public life make him well qualified to comment on the characteristics and skills necessary for people to succeed in the economy and as citizens.  His basic critique is that universities are producing too many liberal arts majors who end up under-employed, and too few graduates in engineering, information technology, and health care – fields in which employers have great difficulty finding employees.  Universities, on his view, need to shift resources toward these vocational programs and away from the liberal arts, which don’t seem very useful.


Martin Diamond, American Political Thought, and Liberal Education
By Jason Jividen on May 14, 2011

I have recently been re-reading As Far as Republican Principles Will Admit: Essays by Martin Diamond.  Thinking about the mission of the ALA blog, I found Diamond’s “On the Study of Politics in a Liberal Education” particularly illuminating.  Here Diamond discussed a persistent problem facing those who wish to teach American political thought in light of the idea of a truly liberal education.  That problem lies in the perceived tension between studying the particular ideas and concerns of one’s own regime, on the one hand, and the unlimited and universal aspirations of liberal education, on the other.


For those of us who spend any time with students of the history of political philosophy, it’s not uncommon to hear that the study of American political ideas is the study of second-rate ideas.  That is to say, the study of the political thought of American statesmen and other figures in American political history is simply not the stuff of political philosophy.  According to some, we should not spend too much time worrying about the writings and speeches of mere politicians or statesmen, intimately connected to a single regime in a single time and place.  Rather, we should focus our attention on the writings of the political philosophers proper, those who seek to transcend the particular to understand the universal, timeless truths about political life and the human condition as such.  To take American political thought too seriously is to labor in the minor leagues.


Diamond faced this kind of criticism head on and argued that one’s ascent to a truly liberal education must begin from a healthy examination of one’s own time, one’s own place, and one’s own regime.  Diamond reminds us that we all begin our liberal education with a natural inclination to love what is nearest to us.  According to Diamond, “the natural starting point for learning to love what is just and noble is the love of one’s own” (278).  To ascend from unexamined to examined opinion requires that we begin by examining our own opinions about the just and the noble, and to move toward an appreciation and love of what is just and noble everywhere and always.  As Diamond suggested, “The ascent from opinion to philosophical knowledge—the final aim of liberal education—should begin with proper reflection on what in one’s own is worthy of love.  This means an inquiry into what is just and hence truly lovable in one’s own country, an inquiry which points the student to the task of perfecting his own regime and, ultimately, to the question of what is simply just” (278). And, as any student of his writings knows, Diamond came to believe that there might really be something excellent, something worth loving, about the American regime.


Insofar as a political education is part of a liberal education, any attempt to acquire knowledge of the whole of politics must begin with an attempt to acquire knowledge about one’s own politics.  In American political thought, particularly as expressed in the rhetoric of American statesmen, we encounter fundamental questions about the nature and purpose of American democratic government.  We find fundamental questions that cut to the heart of our core assumptions about what American democracy is and what it ought to be.  We come closer to knowledge in weighing the alternative answers to such questions.  One need only consider, for example, questions raised by the Declaration of Independence, The Federalist, the Lincoln-Douglas debates, the writings of Woodrow Wilson, or the speeches of FDR as evidence of this.  These questions are instructive, both on their own merits, and in the sense that they often point us toward larger questions about human nature, the potentialities and limitations of human wisdom, and the nature and purpose of government as such. 


Having taught courses in political philosophy and American political thought the last several years, I’ve tried to consider serious and timeless political questions with my students by beginning with their received opinions about their own political life.  Recalling Aristotle’s Ethics, Diamond observed that “politics cannot properly be taught unless the student brings to the study a decent stock of received opinions and habits.”  The liberal study of politics “must start from convictions in order to be elevated to philosophic questions” (278).  Diamond described a pedagogical approach that begins by questioning what students hold to be good about their regime.  But, as indicated by his seminal work recovering the serious study of the American founding, Diamond also recognized that one might have to begin from the other extreme, from the various criticisms of the American regime as anti-democratic, reactionary, inefficient, or perhaps even “low but solid” (if not simply low). 


In my experience, it seems more likely today that one might have to begin from the premise that the American regime (or at least the Founders’ Constitution) is old and outdated, or just plain rotten.  But this we can work with. The task is more daunting when students think they ought to appear to lack what Diamond found so crucial to liberal education:  a “decent stock of received opinions and habits.” Some days are spent just trying to convince students that it is permissible in my classroom to have an opinion.  But it’s surely worth the effort.

Alienating Our Rights
By Matthew Parks on May 16, 2011

Abraham Lincoln told an Indiana regiment in 1865 that there was one group for which he was willing to tolerate slavery even after the Civil War was over: slave owners who argued it was a “positive good” for the slaves. Though Lincoln believed all slavery violated the most fundamental natural rights, he was willing to let this group “try it on for themselves.”  Beneath Lincoln’s obvious irony was a serious point. Those who had no objection to making one man work for the gain of another couldn’t reasonably complain if they ended up being the ones working rather than the ones gaining. This was the essential contradiction in the South’s passionate complaints about the violation of its property rights.  In a certain sense, slaveholders had given up their natural right to property by denying the connection between effort and reward in the labor of the slave.  In another sense, of course, this was not really possible. A true natural right inheres in a man as a man, regardless of whether he knows it or agrees with the principles that justify it.

Most contemporary Americans are, without realizing it, in a position similar to that of the Civil War-era slavery advocates. While often speaking passionately about rights, at least when they perceive their own to be involved, they have abandoned all the philosophical premises that support them. Let me illustrate the point from my experience teaching political science at several colleges and universities. On the first day of an introductory course in law one semester, I surveyed the class on a number of questions concerning the foundations of law.  I found that while about 90% of the students believed that “human beings have certain rights that every society should recognize,” only about 20% believed that “there are fixed principles of right and wrong that are binding on all people at all times” or that “certain ideas are true for all people in all times.”  Obviously, unless “there are fixed principles of right and wrong,” the “should” in the statement about rights is meaningless, and unless “certain ideas are true for all people,” it could not be the case that “every society” ought to protect “certain rights.” You can’t have universal rights without universal truths or moral principles. And if the possession of natural rights does not stand or fall with the degree of intellectual inconsistency that pervades a society, the enjoyment of those rights certainly may.

Of course, this could all just be a matter of confusion. Today’s students grow up in an environment where universal tolerance is the greatest virtue and “who are you to judge?” the accepted answer to any moral claims. And, in fact, a few years later when I administered a similar survey both at the beginning and the end of the semester, it turned out that the class as a whole had become both more consistent in their thinking and more willing to embrace universal truth (primary credit belongs to Hadley Arkes and his Natural Rights and the Right to Choose, the key text for the class).

But confusion cannot explain the adoption of similar views as constitutional orthodoxy by the Supreme Court, as embodied in its notorious Planned Parenthood v. Casey (1992) decision: “At the heart of liberty is the right to define one's own concept of existence, of meaning, of the universe, and of the mystery of human life. Beliefs about these matters could not define the attributes of personhood were they formed under compulsion of the State.” Here in two sentences (approved, in the same survey referenced above, by almost 90% of my class) the court denies the three essential premises that are at the heart of the Declaration of Independence’s defense of natural rights: 1. that there are identifiable, fixed qualities of human nature possessed by all mankind; 2. that these qualities exist by the purposeful decree of our common Creator; 3. that among these qualities is a Creator-endowed dignity expressed (at least in part) in the possession of the particular rights enumerated. Instead of God-established rights for God-created men, we have self-asserted rights for self-created men. And this, according to the Supreme Court, is what it means, under the law of the United States, for Americans to have a right to liberty.

On April 29, the DC Circuit Court of Appeals overturned an earlier federal court injunction preventing federal funding for embryonic stem cell research (a stay on that ruling had allowed funding to continue during the appeals process). As a result, there is nothing in the way of the full implementation of President Obama’s 2009 executive order and the resulting NIH guidelines placing the government squarely on the side of such research.

This policy is striking for its embrace of Casey-style liberty at the expense of true natural rights. The key step to securing NIH approval for research on a new stem cell line (derived from “unneeded” embryos produced by in vitro fertilization—would-be “test tube babies”) is the “informed consent” of the donors. About one-third of the document outlining the new guidelines is devoted to making informed consent a practical reality. Donors are protected from any influence judged to be improper—there can be no promise of payment or benefits for donating and no special appeals by would-be researchers. All relevant information about the research process and its consequences must be disclosed to the potential donors and they must be given the opportunity to rescind their consent until the research is actually performed. Nothing, in other words, can be permitted to get in the way of their perfectly free choice.

This new set of guidelines is eminently reasonable, even responsible, from one standpoint: Certainly, the idea of a couple creating human embryos to sell to the highest-bidding scientist is chillingly repulsive, as are high-pressure, utopian presentations by researchers to couples attempting to decide what to do with “their” embryos. These restrictions, however, do nothing about the real problem: Once the perfect conditions for “informed consent” are achieved, any couple, now regarded by law as the sovereign “creators” of these human beings in the earliest stages of development, is free to turn its progeny into the involuntary subject of a fatal experiment. There is no consent, informed or otherwise, given by the microscopic party in this case—and no limit to the power imposed upon him or her. The real problem, then, is not in the procedure by which parents may dispose of their offspring, but in the fact that parents may dispose of their offspring—and ultimately in the fact that any human being has the arbitrary power of life and death over another.

As a result of this new policy and the court’s ruling, the federal government stands firmly on one side of an ethical question that is at the heart of republican politics: Are human beings—whether one, a few, or many—able to do as they please with those who depend on them, or are they bound by moral norms that exist beyond anyone’s “consent”? Our republic was built upon the latter premise—and its survival depends upon our return to it.

Note: The concluding paragraphs of this essay are adapted from Keeping Our Republic: Principles for a Political Reformation by Matthew Parks and David Corbin.

Zinn and the Art of Misguided History
By Hyrum Lewis on May 18, 2011

Howard Zinn's A People's History of the United States is one of the bestselling and most influential U.S. history books ever written, and one can understand why: Zinn writes with clarity, passion, and a thorough knowledge of the subject.  But setting aside perennial historiographical questions of objectivity, fairness, and the explicit use of the past to advance a political agenda in the present (as Zinn does), there are, as I see it, six fundamental problems with Zinn's book that make it unworthy of its influence and status.

First, Zinn’s work is plagued by a fundamental self-contradiction.  He consistently argues that America’s government is evil—controlled by scheming politicians and greedy plutocrats who take America into wars of plunder and imperial conquest.  Yet at the same time, he also consistently argues that we need to cede more power to the American government in order to achieve economic justice.  Why he wants to radically strengthen a body that is so fundamentally corrupt is a question he never answers.

Second, Zinn falls into the error of making utopian comparisons.  Although modern medicine has not yet cured cancer or AIDS, few of us whine and run to witch doctors to treat our ailments.  We understand that, despite its imperfections, scientific medicine is the best option in an imperfect world. Yet Zinn constantly denounces liberal capitalism for its imperfections without realizing that, as Churchill said of democracy, it is the least bad among many bad options. We live in an imperfect world of imperfect people and imperfect choices (indeed, limited resources and scarcity demand this); thus Zinn holds capitalism and the United States to an impossible standard.

Third, Zinn constantly emphasizes the distribution of wealth, but never mentions its precondition—the production of wealth.  Zinn has much to say about needing to achieve a more equitable distribution of society’s economic resources, but he takes those resources for granted, not realizing that the forcible distribution of wealth can have adverse effects on production, thus reducing the amount of goods available to distribute.

Fourth, Zinn fails to account for the law of unintended consequences.  He favors policies “intended” to help the poor, powerless, and discriminated against, but never bothers to find out if the outcomes actually match the intentions. He lauds the Great Society programs of the 1960s because they were designed to help the poor, but does not care if they actually worked.  Having the right intentions makes one virtuous, Zinn seems to suggest, but this is an all-too-common ethical flaw, and it is glaringly on display in People’s History.

Fifth, Zinn commits the Zero-Sum Fallacy.  An unstated assumption of his book is that the rich are intrinsically evil because they must have gotten rich at someone else’s expense.  This explains not only Zinn’s hatred of the rich, but also his hatred of America.  He believes that since America is rich, it can only have gotten that way by making other nations poor. Economic science has long since debunked the Zero-Sum Fallacy, but Zinn still clings to it, and his history suffers as a result.
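A toy model makes the positive-sum point concrete. The countries, goods, and labor costs below are invented for illustration (this is the standard Ricardian comparative-advantage exercise, not anything drawn from Zinn's book): when each party specializes in what it produces at lower relative cost, total output rises, so one side's gain need not be the other's loss.

```python
# Illustrative toy model (all numbers invented): two countries, two goods,
# showing that specialization and trade are positive-sum.

hours_per_unit = {
    "A": {"cloth": 1, "wheat": 3},  # A is relatively better at cloth
    "B": {"cloth": 4, "wheat": 2},  # B is relatively better at wheat
}
LABOR = 12  # labor hours available to each country

def output(country: str, cloth_hours: float) -> tuple[float, float]:
    """(cloth, wheat) produced if `cloth_hours` go to cloth and the rest to wheat."""
    h = hours_per_unit[country]
    return cloth_hours / h["cloth"], (LABOR - cloth_hours) / h["wheat"]

def totals(a: tuple[float, float], b: tuple[float, float]) -> tuple[float, float]:
    return a[0] + b[0], a[1] + b[1]

# Autarky: each country splits its labor evenly between the two goods.
print("Autarky totals (cloth, wheat):", totals(output("A", 6), output("B", 6)))

# Specialization: A produces only cloth, B only wheat; trade divides the surplus.
print("Trade totals   (cloth, wheat):", totals(output("A", 12), output("B", 0)))
```

With these numbers, splitting labor evenly yields 7.5 units of cloth and 5 of wheat in total, while specialization yields 12 and 6: more of both goods, to be divided between the parties by trade.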

Sixth, and finally, the Zero-Sum Fallacy leads Zinn to the fallacy of “Might Makes Wrong.” Zinn’s People’s History is a simplistic Manichean tale of powerful villains fighting powerless heroes, in which the weaker party is always right and the stronger party is always wrong (again, this is why the actions of America, the most powerful nation in history, are almost always evil in Zinn’s mind).  All humans possess virtuous and malicious tendencies, and belonging to a particular group or class does not give anyone a free pass to goodness or evil.  Might does not make right, but neither does might make wrong.

I've long said that bad history is worse than no history at all.  For the above reasons, Zinn's book is bad history and if people have a choice between reading Zinn’s People’s History or reading no U.S. history, they should do the latter. 

Fostering Intellectual Curiosity in Vocationally Minded Future Teachers
By Timothy L. Simpson on May 20, 2011

How do you introduce and encourage theoretical pursuits to vocationally minded future teachers? This is a central and vexing question for me each semester. I teach foundations of education to undergraduates in the College of Education. For those not familiar with “foundations” of education, it is the study of the philosophical, historical, economic, political and cultural contexts of education. Its goal is to produce students who understand these dimensions and their impact on the aims, practices and policies of education. In my current course, I begin with the Puritans, discuss Jefferson and the founding of the United States of America, link to the Common School Era and the Progressive Era, and move forward in the history of US education to include the No Child Left Behind Act of 2001. (See my current Foundations of Education syllabus here. To be changed in Fall 2011.) I view my responsibility as raising questions and guiding students to reflect on their unexamined opinions about education and teaching.

My efforts to execute this responsibility, however, face obstacles from several different sources. First, my students. I wish that my students, future teachers mind you, arrived in class with intellectual curiosity and wonder and were open to the “big” questions regarding their future vocation. The reality is that most of my students look upon teaching as a job and the College of Education as another kind of vocational school. Thus, what they want is practical advice. “How do I get Johnny to sit down?” “How do I teach Mary addition?” These are vital questions for any future teacher, but they miss a larger one: “Why are we in school to begin with?” It is to this larger question that I want to draw students, and I provide them Aristotle, Jefferson, Du Bois, Van Doren, Dewey and others to stimulate such an intellectual pursuit, but too often (not always) it leaves students asking, “Why do I need to know this theory or this person? I cannot use it to teach.”

Unfortunately, my college and institution (like probably most in the United States) do little to support my efforts. My class is one of only two education courses students take that I would consider primarily “theoretical” (the other being Educational Psychology, which deals with theories of learning). Aside from a handful of content courses, the majority of education courses are methods, or “how-to-teach,” courses. Thus, my own college and institution support a kind of vocational orientation in students. My college and institution, however, are simply following the guidelines provided by state education agencies, which also privilege a “practitioner” approach to producing teachers. In a recent reform of the Master of Arts in Education, Kentucky (which is not alone in this) created a “Teacher Leader Master’s” program and required all state colleges and universities in Kentucky to adopt a set of principles for the program. These principles favor courses and programs designed to have immediate, practical results in the classroom without requiring a broader reflection on the aims of education or what constitutes an “educated person.” (To my college’s credit, my colleague and I voiced concern over this absence, argued for the need for a foundations of education course, and won approval for a core course in the program.)

So, within this vocationally oriented context, how do you present theoretical questions and instill an intellectual curiosity that goes beyond mere practicality? I believe that my students come to class with preconceived ideas about education and teaching. It is my responsibility to raise those preconceived ideas up for reflection and discussion. Throughout the course, I attempt to raise fundamental questions of education, such as “What is education?,” “What is the purpose of education?,” “What is an educated person?,” “What is a citizen?,” “How can we produce an educated person and a good citizen?,” and “What should we learn and how?”  We use primary sources, such as the Declaration of Independence, Jefferson’s Notes on the State of Virginia, Du Bois’s The Souls of Black Folk, and Dewey’s My Pedagogic Creed, among many others, to generate these fundamental questions.

To this point in my career, I’ve had moderate success. A few students each semester understand the grave importance of these fundamental questions and are changed as a result. But, like perhaps all teachers, I want more. I want more students to feel the weight of these fundamental questions and their potential impact on their lives and future vocation. It really does matter to your life and your future educational policy and practice what you believe, for example, an educated person is. To promote a greater sense of urgency and passion in my future teachers, I am taking a new direction in terms of readings, but not in terms of expectations.  I plan to begin with Richard Mitchell’s The Gift of Fire to establish a sense of seriousness in the course; these questions really do matter to our lives. Then I will set up a conversation between traditional and progressive approaches to education through reading Plato’s Republic and John Dewey’s Experience and Education. (Is there really any better book than the Republic to establish the sense of importance regarding fundamental questions of how we should live?) We will then turn to Dewey’s critics, such as William Bagley, Robert M. Hutchins, and I. L. Kandel, and finish the course with William Kilpatrick’s Why Johnny Can’t Tell Right from Wrong, a severe indictment of the contemporary teaching of moral education. My hope is that these readings may better initiate students into the world of intellectual curiosity and assist them in understanding the importance of these questions to their own lives and future profession. I assume that the ISI Summer Institute will assist me in this endeavor, and I look forward to learning a great deal from other scholars. I certainly seek other suggestions, recommendations, and advice on how to achieve this critical goal for my students and the students of these future teachers.
