Saturday, June 26, 2004
Nonunanimous and less-than-12-member criminal juries:
There's been a lot of discussion about the implications of Blakely v. Washington (decided last Thursday) for various sentencing schemes; I have nothing to add to that. But I did want to flag one issue: What does Justice Scalia's opinion suggest about nonunanimous and less-than-12-member criminal juries?
When most people think of the right to trial by criminal jury, I suspect they think of unanimous 12-member juries. But the Supreme Court has held that the Constitution generally does not require either a 12-member jury, or that the 12-member jury be unanimous. (Smaller 6-member juries do have to be unanimous.) Oddly, the unanimity requirement has been applied to federal juries, because of a one-Justice concurrence by Justice Powell in the early 1970s. (This is one of the few ways in which the Bill of Rights has been applied differently to states via the Fourteenth Amendment than to the federal government.) For the key cases on this, see Apodaca v. Oregon, Johnson v. Louisiana, and Williams v. Florida; the nonunanimous jury cases were 5-4 (Justices Douglas, Brennan, Stewart, and Marshall dissenting), and the less-than-12-member jury case was 8-1 (Justice Marshall dissenting). In practice, as I understand it, the unanimity requirement is the norm in nearly all states, and most states use twelve-member juries (at least in fairly serious cases). But according to the Court's precedents, that's not a constitutional command.
Justice Scalia's majority opinion in Blakely, though, twice quotes Sir William Blackstone's 1765 formulation of the jury trial right as providing for "the unanimous suffrage of twelve of his equals and neighbours." Both times, the opinion mentions this while discussing what the Jury Trial Clause of the Sixth Amendment (incorporated against the states via the Fourteenth Amendment) commands.
Now the Court is of course only talking about the scope of the jury trial guarantee -- the extent to which it applies to sentencing factors -- and not the size or voting requirements of the jury. Many or all of the Justices in the majority might not have seriously considered whether the "unanimous suffrage of twelve" requirement really should be a constitutional command.
Still, the opinion, especially its references to Blackstone, does stress the importance of the guarantee's original meaning, and does take the view that the Sixth Amendment generally constitutionalizes the common-law jury trial right. This suggests that at least some Justices -- and perhaps a majority -- may be willing to revisit the nonunanimous jury issue (which, as I mentioned, was 5-4 when the Court decided the matter, and on which a change of course would affect only a few states) or even the jury size issue.
Incidentally, for whatever it's worth, at least one news account (by David Savage in the L.A. Times) also thought the unanimity reference was important, though without discussing the details I mention above.
Many thanks to the guest-bloggers:
Many thanks to Cass Sunstein, Glen Whitman, and Neal Whitman for guest-blogging while I was out of town — I much appreciated their posts, and I hope you did, too. Please check out Glen's and Neal's own blog, Agoraphilia. (I'd recommend Cass's blog, too, except he's one of the few law professors who still don't have one . . . .)
Blogging for Monday and Tuesday:
I got over 50 responses to my Jews and "Jewish People" post; I hope to post some brief thoughts on the subject Tuesday (though unfortunately I won't be able to respond individually to each message).
Several people also pointed out that the author of the Slate Kerryisms column has posted a response to various criticisms of the column, including my own criticisms. I'll blog about that Tuesday, too. The response was posted last Tuesday, but I was out of town at a conference from Wednesday to Friday, and thus couldn't react more promptly.
The Supreme Court will be announcing its decisions on Monday, including possibly the military detention cases (Hamdi and Padilla), the Guantanamo cases, and the latest cyberporn case (Ashcroft v. ACLU II), though it's possible that it will save some of these for later in the week. I hope to blog quite a bit about them Monday, which is why I'm saving the other items until Tuesday. (If the Court decides to announce some of the cases Tuesday, I'll bump the other stuff until Wednesday or later.)
Judge Calabresi apologizes:
InstaPundit has excerpts from some news articles on this. I think an apology was indeed warranted (see here and here).
But as I told the New York Sun reporter who first broke the story, I think this is all that can be done and all that needs to be done here. It would surely have been much better if Judge Calabresi hadn't made his statements; but now that they have been made, an apology is the only sensible remedy.
Some people say that he should recuse himself from various cases involving the Bush Administration, but I doubt that this is really called for: The statements didn't tell us more than we already know about many judges, which is that they really don't like George W. Bush -- if we recused all judges who had strong political sentiments from all politically charged cases, we wouldn't have a lot of judges left. Nor do I think that the comments were egregious enough to warrant some more formal reprimand from the court. The apology was thus both the right thing to do, and the tactically smart thing to do. You can accomplish a lot with a prompt apology.
Incidentally, if anyone has a pointer to a copy of the letter of apology, I'd love to post it; I doubt it has anything much more enlightening than what the newspapers say, but I think it's generally good to have the original documents as well as the media-filtered versions.
FDR's Incomplete Success
With or without written constitutions, all nations have constitutive commitments, some codified in some form, others just widely understood as such. Randy Barnett asks, rightly, in what sense these are "commitments" and "constitutive." They're commitments in the sense that they're taken (politically, that is) to be binding, and not to be subject to change with the political winds. The United States is firmly committed to some kind of social security program, in a way that it's not committed to particular appropriations, or the Toxic Substances Control Act, or Head Start, or Americorps, or the Superfund statute. These commitments are "constitutive" in the sense that they help define, and hence constitute, the nation's self-understandings. The self-understanding of the United States would not allow it to accept a proposal to nationalize the automobile industry or to repeal the laws forbidding racial discrimination by private employers. (If you don't like these examples, choose your own; there are many other possibilities.)
The idea of constitutive commitments doesn't serve an argumentative purpose, so far as I can tell, but it might help illuminate a nation's political and even legal culture as it changes over time. It might also help clarify what particular debates are really about. In 1970, the Aid to Families With Dependent Children (AFDC) program lacked constitutional status, but it was a constitutive commitment, and some judicial decisions appeared to be influenced by that fact (the decisions involved statutory interpretation, not constitutional law). By 1990, the AFDC program was a mere policy. Changes in the other direction are also common; the Americans With Disabilities Act has probably moved, in a short time, into the category of the constitutive commitment (not its particular provisions, but the general idea).
FDR wanted the Second Bill of Rights to stand as a constitutive commitment. While it would be wrong to say he failed, he didn't really succeed. The nation is committed, at least in principle, to some of the rights he listed (e.g., the right to be free from domination by monopolies) but not to others (including my least favorite, the farmers' rights provision, which fits uneasily with the right to be free from domination by monopolies).
Thanks much to Eugene for the forum, and to emailers for the many excellent comments, criticisms, and suggestions; I've learned a lot.
Friday, June 25, 2004
For longer than I care to admit, I've been working on an economic theory of suicide. (If you actually read the paper, I encourage you to skip the empirical section, which still needs lots of work.) Other economists have tackled the issue of suicide, though the field is still dominated by psychologists and sociologists.
Oddly, no economic papers that I know of directly address the question of how suicidal persons choose their suicide methods (though some have addressed it indirectly, by modeling suicidal persons as choosing their probability of death). Yet the choice-of-methods question lies at the center of public policy debates. For instance, would restricting access to guns reduce the number of suicides? Gun control advocates usually say, "yes, obviously," while opponents say, "no, because people who want to kill themselves will just find another way." Who's right? I say neither.
The innovation of my approach is to treat the suicidal person as engaging in a search for suicide opportunities, akin to a job-seeker's search for job opportunities. Suppose a job seeker gets a lousy job offer. Should she take it now, or should she turn it down and continue the search? Naturally, the answer depends on how likely it is that she'll get a better job later. The lower the chances of a good offer later, the more likely she is to settle for the lousy offer now.
Similarly, a suicidal person can be characterized as engaging in a kind of passive search for opportunities to die. Suppose he has the chance to kill himself by a non-preferred method - say, overdosing on pills. But he would prefer to kill himself with a gun (over 70% of male suicides use guns). Whether he's willing to use the pills now depends, among other things, on his chances of getting the opportunity to use a gun later. The lower the chances of a gun later, the more likely he is to use the pills now.
Counter-intuitively, the argument indicates that a policy decreasing the availability of a suicide method (like guns) could actually increase the suicide rate. But to be fair, the model's predicted effect is ambiguous - the suicide rate could go up, go down, or stay the same. On the one hand, suicides increase as people substitute into less preferable but more readily available methods. On the other hand, suicides decrease because they will have fewer opportunities to use their more preferred methods. Which effect predominates? A priori, we can't say. It depends on the strength of the subject's preference for one method over another, the magnitude of the drop in availability in the preferred method, the availability of less preferable methods, and so on.
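The two offsetting effects can be seen in a toy two-period simulation. This is my own illustrative sketch, not the model from the paper; the payoff numbers and the uniform "outside option" are made-up assumptions chosen only to show how the net effect of a restriction can cut either way:

```python
import random

def simulate(p_gun, n=100_000, seed=0):
    """Toy two-period search model of method choice.

    Period 1: every agent has a pill opportunity. She takes it only if
    its (assumed) payoff beats the expected value of waiting.
    Period 2: with probability p_gun a gun opportunity arrives and she
    uses it; otherwise she survives and exits the suicidal state.
    Returns the fraction of agents who die.
    """
    rng = random.Random(seed)
    V_GUN, V_PILLS = 1.0, 0.55   # hypothetical payoffs of the two methods
    deaths = 0
    for _ in range(n):
        # heterogeneous value of staying alive if no gun arrives
        v_wait = rng.uniform(0.0, 1.0)
        expected_wait = p_gun * V_GUN + (1 - p_gun) * v_wait
        if V_PILLS >= expected_wait:
            deaths += 1              # settles for the pills now
        elif rng.random() < p_gun:
            deaths += 1              # waits; a gun opportunity arrives
    return deaths / n
```

With these particular numbers, cutting gun availability from 0.5 to 0.2 leaves the overall rate essentially unchanged (the substitution effect offsets the availability effect almost exactly), while at high availability (0.9) the rate simply tracks availability. The point is only that the direction and size of the net effect depend on the assumed preferences and probabilities, just as the model predicts.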
Past studies have tried to find a statistical connection between availability of guns and suicide rates, with little success. Some show a connection between guns and gun suicides, but few if any show a connection between guns and total suicides. What's going on? The natural response is the old "people who want to kill themselves will find a way" theory. But that theory is implausible, because plenty of evidence shows that suicidal persons have strong preferences over suicide methods. The notion that people will casually switch from one method to another relies on the assumption that they regard different methods as perfect substitutes. I don't buy it.
My theory provides an alternative explanation for the absence of a statistical connection between guns and suicide - an explanation that depends upon, instead of denying, people's preferences about methods. People do prefer some methods over others, but they will expand the set of methods they're willing to use at the margin in response to changes in perceived availability. As a result, any change in availability creates offsetting effects that will at least partially cancel out.
In the future, I hope to blog about the applications of my suicide model to terrorism. But since my blogging stint here at the VC is almost over, you'll just have to visit Agoraphilia to get that part of the story. I want to thank Eugene for the invitation to blog here - it's been great fun, and the feedback has been excellent.
In my posting on almost-recursive acronyms, I noted that the company Cygnus, whose name expands to "Cygnus, Your GNU Support," was not guilty of what I referred to as acronym stacking. This is the name I give to an acronym including a letter that abbreviates a different acronym; as I like to think of it, the first acronym is stacked on top of the second one. The first stacked acronym to catch my attention was the name of an issue-oriented political group called ACT-UP, an acronym for "AIDS Coalition to Unleash Power." Aside from the awkwardness of the phrase unleash power for the sake of having a meaningful acronym, my complaint was that you couldn't tell what the A stood for. Yes, it stood for AIDS, but the A in AIDS stands for acquired. Shouldn't this group more properly be known as AIDSCT-UP? Hard to pronounce, sure, but that's not my problem. If you want to make a clever acronym, you still have to play by the rules; that it's difficult to do is no excuse. It's the same kind of aesthetic that goes for sonnets or haikus. And Cygnus beats ACT-UP in this regard, because its namers were able to create an interesting acronym that respected the acronym of GNU by incorporating it whole into the company name.
That's my prescriptive take on how acronyms should be. From a descriptive standpoint, I'd say that if an acronym (such as AIDS) can be abbreviated by its initial letter, that's just an indication that the word has become so thoroughly ordinary that speakers hardly remember that it's an acronym. I don't know how I'd test this hypothesis, since there are so few stacked acronyms to begin with, but that's my suspicion.
David Price sent me a good example of a stacked acronym, which is also an example of an almost-recursive acronym. He writes:
I was a little surprised to read your post on Volokh this evening, since I'd just been discussing this very kind of acronym a few days ago with one of my fellow interns at the Electronic Frontier Foundation (EFF).
EFF puts on an annual free concert in San Francisco, called the "EFF Freedom Fest." The name of the event, I'm sure you've noticed, can also be abbreviated as "EFF." The 'E' in this acronym expands to "EFF", but *that* "EFF" stands for "Electronic Frontier Foundation."
"EFF" as an abbreviation for "EFF Freedom Fest" is therefore both almost-recursive and a product of acronym-stacking.
And now that we've wandered back onto the subject of recursive acronyms, Jonathan Ichikawa gave another example in a comment: a Dilbert cartoon in which Dilbert and Wally talked about "The TTP Project." This is another example that's interesting for two reasons. One is that it's a recursive acronym with a letter from the middle (i.e., the second T), not from the front, giving rise to the recursion. The other is that it's a redundantly expanded acronym, with the redundancy appearing at both ends. Of course, since it was intended to be a joke, I can't really count it as a real linguistic example, but it was fun nonetheless.
And speaking of "it was fun," I've enjoyed guest blogging here this week. Thank you, Eugene, for the invitation, and Conspiracy readers for your feedback. My future posts will not be appearing on Agoraphilia, but on Literal-Minded ("linguistic commentary from a guy who takes things too literally"), the linguistics-related blog that I'm starting. Cheers!
WORTH TEN SECONDS OF YOUR TIME:
I'm always skeptical about whether petitions - especially online petitions - ever have any effect whatsoever. They're probably even less meaningful than online polls, which is to say, hardly meaningful at all except as a measure of how much publicity they got in certain circles. But at least I agree with this one, which says the presidential debates should be organized by a non-partisan (rather than bipartisan) commission. The aim is to improve the chance of third-party candidates being included and, as Roderick Long puts it, "to make the debates less like press conferences and more like actual debates."
The Real Tragedy of Bill Clinton for Democrats:
Jack Balkin offers an insightful analysis of why Bill Clinton was reviled by so many Republicans:
Clinton understood that the Democrats could get back in the White House if they appealed to parts of the coalition of voters that had elected Ronald Reagan and George H.W. Bush. And so he set out consciously to do that. He fractured the existing winning coalition by producing a combination of economic policies designed to appeal to middle class voters while accepting certain elements of the values agenda that had played so well for the Republicans. He focused on issues like crime and welfare, emphasized his populist roots and religious sensibilities, while at the same time maintaining strong ties to secularism, feminism, and civil rights. In this way Clinton threatened to create a new winning coalition by borrowing the rhetoric of his political opponents and becoming a more "Republican version" of a Democrat.
My purpose here is not to enter into a discussion of why some conservatives hated Clinton (and I will refuse to do so). Instead, I note Jack's observation because it is ALSO the reason why Clinton was reviled by the more left-wing constituents of the Democratic base. A Democrat with less personal charisma would have been abandoned by his alienated base, but they were forced to like it or lump it.
Indeed, what made Bill Clinton a potentially transformative president like FDR and Reagan (and so threatening to Republicans) was precisely his personal appeal to the electorate. Like FDR and Reagan, he was simply not beholden to the disparate elements of his own party coalition for his electoral success. They were beholden to him for delivering them to the White House, and potentially beyond (and many in his party resented him for it at the time).
All this is why I believe Clinton's personal weaknesses were so tragic, not for him but for his party, and why he was not the transformative figure he might well have been. When Clinton's character weaknesses crossed the line to leave him both politically and legally vulnerable to his political opponents, he then was forced to abandon this centrist strategy of triangulation and to rely on his base to save his presidency, thereby ending, at least for now, any transformative move by Democrats towards the center.
Even if you deny that he moved left, it is undeniable that the party, after Clinton, abandoned the strategy that Jack so accurately describes in the above quote, thereby preventing it from exploiting the strategy that Clinton had proved so electorally effective. Had Clinton not been so slippery about the truth, or had he kept his sexual activities outside the vicinity of workplace harassment (about which he could be deposed) or sexual subordination, he would surely have remade the Democratic Party in his image.
As it is, the Democrats have now reverted to their pre-Clinton pattern, with an extra-strength dose of 60's antiwar revivalism for good measure. Note that Hillary is not participating in this tack to the left in foreign policy, and the Democratic base is giving her the same pass they gave Bill, albeit with more enthusiasm perhaps because she is not dissing them on domestic policy and they trust her more. Query: was Hillary really to the left of Bill, as she appeared to her supporters and detractors alike, or was she merely the "good cop" to Bill's "bad cop" to Democrats, and vice versa to Republicans? (What a team they made!)
Now there is one huge counterargument to Jack's story and mine: Bill and Hillary's early commitment to a radical reform of the health care industry. Now I know it was supposedly short of a "single payer system" and therefore supposed to be "moderate." But it was a major policy shift to the left and, regardless of how reasonable it may have appeared tactically ex ante, it was largely responsible ex post for the Republican takeover of the House of Representatives in '94, long before the Clinton personal scandals gained any real traction. I am not sure what to make of this twist. Perhaps Clinton's triangulation strategy only became honed and effective with someone like Newt Gingrich as his foil. In other words, perhaps Bill Clinton's strength was in tactics rather than strategy. I am open to suggestion about this.
Regardless of how the first two years of his administration are interpreted, however, the tragedy remains the same. The difference between Bill Clinton and both Al Gore and John Kerry is that Clinton's political skills transcended his party's coalition, whereas (to date) both Gore and Kerry are entirely dependent upon that coalition and therefore not free to maneuver in a manner that would capture the broad middle of the electorate.
If I am right about all this, then two further mysteries remain. Why did the Democrats, including the left, close ranks around Clinton when his ouster would have meant the elevation of Al Gore to the presidency, clearly giving Democrats a substantial edge going into the 2000 election? Clinton's claim that impeachment was a partisan coup has always neglected how it was against the interests of Republicans to pursue his removal from office successfully--and perhaps this is why they failed in the end to convict him, though this outcome was hardly so likely that Republicans could count on it and it was Republican "moderates" who saved him. Perhaps Democrats are so viscerally averse to permitting any Republican victories that they would cut off their political noses to spite their foes? I really do not know.
The second mystery is why Democrats, especially those on the left, remain so loyal and affectionate towards the man who either betrayed their principles (when triangulating) or betrayed their chance to form a winning coalition (by his solely personal and sexist self-aggrandizement). Why is he not in disgrace with them, just as Gingrich lacks standing with the most partisan Republicans for having "blown it" big time? Is Clinton simply forever "The Man Who Shot Liberty Valance" by winning back the White House while driving conservatives nuts? Or is there some underlying schism in the Democrat psyche? Again, I really do not know.
Perhaps Jack does.
DEPENDENCY? IT DEPENDS ON YOUR DEFINITION:
For the coming school year, home-schooled debaters will be tackling the resolution, "That the United States should change its energy policy to substantially reduce its dependence on foreign oil." I'll be speaking at a home schoolers' debate camp later this summer, which gives me an incentive to become better informed about this important subject.
But for now, I'll present my initial reaction to the wording of the resolution itself - specifically, the ambiguity of the word "dependence." There are two obvious ways of defining dependence: first, in terms of the gross amount of oil we import from abroad (or from a particular region); or second, in terms of the fraction of our oil that we import from abroad (or from a particular region).
The difference matters, because some policies will decrease one form of dependence while increasing the other. Suppose, for instance, that a new energy policy succeeded in substantially reducing American demand for oil. Other things equal, the outcome would be a reduction in the world price of oil. As a result, the marginal oil wells - those that were profitable at the higher price but unprofitable at the lower price - would have to shut down. And where are such oil wells located? Mostly outside the Middle East, in places like the United States and the North Sea. Oil is incredibly cheap to produce in the Middle East, for a variety of reasons. So in the new equilibrium, a larger fraction of the world's oil production would take place there. America would be importing a smaller amount of oil (reducing our dependency under definition 1), but a larger fraction of the oil we still consumed would come from abroad, and from the Middle East specifically (increasing our dependency under definition 2).
So which definition should we care about? There's not a clear answer; it depends on the real goal. Suppose that our goal is to deprive Saudi Arabia and other terrorist-breeding states of oil profits. The proposed policy would decrease the total profits of the oil industry (because of the lower price), but Middle Eastern countries would sell a larger share of it. Put simply, the Middle East would get a larger slice of a smaller pie, with an ambiguous overall effect on profits (at least based on theory alone - better information could possibly allow a more precise prediction). Remember that the next time someone tells you that driving an SUV helps fund the terrorists.
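The divergence between the two definitions can be made concrete with a quick calculation. The figures below are hypothetical round numbers of my own choosing, not real data; they just instantiate the scenario above, where demand falls and the lower world price shuts down high-cost (mostly non-Middle-East) wells:

```python
def dependence(consumption, domestic_production):
    """Return (gross_imports, import_share) for a given year.

    Quantities are in millions of barrels per day. Gross imports
    correspond to definition 1; the import share to definition 2.
    """
    imports = consumption - domestic_production
    return imports, imports / consumption

# Before the policy: consumption 20 mb/d, domestic output 9 mb/d.
before = dependence(20.0, 9.0)   # imports 11 mb/d, share 55%

# After: demand falls to 16 mb/d; the lower price closes marginal
# domestic wells, so domestic output falls to 7 mb/d.
after = dependence(16.0, 7.0)    # imports 9 mb/d, share 56.25%
```

Gross imports fall from 11 to 9 mb/d (dependence down under definition 1), while the imported share rises from 55% to 56.25% (dependence up under definition 2) - exactly the possibility the post describes.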
(Incidentally, this post is not intended as a criticism of the debate resolution. Ambiguity of this kind can be interesting fodder for debate.)
Why do "Constitutive Commitments" Matter?
I believe that Cass is describing a very real social phenomenon. There surely are, and have always been, some positions or "norms" that are sufficiently "beyond the mainstream" as to disqualify someone from national office. These norms are "effectively binding" in the sense that one publicly disavows them at one's peril. While these norms remain stable over time, they can also be contested and eventually supplanted. All this sounds perfectly reasonable and plausible as a description of social norms, but I am still left wondering:
(a) In what sense is such a norm accurately described as a "commitment"?
(b) In what sense is such a norm "constitutive"?
(c) What argumentative purpose is served by invoking the concept of "constitutive commitment"? That is, what does it add to otherwise familiar normative or descriptive claims about rights, law, justice, etc.?
Clarifying these three matters would go a long way to helping me better understand the normative and/or descriptive claim Cass is making about the Second Bill of Rights.
Update: Larry Solum offers his take on constitutive commitments here.
Taking FDR Seriously
Thanks to Randy Barnett, and assorted emailers, for excellent questions and comments about constitutive commitments and FDR's Second Bill of Rights. Constitutive commitments first: They're not part of the formal constitution and they're certainly not for judicial enforcement. They nonetheless matter, because they have sufficiently wide and deep political support that they're effectively binding -- unless and until there's a major transformation in public values.
It would be nice to have a clear sense of necessary and sufficient conditions for constitutive commitments, but lacking these, let's make a rough first cut: A constitutive commitment is in place if over a significant period of time, a presidential candidate could not seriously question that commitment without essentially disqualifying himself. This means that the commitment must have both wide and deep support (and not just among academics, elites, or the media; a strong political majority is needed). We can imagine hard intermediate cases and the definition leaves ambiguities; but the prohibition on racial discrimination in employment, the antitrust laws, and some kind of social security program are evident examples -- and so too, I think, with a ban on the nationalization of industries and on federal taxes above a certain rate (e.g., Kennedy-era levels). Any nation will have some constitutive commitments that some reasonable people will reject; and reasonable people sometimes get those commitments to change over time.
On the sense in which FDR meant his Second Bill to contain "rights": He wasn't much of a theorist (Trotsky famously criticized him for just that reason; "Your President abhors 'systems' and 'generalities'"), and he saw (positive, in the sense of legally protected) rights as instruments for protecting the most important human interests. Randy asks whether the Second Bill should be seen as protecting "natural rights." To say the least, the natural rights tradition has multiple strands; a good contemporary version is elaborated by Amartya Sen (see his Development As Freedom). A possible position: If we believe that human beings have certain rights by virtue of their humanity, it's plausible to say that those rights include a decent chance to achieve well-being by their own lights and also a minimal level of security if, for one or another reason (e.g., disability, illness, atrocious luck), that chance is not enough. Roosevelt's focus was on decent opportunities and minimal security, and while his Second Bill of Rights was an innovation, he can claim clear antecedents in Montesquieu, Blackstone, and even Madison.
New Civil Justice Reform Blog:
The Manhattan Institute has launched a new civil justice reform blog, PointOfLaw.com. I will be a contributor, posting occasionally on issues such as the admissibility of expert testimony (while I'm on-topic, if your law firm doesn't have a copy of The New Wigmore: Expert Evidence, now is the time to correct that oversight). My posts on PointOfLaw are likely to be longer, more complex, and perhaps of less general interest than my V.C. posts. Here's an excerpt from the MI press release:
PointofLaw.com is a web magazine sponsored by the Center for Legal Policy at the Manhattan Institute that brings together information and opinion on the U.S. litigation system. Focusing on America's civil justice system, the website includes original discussions featuring some of the nation's top legal scholars, an ongoing forum on liability issues, a bibliography of important books and articles, and links to topical legal news stories. There is no subscription fee.
It is no secret that America is an increasingly litigious place. (Its tort liability system, for example, consumes more than two percent of its gross domestic product, a higher percentage than in any other developed nation.) And as the role of civil justice grows, so does the demand for reliable, timely information and opinion about it.
PointofLaw.com intends to satisfy that demand. Aimed both at experts on civil justice and at newcomers to the field, the magazine incorporates two major components:
Forum - continually updated blog by a group of distinguished contributors, containing thoughts, opinions, news and more
Library - archives of articles, books, and news items, selected and recommended by the editors of PointofLaw.com—an invaluable resource for learning about the civil-justice system
Both components can be organized by topic, so that researchers, journalists, students, and policy-makers can quickly find information on the following issues and others:
Another highlight is the monthly Featured Discussion, in which two experts exchange views on a topic of interest. Next month, Walter Olson of the Manhattan Institute and Michael Krauss of George Mason University School of Law will initiate this series by discussing federal legislation to stop lawsuits against firearms manufacturers.
The magazine's editors are Walter Olson and James Copland, both of the Manhattan Institute. Bloggers and contributors include David Bernstein, Ted Frank, Stephen Bainbridge, Lester Brickman, Michael DeBow, Richard Epstein, Michael Krauss, and Richard Painter.
ECONOMY AND CULTURE:
Last week, I attended a Liberty Fund conference on "Liberty & Diversity," where I met historian David Beito and conspirator David Bernstein, among many others. Interestingly, I was the only economist in the bunch, which may have given me a peculiar perspective (it usually does). These are some of my thoughts, which I expressed at the conference, on the question of whether the interests of cultural groups might justify some kind of state involvement in meeting them. (Just to be clear, I don't claim these thoughts are terribly original; this has been Tyler Cowen's bailiwick for quite a long time.)
Many forms of cultural involvement have the form of an investment, since they involve some personal expenditure of time, money, or effort in return for the benefits of group membership. Examples include becoming fluent in a language that's not in everyday use, learning how to perform a certain dance or make a particular food, or relocating in order to live in closer proximity to other members of the group.
The simple laissez-faire argument for keeping the state uninvolved is that individuals can decide for themselves whether the benefits of cultural involvement are great enough to justify the costs. If a culture withers, it's because individuals aren't gaining enough from the culture to make it worth their while - in which case the culture should wither, just as unwanted consumer products get discontinued.
The problem with that argument is that there may be some cultural products that involve positive externalities, perhaps even rising to the level of public goods (in the economic sense of that term). Suppose my investment in a cultural product - say, relocating to a certain neighborhood - produces benefits to everyone in the group, including myself. But if my private benefit from relocating is smaller than my private cost, I will choose not to contribute, even though the total benefits to everyone involved exceed my private cost. I might choose to free-ride by visiting the neighborhood from time to time, enjoying its ethnic ambience, and then heading back home. If too many people follow their private incentives in this way, the community never develops (or doesn't develop as much as it could), and we are all worse off.
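To make the free-rider logic concrete, here is a toy payoff calculation; all numbers are hypothetical, chosen purely for illustration:

```python
# Toy illustration of the free-rider problem described above.
# Hypothetical numbers: one member's "investment" (say, relocating)
# costs that member 10, but yields a benefit of 2 to each of the
# 10 group members, including the investor.

N = 10            # group size
cost = 10         # private cost of investing in the cultural good
benefit_each = 2  # benefit each member receives from one person's investment

total_benefit = benefit_each * N  # social value of one investment: 20
private_benefit = benefit_each    # the investor's own share: 2

# Socially, the investment is worth making...
socially_worthwhile = total_benefit > cost      # 20 > 10
# ...but no self-interested individual will make it.
privately_worthwhile = private_benefit > cost   # 2 > 10

print(socially_worthwhile, privately_worthwhile)  # True False
```

Every member reasons the same way, so no one invests, even though everyone would be better off if everyone did.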
The standard ECON 101 remedy for a public good or positive externality problem is to propose some form of state support, such as subsidies for cultural activities (perhaps in the form of a rent discount). But taking ECON 102 reveals some important objections to the standard remedy:
First, there is the problem of distinguishing between free riders and honest hold-outs. If I refuse to contribute to the community, maybe it's because I'm selfishly collecting the benefits while dodging the costs - or maybe I don't actually value the community that much (or at all). Many people seek to escape their cultural backgrounds, partially or completely, because they actually prefer aspects of other cultures. In some cases, the alleged public good might even be regarded as a "public bad" by subsets of the group (e.g., women in a male-dominated culture). Also, the proposed policy might burden people who were never members of the culture in the first place; this would certainly be true of a subsidy financed out of general government revenues.
Second, it's not true (as some textbooks claim) that public goods never get provided privately. People can be incredibly clever in finding ways to provide public goods without coercion. Probably the best known example is the financing of broadcast TV through advertising. Private housing developments, which create a means of excluding non-contributors, transform public goods into "club goods." The open-source software movement (which, admittedly, I know little about) is arguably yet another example of private provision of public goods. The absence of state intervention need not spell the death of a cultural public good, as long as people are clever enough to search for other institutional devices.
Third, there's the public choice objection: any potential market failure in production of cultural goods has to be weighed against potential government failure if the state becomes involved. There is no reason to think state actors will be neutral arbiters of the value of cultural goods; rather, they will be influenced by the all-too-familiar phenomenon of special interest rent-seeking. Even if a cultural practice does not meet the criteria for being a true public good (say, because non-contributors can be excluded from the benefits), that will not prevent group members from lobbying for subsidies and special protections at the expense of the rest of society, including other cultural groups.
INACTIVITY IN HISTORICAL PERSPECTIVE:
Apropos of my post on the virtue of inactivity in politics, reader Carl Edman emailed me the following quotation:
"[Titus Antoninus Pius's] reign is marked by the rare advantage of furnishing very few materials for history; which is, indeed, little more than the register of the crimes, follies, and misfortunes of mankind."
— Edward Gibbon, The Decline and Fall of the Roman Empire, vol I., ch. 3, pt. 2
Watch my Backformation
Someone I know once told me,
(1) "I like to peoplewatch."
In May 2003, a gossip column had a quotation from Ashton Kutcher, concerning a party that George W. Bush's daughters had attended at his place in 2001. Kutcher said,
(2) "The Bushes were underage drinking at my house."
These two sentences caught my ear because of the verbs in them: peoplewatch and underage drink. These verbs are a particular variety of one of my favorite word-formation processes, backformation. Backformation is the reverse of adding an affix (i.e., prefix or suffix) to a word (or, if we're not just talking about English, doing any kind of derivational operation on a word, whether it's affixing, or repeating a portion of the word, or changing the vowels, etc.). Instead of the affixed word coming into existence after the original word, the affixed one is the original word, and the un-affixed version comes later.
For example, consider first an ordinary case of derivation: the adjective sexual. This is derived from the noun sex by adding the suffix -ual. Now consider the adjective homosexual, and complete the following old-style SAT analogy:
sex : sexual :: ? : homosexual
The answer is homosex, a well-attested noun meaning "sexual activity between members of the same sex." The adjective came first, not the noun, though in a hundred years I'm guessing most English speakers will assume it was the other way around. (Just as they do with the verb edit, which actually entered the language after the noun editor.)
Examples (1) and (2) above are a special case of backformation that I've started to notice. They are backformations resulting from this sequence of events:
First, a noun form of the verb, i.e. gerund or agentive noun, is combined with some other word to make a compound word. The other word could be a noun that would ordinarily appear as the verb's direct object, as in peoplewatching (gerund) or peoplewatcher (agentive). Or the other word could be an adjective modifying the noun, as in underage drinker.
Second, a reanalysis of the compound word occurs, such that [A [B C]] is reparsed as [[A B] C]. Continuing with the earlier examples, [people [watch er]] becomes [[people watch] er], and [underage [drinker]] becomes [[underage drink] er].
Third, the actual backformation itself:
watch : watcher :: ? : [[people watch] er]
drink : drinker :: ? : [[underage drink] er]
And now we end up with the new verbs people-watch and underage drink, as evidenced by the fact that we see them in infinitives (to peoplewatch) and finite verb forms (that is, verb forms with a tense, such as past progressive, as in were underage drinking).
My enthusiasm was not shared when I brought these examples up in an introductory linguistics class I taught last year. One student dared to dispute my claim that underage drink was being used as a verb in (2). Yeah, yeah, she said, it's appearing in a finite verb form all right, but the thing is, in English gerunds and progressive participles sound the same! Ashton Kutcher might have said, "They were underage drinking" just because he's familiar with the term "underage drinking" and doesn't know or care about the difference between a gerund and a participle. He would never, this student argued, say something like "She underage drinks all the time." In short, underage drink may be a verb in a technical sense, but it's not a verb with full rights and privileges of ordinary verbs. She had a good point.
So just now I did a couple of Google searches to see what I found for simple past and 3rd person singular present forms of underage drink and peoplewatch. I found fewer than 50 hits for underage drank, underage drinks, and peoplewatched; zero for peoplewatches. (Compare this to 190K for underage drinking and 450K for peoplewatching.) For some people, at least, these new verbs are starting to do more of the things that verbs can do, but these words have a long way to go before they totally fit in.
My final thought on these backformations is that there is an even more special subclass of them: those whose source verb is transitive. Peoplewatch is an example. Aside from seeing these verbs in infinitives or in tensed forms, I've found one more thing that (I claim) immediately gives away that a backformation has occurred: It has to have happened if a new direct object can appear after the verb, taking the place that used to be reserved for the direct object that now appears in front. Here are a couple of examples:
- We can fact-check your ass! (heard in several places)
- We don't just cherrypick the best ones. (heard on a radio commercial)
I hypothesize that any time you see this kind of competing direct-object structure, you will find that the backformed verb has achieved full verb status, and can be used with all tense/person/number suffixes.
Thursday, June 24, 2004
Affirmative Action at Harvard:
Via Matthew Yglesias, this interesting N.Y. Times story discussing the fact that a disproportionately large percentage of "African American" students at Harvard (and other elite colleges) are actually of recent African or West Indian heritage, or are of mixed race.
One interesting aspect of the article is that those who argue that Harvard should be taking more descendants of American slaves do so on the grounds that Harvard has a special moral obligation to help such individuals. The president of Amherst College, for example, is quoted as saying that by not admitting black students with predominantly American roots, "colleges are missing an 'opportunity to correct a past injustice' and depriving their campuses 'of voices that are particular to being African-American, with all the historical disadvantages that that entails.'"
One major problem with this argument, not addressed in the article, is that while racial preferences for "diversity" purposes are legal under Grutter v. Bollinger, racial preferences for remedial purposes are not. And it would be hard to argue that, say, the fifth African American student from Harlem adds more "diversity" to a class than the first recent Gambian immigrant. Any college (are you listening, president of Amherst?) that preferred "American" black students over black immigrants would likely be violating the law.
This is just one more example of the perverse consequences of the "diversity" argument for racial preferences. (Another perversity is that a white immigrant from a wealthy family in Peru gets a "Latino" preference, while a poor kid from Appalachia or from a poor Irish immigrant family does not.) While not everyone, to say the least, agrees with the remedial rationale for affirmative action/racial preferences, it is at least coherent, and, as any reasonable preference policy should, suggests targeting preferences at those whose ancestors suffered most, and whose communities currently need the most help. The Supreme Court, if it's going to uphold the legality of racial preferences (and I've argued in You Can't Say That! that it's obligated to do so, at least for certain ideologically driven private schools), should abandon the diversity rationale in favor of the remedial rationale.
Taking "Constitutive Commitments" Seriously:
In his recent post, Cass invokes the concept of a "constitutive commitment," by which he means something more than "a policy" and something less than "a constitutional requirement." I take it that he would distinguish between constitutive commitments and Ackermanian ethereal unwritten constitutional amendments adopted during so-called "constitutional moments." But (rousing myself reluctantly from a well-earned bout of lethargy) I am still wondering about the concept.
Cass writes, "These rights are "constitutive" in the sense that they help to create, or to constitute, a society's basic values." In a pluralist or diverse society (what Hayek called the "Great Society"), however, this brief description of the basic idea gives rise to some obvious questions: who exactly is constituted by which commitments? The claim is probably not that everyone has made this commitment, or is constituted by it, but then who? A majority? A supermajority? A critical mass, whether a majority or not? An intellectual elite? The proletariat?
Assuming one of these is specified, in what manner does the "commitment," whatever it may be, constitute the relevant "us"? Are we somehow defined as individuals by a commitment external to us, or is the group to which we belong so defined, and "defined" in what sense?
As a matter of political science, how does one determine the existence of such a commitment? Opinion polls? Opinion polls over time? In other words, if anything of importance turns on the existence of such a commitment (another question I raise below), what reliable means is there for ascertaining its existence, breadth and content?
In short, how is the concept of "constitutive commitment" of an entire "society" less epistemically and metaphysically problematic than the much-maligned collective "intentions of the framers"?
The serious problems with discerning collective group intentions are well-known. I am wondering how constituent commitments are any easier either to identify or defend as real.
Then there is the question of how uncontested such a commitment must be in order to count as constitutive. Does it matter that a substantial minority dissents? Does it matter if they do not speak up much any more because, however substantial in numbers, they know they are in the minority and would be pummeled by the majority if they did? (Think of the 30-35% of the Massachusetts electorate who are, gasp, Republicans. Given the stability of the Democrat majority among voters, it is as though these 30-35% do not even exist. They certainly lack all representation at the federal level. Would their silent and/or ignored dissent detract from the "constitutive commitment" of the majority?)
Relatedly, how does one distinguish such a constitutive commitment among the general public in practice from one that "constitutes" the world of elite intellectual opinion makers, such as those in academia or in certain media outlets? Another reason why a member of even a majority of the general public who holds a different view from that of the opinion elite might remain silent is to avoid the very real suffering that can be meted out by this crowd when they are crossed. (Most people, I think, are not as impervious to hostile criticism as, say, Hootie Johnson or Richard Epstein.)
What turns on the existence of a constitutive commitment? That those who share it vote or act in certain ways? That the minority who dissent remain silent or acquiesce in some way? That legislators vote in certain ways? Cass says that one advantage of FDR's constitutive commitment "strategy is that it avoids a role for federal judges" so I assume judicial review is more or less out of the picture, which distinguishes a constitutive commitment from an Ackermanian unwritten amendment. But perhaps I am misconstruing him here. I guess this is the eternal "so what?" faculty workshop question that I normally avoid asking.
Speaking of avoidance, is there any way of avoiding the sense that someone who invokes the concept of constitutive commitment is trying somehow to elevate, privilege or reify his or her own commitments or moral judgments--giving these judgments some higher status in the General Will of the Community, rather than that of a mere moral judgment, or even that of a mere consensus or a majority view?
All this does seem to relate to Cass's invocation of a Second Bill of Rights. When he first posted, I had thought to inquire as to the sense in which these claims are "rights" (but as I am trying to be on vacation for a while, I was content to let the matter pass). I was going to ask whether they are natural rights that belong to persons regardless of whether recognized by government--like the freedom of speech? Or are they positive rights that exist because they are adopted as part of the human laws--like the right to a jury trial? As Madison explained, the original ten amendments included both of these two types, though I have concluded that the Ninth Amendment refers only to natural rights. But it now appears that Cass means to claim Roosevelt's list, or some portion thereof, as "constitutive commitments," which are neither natural rights nor positive rights, thereby making a more precise account of this type of animal (or is it vegetable?) important to assessing the exact nature of his current argument.
UPDATE: Rereading my post hours later, it seems far sharper in tone than I intended. I had meant merely to put these questions that occurred to me about the concept of constitutive commitments but, strung together, they seem more harsh than inquisitive. Rather than edit the original text, however, I thought that I, like the first Congress, would add this rule of construction as an express amendment to the original text: The enumeration in this post of certain questions shall not be construed to disparage the post to which I was responding.
BLACKMARKETS IN EVERYTHING:
Tyler links to an article (warning: as Tyler says, it's PG-13) about Japan's recent ban on the sale of soiled panties by underaged girls. Apparently some Japanese men - enough to create a thriving market - like to acquire the used panties for erotic purposes.
I let the story about a guy auctioning off his virginity pass without comment because I didn't see a real policy lesson there. But in this case, there is a very serious point: banning a practice you find offensive rarely, if ever, means that the practice will disappear. On the contrary, the outcome is typically a more dangerous manifestation of the same practice. How did the law of unintended consequences play out in this case? Read on:
"Underage sellers aren't punished under the ordinance, so the girls are pretty composed about the whole thing," says Gomez Yamada, a writer specializing on teen girl topics. "The ordinance may have shut down the burusera stores, but it has only sparked a thriving trade in schoolgirls using mobile phone sites to conduct direct sales to customers without needing a middleman. They'll arrange to meet in some dark spot like a karaoke box or beside a building, then remove and hand over their panties on the spot in exchange for payment. What's more, they're charging 5,000 to 10,000 yen a pair, about five times what they would have got from a burusera shop. Some girls are even happy the ordinance has come into effect because it's done away with the burusera shops that were eating into their profits."
In other words, the market has gone underground. Girls used to sell their (under)wares to dealers with established businesses, whose primary interest was in making money. They never even had to meet the adult male buyers. But now, under the wise new policy, young girls are arranging clandestine meetings in dark venues with men who are turned on by girls' underpants. Does anyone else see a disaster in the making?
THE VIRTUE OF INACTIVITY:
As Prof. Sunstein's illuminating posts have reminded us, Franklin Delano Roosevelt was a man of purposeful action and sweeping vision. But is that necessarily a good thing? The lionization of presidents with ambitious policies and ideas biases our historical perspective, as Gene Healy reminds us in this excellent book review. Interesting presidents give us better biographies, but boring presidents may give us better governance.
When I Say Everyone Can't, I Mean It!
One time back in elementary school, I heard a teacher talking about the logistics of an upcoming field trip, and she said something like this:
(1) Everyone can't fit on the bus.
I was confused. Did she seriously mean to say that not a single one of us could fit on the bus? How was that possible? Oh, wait—she must mean that not everyone could fit on the bus. But even when I'd figured out what she'd really meant, mentally attaching the intended meaning to the actual utterance was like trying to push two magnets together the wrong way.
This happened whenever I heard a sentence with a universal subject (e.g., everyone) and a negated main verb (e.g., can't). The resistance was so strong that for years, I thought the adage "All that glitters isn't gold" meant that by golly, if it glittered, it wasn't gold! Of course, I always thought it would make more real-world sense to say that not everything that glittered was gold, but hey, that wasn't how the saying went, and who was I to try to reinterpret it to suit my own taste?
In formal semantic terms, I was taking the negation to have scope only over the rest of the verb phrase, as illustrated in (2) with the Everyone can't fit example. I balked at allowing the negation to have scope over the whole sentence, as illustrated in (3):
(2) For every person x, x cannot fit on the bus.
(I.e., No one can fit on the bus.)
(3) It is not the case that everyone can fit on the bus.
(I.e., Not everyone can fit on the bus.)
Through the years, I (in company with many other people with strong opinions about English grammar) remained convinced that anyone who said "Everyone can't" and meant "Not everyone can" was making a mistake, plain and simple, despite the accumulating evidence that for many people, both scopings shown in (2) and (3) were OK.
As a linguist, though, I can't simply dismiss the scoping in (3) as a mistake. If I want to accurately describe how speakers are using a language, I have to respect the fact that Everyone can't is used by many speakers to mean "Not everyone can", and try to uncover the grammar rules that people are following that let them do this. This is not to say I have to like the construction; in writing-instructor mode, I can and do encourage writers to avoid it for the sake of clarity. But in linguist mode, to say that Everyone can't is wrong is just plain irresponsible.
Having recognized the two scopings of (1) as a fact of English, the task is now to write a grammar such that both scopings are generated. In fact, it turns out that it is easy to do this. First consider sentences like (4), similar to (1) except that the everyone is now an object instead of the subject.
(4) I can't talk to everyone.
(I.e., it is not the case that I can talk to everyone.)
This sentence has two scopings for the negation, shown in (5) and (6), both of which are completely OK for me, and for every other English speaker as far as I know. And once you specify definitions for your negation and quantifiers such as every so that you can generate a sentence like (4) with these two scopings, the two scopings for sentences like (1) are generated automatically.
(5) For every person x, I cannot talk to x.
(I.e., I can't talk to anyone.)
every (x, ~(I_can_talk_to(x)))
(6) It is not the case that I can talk to everyone.
~(every (x, I_can_talk_to(x)))
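The two scopings can also be checked mechanically. Here is a minimal sketch; the domain of people and the can_talk_to set are invented purely for illustration:

```python
# Evaluate the two scopings of negation from (5) and (6) over a toy domain.
# "I can talk to x" is modeled as membership in a hypothetical set.

people = {"Alice", "Bob", "Carol"}
can_talk_to = {"Alice", "Bob"}  # I can talk to Alice and Bob, but not Carol

# (5) Narrow scope: every(x, ~(I_can_talk_to(x)))
#     "For every person x, I cannot talk to x."
narrow = all(not (x in can_talk_to) for x in people)

# (6) Wide scope: ~(every(x, I_can_talk_to(x)))
#     "It is not the case that I can talk to everyone."
wide = not all(x in can_talk_to for x in people)

print(narrow, wide)  # False True: I can talk to someone, just not to everyone
```

With this domain the two scopings come apart, which is exactly what makes the ambiguity worth worrying about.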
Semanticists have known this for years, and the usual thinking about why some people take issue with saying "Everyone can't" when they mean "Not everyone can" is that it's an issue of avoiding ambiguity: Not everyone can is more specific than Everyone can't, so why not just say that if it's what you mean, and reserve Everyone can't for situations when you mean "No one can"? (Or for that matter, avoid Everyone can't entirely, and say "No one can," if that's what you mean.) I'm pretty sure that this is the position that Larry Horn takes in his authoritative A Natural History of Negation, though I'd have to look it up again just to make sure.
This is the position I have defended in recent years in discussions with my parents and my brother Glen. However, Glen recently raised a telling point. In school we learned the usual prescriptive rules about not splitting infinitives, not starting sentences with because, and the other favorites, and in that way learned (or refused to learn) that we shouldn't do these things that we'd been doing for years. But when I was getting confused by Everyone can't fit on the bus, nobody was telling me that the teacher was making a mistake. My rejection of the "Not everyone can" meaning (and his, too) came from the gut, not from a usage manual. So wasn't it possible that in our dialect of English, it really and truly is ungrammatical for Everyone can't to mean "Not everyone can"?
I've been working on this problem for a week or two, now, and I can say that it is much easier to arrange things so that scopings (2), (3), (5), and (6) are generated than it is so that (2), (5), and (6) but not (3) are generated. It can be done, but at a minimum, it will require that there be two definitions for can't, one of which will give you the narrow-scope negations seen in (2) and (5), and the other of which will give you the wide-scope negation seen in (6), and not allow the one in (3). Probably we'll also have to say that subject-everyone has one syntactic category, while object-everyone has another one. Both these measures are ill-motivated (read: hacky) proposals, good for solving the current problem, but without any independent evidence in their favor.
What would be some independent evidence? If we're proposing that the single word can't (or other negation) is ambiguous between the two relevant meanings, then we could expect that there would be other languages out there in which the two meanings are hooked up to two different words. Likewise, if we're claiming that everyone is ambiguous between a subject version and an object version, perhaps there are other languages out there where there are actually two words for the two meanings, instead of one ambiguous one—not just the same word with different case markings, but actual different words. Of course, if such evidence is never found, it doesn't mean that our analysis is wrong, just that there is not much reason to put credence in it. But if evidence like this did turn up, wouldn't that be cool?
Several readers have noted that "Everyone can/can't fit on the bus" is ambiguous between a distributive reading (each person is/isn't individually able to fit) and a cumulative one (the group of people can/can't fit), which clouds the issue of the scope of the negation. Here is a better-chosen example: Everyone didn't go. I still can get only the "Nobody went" reading (though I can recognize when other speakers intend the "Some didn't go" reading).
Some readers have also observed that they get only the "not everybody" reading of (4). I admit, this reading is the much-preferred one for me, though I seem to recall situations in which the other reading was appropriate and grammatical. But even if the "not everybody" (i.e. wide-scope negation) reading is the only one available here, it's still strangely different from the obligatory narrow-scope negation that I have to have for (1).
It's standard to distinguish between constitutional requirements and mere policies. An appropriation for Head Start is a policy, which can be changed however Congress wishes; by contrast, the principle of free speech overrides whatever Congress seeks to do. But there's something important, rarely noticed, and in between -- much firmer than mere policies, but falling short of constitutional requirements. These are constitutive commitments. (We're still talking, or at least not not talking, about FDR's Second Bill of Rights.)
Constitutive commitments have a special place in the sense that they're widely accepted and can't be eliminated without a fundamental change in national understandings. These rights are "constitutive" in the sense that they help to create, or to constitute, a society's basic values. They are also commitments, in the sense that they have a degree of stability over time. A violation would amount to a kind of breach - a violation of a trust.
Current examples include the right to some kind of social security program; the right not to be fired by a private employer because of your skin color or your sex; the right to protection through some kind of antitrust law. As with constitutional provisions, we disagree about what, specifically, these rights entail; but there isn't much national disagreement about the rights themselves. (At least not at the moment.)
We could learn a lot about a nation's history if we explored what falls in the category of constitutional rights, constitutive commitments, and mere policies -- and even more if we identified migrations over time. Maybe some of the commitments just mentioned will turn into mere policies. Sometimes policies are rapidly converted into constitutive commitments (consider the 1964 Civil Rights Act). Sometimes constitutive commitments end up getting constitutional status (the right to sexual privacy is, to some extent, an example, with the line of cases from Griswold v. Connecticut to Lawrence v. Texas).
Back to FDR's Second Bill of Rights: He wasn't proposing a formal constitutional change; he didn't want to alter a word of the founding document. He was proposing to identify a set of constitutive commitments. One possible advantage of that strategy is that it avoids a role for federal judges; another possible advantage is that it allows a lot of democratic debate, over time, about what the constitutive commitments specifically entail.
Johnson and Reagan also tried to redefine the nation's constitutive commitments, but FDR was much more ambitious. He didn't quite succeed in turning the Second Bill of Rights into constitutive commitments; but if you go over the list, you'll see that he didn't exactly fail.
Sunday Song Lyric Update:
I must apologize to the song lyric fans that nothing appeared this past Sunday. I have been traveling and I had thought that I had managed to successfully publish a song lyric for a later date. Alas, I was mistaken, and the song lyric (by Nina Simone) disappeared into the ether. I'm still on the road, but expect two song lyrics to appear this coming Sunday.
Wednesday, June 23, 2004
There've been a number of posts about acronyms recently on some of the blogs I read. One was from me, one from Glen, and one from the guy at Semantic Compositions. This last one reminded me of a letter to the editor I wrote some 15 years ago. The post is about recursive acronyms, the cited example being GNU, which expands to "GNU's not Unix." One of the letters in a recursive acronym (theoretically any one of them, but always the first one in the examples I've seen) stands for the acronym itself, leading to an infinite loop when one tries to expand out the acronym. As I read the post, I thought back to my freshman year at the University of Texas at Austin... ah, yes... I remember it as if it were a segment on Letterman...
I was reading in the campus newspaper about a newly formed student organization that called itself QUEERS. That's QUEERS, not Queers. It was an acronym. I was curious what the acronym could stand for, since there aren't that many words beginning with Q, and probably even fewer that would be relevant. What could it be? Quest? Quintessential? I read on. QUEERS, as it turned out, stood for "Queers United Envisioning an Egalitarian Restructuring of Society."
At first I was just disappointed. That was it? The elusive first word of QUEERS was just queers? That wasn't very clever at all. Then I began to be alarmed. You see, I had just read Douglas Hofstadter's Goedel, Escher, Bach, which had introduced me to the concept of a recursive acronym in the name of a genie in one chapter-opening vignette. The genie's name was GOD, which stood for "GOD Over Djinn." When asked to grant a meta-wish for an infinite number of ordinary wishes to be granted, GOD had to initiate an infinite recursion to all the GODs above him to get the required permission for such a wish. This QUEERS organization was flirting dangerously with infinite recursion! The only thing that saved them was the wise decision to have Q stand for Queers, and not for QUEERS.
I wrote a letter to that effect, and was pleasantly surprised to see it printed a week or so later. Returning to the present, though, I wonder how many other almost-recursive acronyms like QUEERS are out there. I found one on this page, though it was listed as a true recursive acronym: Cygnus, standing for "Cygnus, Your GNU Support." This acronym might escape from infinite recursion since Cygnus has a meaning other than the company with this name--specifically, a constellation. However, since Cygnus the company probably doesn't mean for the C to stand for a constellation, I guess this might be a truly recursive acronym after all. (I will give Cygnus credit for one thing: They are not guilty of acronym-stacking! More on that later.) So at this point, QUEERS stands alone in my list of almost-recursive acronyms. Additions to the list are welcomed.
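The expansion trap that separates GNU from QUEERS can be sketched in a few lines of Python. This is my own illustration, not anything from the post: the dictionary entries and the depth cutoff are assumptions for the demo, and GNU's apostrophe-s is dropped so the recursive word matches its key.

```python
def expand(name, defs, depth=0, max_depth=4):
    """Expand an acronym, recursing on any word that is itself a
    defined acronym. A truly recursive acronym (one whose expansion
    contains itself) never bottoms out, so expansion is cut off at
    max_depth and the loop is marked with an ellipsis."""
    if name not in defs:
        return name                      # an ordinary word: done
    if depth >= max_depth:
        return name + "..."              # recursion cut short
    return " ".join(expand(word, defs, depth + 1, max_depth)
                    for word in defs[name])

defs = {
    # Truly recursive: the first word is the acronym itself.
    "GNU": ["GNU", "Not", "Unix"],
    # Almost-recursive: Q stands for the plain word "Queers",
    # not for QUEERS, so expansion stops after one step.
    "QUEERS": ["Queers", "United", "Envisioning", "an",
               "Egalitarian", "Restructuring", "of", "Society"],
}
```

Here expand("QUEERS", defs) terminates after a single substitution, while expand("GNU", defs) keeps substituting itself until the cutoff fires -- the wise capitalization decision in action.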
Regarding Eugene's question as to why some consider it impolite to call Jews "Jews" instead of "Jewish people," I can contribute a little history. By the 19th century, the word "Jew" was thought by enlightened folks to have derogatory connotations. The leadership of the Reform movement led an effort to abandon the word "Jew" in favor of "Hebrews" or "Israelites," I assume because they thought those words had positive Biblical vibes. Indeed, the confederation of American Reform synagogues is still known as the Union of American Hebrew Congregations. If I'm not mistaken, the leading American Jewish periodical before the wave of (decidedly non-Reform) Eastern European migration in the late 19th century was "The American Israelite." And if you read 19th century publications, friends of the Jews would often refer to them as "Israelites," "Hebrews," "Members of the Mosaic Faith," and other euphemisms that avoid the nasty-sounding word "Jew."
Neither Hebrew nor Israelite ever caught on, but discomfort with the word "Jew" remains. And indeed, anti-Semitic discourse seems to always use the word "Jew," not "Jewish people," as in "dirty Jew!"; or "the Jews control (the media, Hollywood, the Bush Administration's foreign policy)"; or "Jews are so clannish." Indeed, I'm told that before I arrived at GMU Law School, one professor--who left before I started at GMU--angrily referred to one of my colleagues as "you little Jew." He disingenuously defended himself from charges of anti-Semitism by noting that my colleague is both diminutive and Jewish.
Update: Several readers have informed me that UAHC recently changed its name to the Union for Reform Judaism, thus finally formally acknowledging the death of the original dream of uniting all American congregations under one umbrella.
MINIMUM WAGE HERESY:
There are plenty of good arguments against the minimum wage, but I'm not persuaded by reductio arguments like this one:
Kerry proposes that the legislated minimum wage reach $7.00 per hour in 2007. Why wait until then? If government can increase workers' earnings by declaring in a statute that no worker shall be paid less than $7.00 per hour, why delay this move to greater prosperity?
Indeed, why an hourly minimum wage of only $7.00? Why not $17.00 per hour? Or $70 per hour?
The argument works well enough if you've already got a competitive model of labor markets in your head. But clearly enough, the proponents of minimum wage increases are not working with that model. The minimum-wage advocates who have thought much about it (of course, many haven't) usually have in mind some kind of monopsony model--that is, they assume a market in which employers have some degree of monopoly buying power. Under monopsony, wages can in theory be increased within a certain range with no reduction (and maybe even an increase) in employment. But if the wage rises above a critical point, then disemployment kicks in. Thus, a believer in the monopsony model can consistently favor small increases in the minimum wage while still opposing large ones.
Assuming the truth of the monopsony model for argument's sake, the question is whether you trust the government's ability (and incentive) to "fine tune" the minimum wage so that it always falls within the no-disemployment range. I don't, in part because the size of that range differs across markets and regions, and in part because the range is probably small in any case (as competition increases, the size of the no-disemployment range shrinks). And how likely is it that John Kerry's nice round $7 figure is based on anything other than a raw political calculation?
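To make the monopsony reasoning concrete, here is a minimal numeric sketch in Python. This is my own illustration, not anything from the post: the linear demand and supply curves and every parameter value are assumptions chosen purely for the demo.

```python
def employment(wage_floor, D0=30.0, D1=1.0, S0=2.0, S1=1.0):
    """Employment in a monopsony labor market under a minimum wage.

    Labor demand (marginal revenue product): w = D0 - D1*L
    Labor supply:                            w = S0 + S1*L
    An unconstrained monopsonist hires where MRP equals the marginal
    cost of labor (S0 + 2*S1*L) and pays the supply wage there.
    """
    L_mono = (D0 - S0) / (D1 + 2 * S1)      # monopsony employment
    w_mono = S0 + S1 * L_mono               # monopsony wage
    if wage_floor <= w_mono:
        return L_mono                       # floor does not bind
    # A binding floor flattens the firm's marginal labor cost: it
    # hires the lesser of what it demands and what workers supply
    # at the floor wage.
    L_demand = (D0 - wage_floor) / D1
    L_supply = (wage_floor - S0) / S1
    return max(0.0, min(L_demand, L_supply))
```

With these made-up numbers, the monopsony wage is about 11.3 and the competitive wage is 16: a modest floor of 14 raises employment from about 9.3 to 12, a floor of 16 maximizes it at 14, and a floor of 20 pushes it back down to 10. That is the "no-disemployment range" in miniature, and the point about fine-tuning is that its location and width depend entirely on parameters the government doesn't observe.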
AS TYLER WOULD SAY, MARKETS IN EVERYTHING:
My sister Ellen alerted me to the case of Dave Vardy, who is trying to auction off what many guys would happily give away for free: his virginity. The starting bid is listed at over 6000 British pounds, though I can't tell whether that's his reserve price or someone has actually bid that much. I suppose I could try to squeeze in a lesson about auction theory here, but I think I'll let this one pass without further comment.
What's with those Jewish people?
Why do some people think that it's more polite to say "Jewish people" than "Jews"? I've heard some people say that "Jews" is somehow considered rude, and "Jewish people" is better, but I just don't see why.
Does anyone know the story here? People don't generally say "black people," "Catholic people," or "female people." Why should they call us "Jewish people" rather than just "Jews"? I don't quite get it.
(I'm not saying that "Jewish people" is wrong -- if you want to say that, it's fine with me, though it will sound affected to me and people who think like me, at least until we're persuaded that "Jews" is somehow bad.)
Lochner Article Reprints:
I will soon be sending out reprints of my 2003 trilogy of articles about Lochner v. New York, its origins, and consequences: (1) Lochner's Legacy's Legacy, from the Texas Law Review; (2) Lochner Era Revisionism, Revised: Lochner and the Origins of Fundamental Rights Constitutionalism, from the Georgetown Law Journal; and (3) Lochner's Feminist Legacy, from the Michigan Law Review (see also: Lochner v. New York: A Barrier to the Regulatory State in Michael Dorf, ed., Constitutional Law Stories (Foundation Press 2004)).
If you are someone with a scholarly interest in Lochner Era jurisprudence, and are not on my reprint list, shoot me an email and I'll send you these articles (which you can also read in their almost-final versions at SSRN.com).
Ok, it's not quite a haiku. But as sentences in Supreme Court opinions go, it's not all that far from that: "Property, a creation of law, does not arise from value, although exchangeable -- a matter of fact." That's from Holmes' 1918 opinion in INS v. AP. (By the way, we're still talking about FDR and his Second Bill of Rights. Also by the way, thanks to Eugene for having me, and also to Randy Barnett for his kind words; how well I remember our fun and great discussions when we lived, more or less, next door to one another here at Chicago.)
What Holmes is saying here is that even though property is exchangeable, it doesn't arise from value; it's a creation of law. And that's simply a matter of fact. With these sixteen words, Holmes captured much of the legal realist critique of laissez-faire -- and a key part of legal thinking between 1890 and 1930. A system of free markets isn't law-free; it depends on law. Property rights, as we enjoy and live them, are a creation of law; they don't predate law.
From his first national campaign, Roosevelt made the same point, though less elegantly than Holmes. In 1932, he emphasized "that the exercise of . . . property rights might so interfere with the rights of the individual that the government, without whose assistance the property rights could not exist, must intervene, not to destroy individualism but to protect it." It's a mouthful, but here is the heart of his proposal for a Second Bill of Rights, twelve years before he made the formal proposal. In that same speech Roosevelt called for a "redefinition of rights," including an "economic declaration of rights," which would recognize that "every man has a right to live."
Roosevelt insisted that no one is really opposed to "government intervention." Those who complain about "government" depend on it every day of every year. Their "property rights could not exist" without its assistance (which costs a lot of money). And he believed that further "intervention," designed to protect decent opportunities (recall the right to education and the right to be free from monopoly) and minimal security, could be necessary to protect not equality but "individualism."
What Roosevelt did was to unsettle the distinction between "negative rights" and "positive rights." He insisted that the right to private property and freedom of contract were, in practical terms, created by government. This isn't at all a criticism of property rights or freedom of contract; Roosevelt strongly believed in both of these. (He despised socialism.) But he thought that any judgment about rights should be based on a sense of what would make human lives go well. In his words, "The thing that matters in any industrial system is what it does actually to human beings . . ." (This from our wheelchair-bound president, Reagan's stylistic role model, who liked to end meetings by saying, "I'm sorry, I have to run.")
Now it might be possible to reject some or all of the Second Bill of Rights on various grounds. (I'll be getting to that.) But it isn't sensible to reject the Second Bill on the ground that rights, to qualify as such, call for government's abstinence rather than government's presence. Holmes' haiku helps to explain why.
Higgs Joins Liberty & Power:
The blog Liberty & Power keeps stocking up on libertarian intellectual talent. Now it has added to its impressive and provocative roster Robert Higgs, the author of Crisis and Leviathan and the forthcoming book, AGAINST LEVIATHAN: Government Power and a Free Society. No scholar has thought more seriously about the historical relationship between liberty and power than Bob.
THE RULES OF ABSTRACTION:
In the course of my research, much of which relates to selection of legal and ethical decision rules, I've often pondered the meaning of the word "rule" itself. What does it actually mean to have a rule, or to be guided by one? The more I think about it, the more I think that a rule means a prescription for action that relies on an intermediate degree of abstraction. When rules become either too abstract or too situation-specific, they begin to lose their rule-like character.
Take the rules of etiquette. One rule is to say "Thank you" when someone gives you a gift or performs a service for you; another is to say "Please" when you wish someone to do something for you; and so on. Now, imagine if we replaced these rules with a single, highly abstract directive to just "Be polite." That directive would not provide much useful guidance. Lacking more information, the decision-maker would have to decide for each and every interaction what would be a "polite" thing to do.
On the other hand, what if we had a different prescription for every possible interaction? One directive for when someone passes the butter, another for when someone passes the salt, another for when someone holds open a door, another when someone holds the elevator, etc. The decision-maker's problem would be very similar to that which he encountered under the "Be polite" rule. Any time he encountered a novel situation - and arguably, every situation is novel in at least some infinitesimal degree - he would have to decide the correct action, without much assurance that it's correct. (Would it be okay for him to use the passing-the-butter response the first time someone passed him I-Can't-Believe-It's-Not-Butter?)
The actual rules of etiquette have an intermediate degree of abstraction, neither so broad as to include all situations, nor so narrow as to differ for each and every specific situation. They identify abstractly-defined types or kinds of situations.
The same analysis applies to legal rules. A rule of tort liability that said "Do the right thing" or "Be careful" would not be terribly helpful, unless the decision-maker had some idea of what other people think is right in more narrowly defined types of situation. But if the categories were too narrow, depending too much on the particular characteristics of the particular situation, the decision-maker's dilemma would be the same. A more useful rule of tort liability tells the decision-maker how to act in situations with an intermediate degree of abstraction; e.g., "Always yield to cars already on the freeway when merging."
I surmise that my definition of rule-ness could be applied to the rules of virtually every area of human interaction: the rules of language, the rules of ethics, the rules of games, etc.
One implication of my position is that many so-called rules of law, such as the ubiquitous "balancing rules" and "reasonable man" tests, are not very rule-like at all. They are standards, which have a higher degree of abstraction than rules. Standards can be useful in choosing rules, and in dealing with novel situations that arise in the "cracks" between rules. But they can also create considerable uncertainty if allowed to substitute for, rather than supplement, rules with an intermediate degree of abstraction. A similar problem afflicts case-by-case decision-making, which errs on the side of too little abstraction instead of too much.
I realize that I'm shifting between a descriptive notion of rules (what we mean by the term "rule") and a normative notion (what rules should be like). But I happen to think the entities we call rules usually do have this characteristic of intermediate abstraction, despite some exceptions. If we sometimes apply the word rule to entities that are highly abstract or highly specific, it is because abstraction and specificity are separated by a gradient, not a sharp line.
Too Old To Rock and Roll, Too Young To Die:
The Rockin' Book Tour for Restoring the Lost Constitution: The Presumption of Liberty has finally drawn to a close for the summer. I want to thank all the Federalist Society Chapters at the schools listed below for inviting me and for their gracious hospitality. Meeting the local officers at so many schools was inspiring for me, as was the engagement and enthusiasm of the audiences. Though I am temporarily quite weary, having flown over 60,000 miles and 70 flight segments in 5 months, I will be forever grateful. Even now I am looking forward, after a nice rest, to visiting other schools that have already contacted me about speaking during the next school year.
In addition to the student chapters at the following schools, I thank the Lawyers Division Federalist Society chapters in Seattle, Portland, Phoenix, Los Angeles and Santa Monica, the Cascade Institute in Portland, the Discussion Club in St. Louis, and the Goldwater Institute in Phoenix for also sponsoring talks. And of course I thank the Cato Institute for helping promote the book and sponsoring the book forum with Walter Dellinger and Judge David Sentelle that you can view on the web here in Real Video or listen to here.
Georgetown Law Center
George Mason University School of Law
George Washington University School of Law
Catholic University School of Law
University of Virginia School of Law
Washington & Lee School of Law
William and Mary, Marshall-Wythe College of Law
Yale Law School
University of Pennsylvania School of Law
Temple University School of Law
Vanderbilt University School of Law
University of Houston School of Law
South Texas College of Law
St. Mary's University School of Law
University of Minnesota School of Law
William Mitchell College of Law
St. Thomas University School of Law
John Marshall Law School (Chicago)
New York University School of Law
Columbia University School of Law
Cornell University School of Law
Boston University School of Law
University of Nebraska School of Law
University of Michigan School of Law
Indiana University (Bloomington) School of Law
St Louis University School of Law
UC Hastings College of Law
Stanford Law School
University of Santa Clara School of Law
Chapman University School of Law
UCLA School of Law
Pepperdine University School of Law
USC Law Center
Loyola-Marymount University School of Law
University of San Diego School of Law
University of Oregon School of Law
University of Washington School of Law
Seattle University School of Law
Notre Dame Law School
Arizona State University College of Law
University of Georgia School of Law
Emory Law School
Georgia State School of Law
Florida State University College of Law
[My sincerest apologies for any inadvertent omissions.]
Tuesday, June 22, 2004
Eugene identified guest-blogger Cass Sunstein as "a constitutional law scholar at the University of Chicago Law School," but he's also Professor of Political Science. That means that, for as long as Cass is here with us, Chicago Political Science will tie UCLA Law and George Mason Law at two co-conspirators each. (This has been true before, whenever Dan Drezner guest-blogged.)
Welcome to the Conspiracy Cass:
The year before I started teaching I took a leave of absence from the Cook County States Attorney's Office to be a Research Fellow at the University of Chicago Law School. Cass Sunstein was in the office next store in his very first year of teaching and we spent quality time together that year. Now Cass is a "Visiting Fellow" of the Volokh Conspiracy. Welcome to the office next door, Cass!
Update: Language Log noticed and commented upon the mistake in the above--and it was inadvertent, not a joke. (More of these bizarre swaps seem to occur as you get older, or at least as I do.)
Glen Whitman and Neal Whitman:
I'm also pleased to report that Glen Whitman and Neal Whitman, who usually blog at Agoraphilia, will be guest-blogging here from Wednesday to Friday.
Glen is a good friend of mine, a solid libertarian (much more so than I am), and an economics professor at Cal State Northridge. Neal is a Ph.D. in linguistics, and, as it happens, Glen's brother (what a coincidence!). It's a great pleasure to have them both here.
To reach them, e-mail Glen at glen.whitman at csun.edu, or Neal at nealwhitman at yahoo.com; their addresses are also available at Agoraphilia.
I'm delighted to say that Cass Sunstein will be guest-blogging here for the next few days, posting occasionally about his new book, The Second Bill of Rights.
Cass, a constitutional law scholar at the University of Chicago Law School, is one of the most influential law professors in the country (and is in fact the full-time U.S. law professor whose work has been cited the most times in scholarly legal periodicals). He has written over a dozen books and a vast number of articles on a wide range of topics; see here for just a small subset, or here for the whole daunting list. It's a real privilege to have him participating here, even if only for a short while.
Those who know Cass's work know that he and I disagree on many things. I suspect that many of our readers, especially conservatives and libertarians, will likewise disagree with the views he expresses in his new book and in the posts that are based on the book. Nonetheless, I think this is a great opportunity for the blog's readers to see what one of the most thoughtful and accomplished scholars in another camp is thinking and writing about.
Cass's e-mail address is csunstei at uchicago.edu; if you have any comments on his posts, please e-mail them to him. Please note, though, that he may not have the time to reply to all the comments (which is to say he adheres to the same e-mail policy that we do).
The Greatest Generation
In recent weeks, there has been a ton of attention to the so-called Greatest Generation and to the spirit of those who fought World War II--and to some apparent parallels between that generation and our own. During the opening of the new memorial in Washington, President Bush drew particular attention to that generation and especially to Franklin Delano Roosevelt, saying, "Across the years, we still know his voice."
Do we really?
I've spent much of the last three years with FDR and his amazing voice, hearing it on tapes (in the car, boring my teenage daughter) and on the VCR, and poring over countless speeches and papers from the FDR library. (And yes, a book, called The Second Bill of Rights, will be published soon from these efforts.)
Here's my conclusion: In all the celebration, we've lost sight of what FDR and his generation were all about. We've decorated them in a gauzy, complacent nostalgia that has betrayed their pragmatic, forward-looking spirit. If we want to understand our own history, we should be listening, a bit, to what FDR actually had to say.
On January 11, 1944, the United States was involved in its longest conflict since the Civil War. The war effort was going well. Victory was no longer in serious doubt. The real question was the nature of the peace. At noon, Roosevelt sent the text of his most ambitious State of the Union address to Congress. Ill with a cold, Roosevelt did not make the customary trip to Capitol Hill to appear in person. Instead he spoke to the nation via radio - the first and only time a State of the Union address was also a Fireside Chat.
Roosevelt's speech wasn't elegant. It was messy, sprawling, unruly, a bit of a pastiche, upbeat, and not at all literary. It was the opposite of Lincoln's tight, poetic, elegiac Gettysburg Address. But because of what it said, it has a strong claim to being the greatest speech of the twentieth century.
Roosevelt began by emphasizing that the "supreme objective for the future" -- the objective for all nations -- was captured "in one word: Security." Roosevelt argued that the term "means not only physical security which provides safety from attacks by aggressors," but includes as well "economic security, social security, moral security." Roosevelt insisted that "essential to peace is a decent standard of living for all individual men and women and children in all nations. Freedom from fear is eternally linked with freedom from want."
Roosevelt looked back, and not entirely approvingly, to the framing of the Constitution. At its inception, the nation had grown "under the protection of certain inalienable political rights—among them the right of free speech, free press, free worship, trial by jury, freedom from unreasonable searches and seizures."
But over time, these rights had proved inadequate. Unlike the Constitution's framers, "we have come to a clear realization of the fact that true individual freedom cannot exist without economic security and independence." As Roosevelt saw it, "necessitous men are not free men," not least because those who are hungry and jobless "are the stuff out of which dictatorships are made." Recalling the New Deal, he cut to the chase: The nation had "accepted, so to speak, a second Bill of Rights under which a new basis of security and prosperity can be established for all—regardless of station, race, or creed."
Then he listed the relevant rights:
The right to a useful and remunerative job in the industries or shops or farms or mines of the Nation;
The right to earn enough to provide adequate food and clothing and recreation;
The right of every farmer to raise and sell his products at a return which will give him and his family a decent living;
The right of every businessman, large and small, to trade in an atmosphere of freedom from unfair competition and domination by monopolies at home or abroad;
The right of every family to a decent home;
The right to adequate medical care and the opportunity to achieve and enjoy good health;
The right to adequate protection from the economic fears of old age, sickness, accident, and unemployment;
The right to a good education.
Having catalogued these eight rights, Roosevelt said that "we must be prepared to move forward, in the implementation of these rights." Roosevelt asked "the Congress to explore the means for implementing this economic bill of rights—for it is definitely the responsibility of the Congress to do so."
Let's put to one side, for the moment, whether Roosevelt was right to call for a Second Bill of Rights; let's even acknowledge that it's unclear what he meant by it. (I'll have a bit more to say about that.) For now, the central point is that we've missed a huge piece of our own history. The leader of the Greatest Generation had a distinctive project, running directly from the New Deal to the war on Fascism -- a project that he believed to be radically incomplete. We don't honor him, and we don't honor those who elected him, if we forget what that project was all about.
Radley Balko over at Liberty & Power loves HBO's Deadwood as much as I do. Like him, I thought the season finale was superb. Here is a portion of Radley's post:
The show bustles with themes of rugged individualism, and explores the troubles and travails of a small community emerging from Hobbesian anarchy into a loose-knit system of law and property. I love the scene from a couple of weeks ago where the town's newly-appointed fire inspector Charlie Utter — who was appointed only to give the town some credence in the eyes of Congress — gets into a squabble with saloon-owner Nutall over the proximity of his stove pipe to the wall.
The dialogue is wonderful. In addition to the colorful profanity (how many different variations on "cocksucker" are there, anyway?), I love the NYPD Blue approach to character interaction, conversations sprayed with rough but real-world transitions — lots of "anyways," and "like I was sayins."
When Doc Cochran is examining one of Swarengen's prostitutes, he inquires about her menstrual cycle with, "So where are ya' in yer' moons?"
I think the season's best line came from Swarengen himself, though, when talking to Doc Cochran: "Announcin' your plans is a good way to hear God laugh."
So good show, HBO. Again.
I second the comparison between Deadwood and NYPD. It is the dialogue of that show that enables me to enjoy it as it captures how people in the criminal justice system really talk. No one knows for sure how settlers in a far western mining camp would really talk, but Deadwood has the ring of authenticity.
I rate this as simply the best show on TV, including the Sopranos. Deadwood combines fascinating character studies with intelligent writing and an absorbing plot. It is a "realistic" western that is not cynical, just raw. There are genuine (reluctant) heroes and real villains, and the most interesting character in between who is based on an historical figure from Deadwood history named Al Swarengen. The Indians (who Swarengen calls "dirt worshippers") are demonized by most, while the Chinese are dehumanized (but curiously not by Swarengen) by some. Sex is commodified--90% of women living in Deadwood in 1876 were prostitutes--but one subplot builds with real sexual tension born of denial. You can feel the town growing as the season progresses, and apparently the set was being built and expanded as the series was being filmed. Most episodes pick up almost exactly where the previous one ends. [Warning: There is an unprecedented amount of cursing even for HBO and, while this does not bother me, it may turn off others.]
I disagree with Radley's choice of favorite lines from the finale. I liked his choice just fine, but mine was "Get the fuck out of here, doc. I'm working on my deployments and flanking maneuvers." Then again, maybe it is when the Doctor exclaims that he is discussing "a human being in his last extremity, not a bag of shit," to which Swarengen replies, "A human being in his last extremity is a bag of shit." This is a show with 5-10 "best lines" per episode.
When Deadwood reruns on HBO, as it inevitably will, start watching it from the beginning. Then enjoy the new season next year. You will thank me for the recommendation. This is series television at its best--novelistic like the best of the BBC and now HBO, the network that is responsible for a new golden age of series television.
Speaking of which, HBO is just starting to rerun the second season of The Wire
to build up to the season premiere of the third season. This is the best, most true to life, cop show in the history of television. The dialogue is the equal of Deadwood, and often better than the Sopranos. While I found the first season better than the second, this was probably because I had never seen a cop show so reminiscent of my experience as a prosecutor (with one glaring exception in the penultimate episode of Season 2). The show depicts dirty cops, corrupt and political bureaucracy, flawed heroes, and the criminal's perspective. No character is given a pass, but you can understand where everyone is coming from. Another amazing series that is well worth the commitment of time.
Update: A reader writes:
Everything you say about Deadwood is right.
Al Swarengen is probably the most interesting character on TV, ever. At first, he comes across as just evil. But it turns out that he just has a warped morality. He's not entirely self-serving. He has a sense of responsibility for the people who rely on him to generate revenue, like any good capitalist. His methods fit what were probably the requirements of the time and place for a hard-nosed businessman. He's flawed. In fact, if you believe the characterization of Ken Lay, it would probably be a disservice to Swarengen to call him the Ken Lay of his time. Put Swarengen in charge of Enron and you have one rich son of a bitch who cuts corners, but not at the expense of his long-term best interests. Or at least, that's how I read him.
It's been a long time since I reached the end of the season on a show and felt a real pang of regret that it was over. I'll probably watch the re-runs, which I never do for any series.
Update: Gene Healy adds his comments here.
Poor Warren Harding:
The Federalist Society/Wall Street Journal book ranking the presidents via a survey of liberal and conservative scholars is doing quite well on Amazon.com. Too bad, then, that this survey, while far more balanced than most, ranks Warren Harding at the bottom. Forever tarred by the Teapot Dome scandal (though it's not clear he was complicit in it), Harding's great accomplishments have been overlooked: (1) pardoning Eugene V. Debs and ending the harassment of leftists and pacifists from the Wilson years; (2) after the horribly racist Wilson years, undoing Wilson's segregation of federal offices, supporting anti-lynching legislation, and delivering a bold speech (in Birmingham!) in favor of equality for African Americans; (3) reducing government spending and taxes back to reasonable, pre-Great War levels; (4) repealing various wartime economic controls--the latter two policies were responsible for the boom of the 1920s; (5) appointing excellent Supreme Court Justices, notably Taft and Sutherland--the Justices appointed by Harding ushered in one of the most creative periods in Supreme Court jurisprudence (a time in which, among other things, the Court developed the precedents that underlie modern civil liberties jurisprudence--see, especially, Meyer v. Nebraska--and issued the most progressive opinion on women's rights that the Court would issue until the 1970s--Adkins v. Children's Hospital, holding that the 19th Amendment granted women full equal citizenship so that they may not be denied rights given to men, namely liberty of contract).
Other than Teapot Dome, the only major flaw I can think of in Harding's record is that he signed the first major law limiting immigration from Europe in 1921. But no one's perfect, and this mistake, combined with a scandal that pales next to, say, Iran-Contra, hardly should assign Harding to the bottom of the presidential heap. And it's especially hard to fathom Harding's low ranking when the execrable and incompetent Woodrow Wilson consistently ranks as a "near-great" president.
Moore and the New York Times:
Dan Gifford passes this along; it was apparently transcribed by a listener, purportedly from the June 18, 2004 Late Show with David Letterman. Don't know if it's true, and I won't assume that it is. (UPDATE, 6/28/04: It is indeed apparently not true. Rats!) But it's at least a good joke:
David Letterman: How do we know what's in your film [Fahrenheit 9/11] is true?
Michael Moore: Because I got most of my information from The New York Times.
Audience: Wild laughter.
Letterman: Strains to repress laughing
Moore: What's so funny?
(I should mention that Moore was indeed on the June 18 show, but the excerpt available on the highlights site is only of Moore's memories of his Oscar night).
Confirmation of Judge Calabresi's comments?
As I mentioned in my posts yesterday (here and here), my criticisms of Judge Calabresi's remarks were premised on the remarks' having been accurately quoted and paraphrased. All I have to go on is the New York Sun article, and there's always the risk that the article is in error.
It seems that one source is implicitly confirming the accuracy of at least Calabresi's Hitler/Mussolini quote: DownWithBush, a blog that harshly -- though in my view unpersuasively -- criticizes the criticisms of Calabresi. (I think my original post adequately rebuts those arguments.) But nothing in the blog post, written by someone who was in the audience during Calabresi's speech, suggests that Calabresi was quoted incorrectly. So I'm somewhat more confident than I was yesterday that Calabresi's comments were as described -- and I remain quite confident that if they were as described, they are unsound and improper.
However, I stress again: If anyone who was in the audience has a different recollection of the comments (or the same recollection) I'd very much appreciate hearing about it. A transcript or an audio recording of the panel would be better still.
Archie Leach on Cary Grant:
In the passage sent to Eugene by a reader, Mark Kingwell misquotes Cary Grant as saying: "Every man wants to look like
Cary Grant. I want to look like Cary Grant." Grant is actually reputed to have said, "Everyone wants to be
Cary Grant. Even I want to be Cary Grant." I think the actual quote properly downplays Grant's looks--the principal focus of Eugene's interesting thread on why men do not pay more attention to their appearance--in favor of the persona he created on and off the screen. This persona includes not only impeccable tailoring and grooming, a handsome visage, and a uniquely suave if completely unidentifiable accent, but also warmth, charm, wit, consideration, and, in many films, a willingness to laugh at himself and his carefully cultivated image and to put himself in embarrassing situations.
This last was facilitated by his early vaudeville training that made him wonderfully acrobatic.
In Charade (from which the previous picture is taken), Audrey Hepburn's character Regina Lampert asks Grant's character, Peter Joshua: "Do you know what's wrong with you?" "No, what?" replies Grant. "Nothing!" says Hepburn. Little wonder Archie Leach never stopped wanting to be Cary Grant.
Cuyahoga River fire:
Today is the 35th anniversary of the infamous Cuyahoga River fire in Cleveland -- one of the seminal events in the history of environmental law. Oil and debris apparently accumulated under a railroad trestle and briefly caught fire. News of the "burning river" spread across the nation, helping to galvanize the emergent environmental movement.
After all, if a river could catch fire, environmental problems must have gotten really bad. Indeed, many argue the river fire sparked the eventual passage of the Clean Water Act.
The story of the Cuyahoga River fire is a canonical tale, but my friend Jonathan Adler -- a law professor at Cleveland's Case Western Reserve University -- argues in the Fordham Environmental Law Review that much of the story is a fable.
While many point to the fire as evidence of ever-worsening environmental conditions, Jonathan argues that water quality had already begun to improve before the fire. The Cuyahoga River was heavily polluted, to be sure, but it was starting to turn the corner.
From the 1880s to 1950s, fires on industrial rivers and harbors were rather common, and rarely elicited comment. Much pollution was accepted as the inevitable and unavoidable cost of industrialization. As the nation became richer, attitudes changed, and cleanup efforts began, even before the adoption of federal laws. River fires were rare by the 1960s, largely due to state and local cleanup efforts.
The 1969 fire attracted national attention more because of increased environmental consciousness than because environmental quality was steadily getting worse. It's a complex story, but quite interesting. I'm not an environmental expert, but Jonathan is, and I've found his work to be trustworthy and eminently readable.
One item of note from Jonathan's article. The fire received national attention when highlighted in Time, but the picture of a river aflame accompanying the article wasn't of the 1969 fire at all -- it was of a much worse 1952 fire. There are apparently no pictures of the 1969 fire itself because the fire was out before any photographers arrived. So the image many remember of the river on fire isn't of the famous fire at all.
Blacks and the New Deal:
Historian David Beito has been blogging about Blacks' reaction to the New Deal's National Industrial Recovery Act. Editorials in the nation's leading Black newspaper, the Chicago Defender, were generally hostile to the Act, which exacerbated unemployment among African Americans. When I did my own research on the Act, I looked at more "official" African American sources, such as the NAACP's journal The Crisis. Kudos to David for uncovering another enlightening source.
Media Misreporting on PA's Role in Terrorism:
Almost every reference to the Al Aksa Martyrs' Brigade in the mainstream media has been accompanied by caveats that while the group is an offshoot of Fatah, the main Palestinian party run by Arafat, the group acts autonomously, and neither Arafat nor the PA has any control over it. However, PA prime minister Qurei recently had this to say about Al Aksa, which has far and away been responsible for the most terrorist attacks on Israeli civilians: "We have clearly declared that the Aksa Martyrs Brigades are part of Fatah. We are committed to them and Fatah bears full responsibility for the group."
I acknowledge that it's possible that Qurei is claiming responsibility for the group despite a lack of Fatah control over it for political reasons. But let's at least acknowledge that the presumption is now with those who have been saying all along that Al Aksa is part and parcel of Fatah, and has been used by Arafat as a terrorist weapon against Israel, one for which he could claim plausible deniability, but for which he bears ultimate responsibility.
Interesting Ha'aretz Interview with Dov Zakheim,
who was recently Undersecretary of Defense. Among his assertions in the interview: (1) Israel is squandering an opportunity to withdraw from the territories while it has a very friendly president in office; (2) the situation in Iraq is far better than most people think; and (3) claims that a cabal of Jewish neoconservatives are running American foreign policy on behalf of Israel are pure anti-Semitism.
Monday, June 21, 2004
P.G. Wodehouse and Hiibel:
Professor Bainbridge has a puzzle about the two -- you wouldn't think they'd go together, but apparently they do. I read all the Jeeves books, and many other Wodehouse books, and liked them very much; but I confess I have no idea what the answer is.
A Google search for Gay Eugene, it turns out, yields the Volokh Conspiracy as the #1 result. I learned this from a friend of Sasha's (who goes by Inspired Turnip), who did the search because she was looking for "a gay business directory" "to find a man with a flair for interior decorating to help me choose window treatments for my home here in Eugene, Oregon." Hey, we'll take traffic however we can get it.
In response to the Men and Sexy thread, a reader passed along this amusing excerpt from Mark Kingwell's Catch
(I express no opinion on the merits of the quote; I just think it's well put):
Manly elegance is obviously a hard standard to reach, and en route we are likely to go astray. Cary Grant once said, "Every man wants to look like Cary Grant. I want to look like Cary Grant." The cinematic version of Grant, the Grant icon, at once combines the virtues of style and manliness, of suavity and physical courage. . . . There are women of my acquaintance, maybe of yours, who claim to find Grant too "femmy," but that bespeaks only a desire for rough trade that they should perhaps examine more closely. If Cary Grant isn't man enough for you, there's something wrong with your picture of manhood.
Kerryisms strikes again:
Slate's increasingly surreal Kerryisms column has this entry today:
Question: What kind of Democrat are you?
[This is the edited version, Slate's view of what Kerry presumably should have said:] Kerry: A thinking Democrat. You can call me an old-fashioned New Deal Democrat. I'm not going to break faith on Social Security. I'm not going to abandon people who are struggling to earn a decent wage.
—Time, Feb. 9, 2004
[This is what Kerry actually said:] A thinking Democrat. You can call me an old-fashioned New Deal Democrat on X or Y. I'm not going to break faith on Social Security. I'm not going to abandon people who are struggling to earn a decent wage. But call me a New Democrat when it comes to creating jobs and being entrepreneurial and understanding the bottom line of business.
The Slate version basically cuts out (as a supposed "caveat" or "curlicue") Kerry's last line -- and in so doing utterly destroys Kerry's point that he's "A thinking Democrat."
What Kerry means by "thinking Democrat" is a Democrat who isn't in lockstep with any particular ideology, but who thinks through each question on its own merits. On Social Security and the interests of working people, he sides with the old-fashioned New Deal Democrats. On the importance of a good climate for business, he sides with New Democrats. I set aside whether that's a good position, or whether it's an accurate summary of Kerry's position. But it's a perfectly sensible response to the question, both as a matter of policy and politics -- in fact, it's probably the politically savviest response.
Slate removes the New Democrat part (and its foreshadowing in "on X or Y," which is in context Kerry's way of indicating that he's an old-fashioned New Deal Democrat only on some things). Amazingly, it leaves in the "thinking Democrat" line -- but now there's nothing to make clear that by "thinking Democrat" Kerry means a Democrat who chooses the best of the various strands of Democratic thought. The Slate edit makes Kerry sound as if he's saying that the only "thinking Democrat" is the "old-fashioned New Deal Democrat," which is nearly the opposite of what Kerry is trying to say.
What is the author of this column thinking? He's not editing out the "caveats" and "curlicues" -- he's editing out the heart of Kerry's message. Surely he doesn't think that any message more complex than "I am a staunch New Dealer" or "I am 100% New Democrat" is inherently too "embellish[ed]." But then what is he trying to do?
The U.N. and Anti-Semitism:
This piece (thanks to InstaPundit for the pointer) strikes me as very persuasive. I don't know the facts well enough to vouch for the accuracy of her remarks, but they seem accurate based on what little I do know -- and if they are accurate, then they're a powerful indictment of the U.N.'s double standards on this.
Are happy people nastier and more judgmental?
The ever-insightful Randall Parker suggests that this may be the case. It is one of his "research projects" to think through what kind of children we will seek to genetically engineer, and what kind of world will then result.
Hiibel and the Fifth Amendment
(another in my series of posts today on the Court's Hiibel decision):
The most interesting debate in Hiibel has to do with the privilege against self-incrimination. Though the text of the Fifth Amendment only bars "compell[ing] [any person] in any criminal case to be a witness against himself," the Court has applied this to pretrial questioning as well as to questioning of witnesses at trial -- not unreasonable, since otherwise the provision could be easily circumvented by forcing people to make statements before trial, and then introducing those statements into evidence against them. And the Court has generally held that the Amendment covers not just forced confessions ("I did it, and I'm glad"), but also forced statements that may indirectly suggest the person's guilt, or even that may lead to the discovery of evidence against him. Moreover, as Justice Stevens points out in his dissent, requiring a person to give his name will often lead to the discovery of evidence against him, especially when the police are asking precisely because they reasonably suspect criminal activity.
Justice Kennedy's majority opinion reasons otherwise: "Answering a request to disclose a name is likely to be so insignificant in the scheme of things as to be incriminating only in unusual circumstances." In this very case, the opinion points out that Hiibel himself had no "articulated real and appreciable fear that his name would be used to incriminate him, or that it `would furnish a link in the chain of evidence needed to prosecute' him."
Yet it seems to me that if the police ask suspects their names, that's probably because they think that the name will often help uncover evidence of crime. Maybe, as the majority itself suggests, the name "may inform an officer that a suspect is wanted for another offense, or has a record of violence or mental disorder." Or maybe it will make it easier for the police to more effectively question others about the suspect's behavior, even after they let the suspect go. And when the police have stopped a person because they have reasonable suspicion to think he has committed a crime, and the person wouldn't give his name voluntarily, it seems fairly likely -- not certain (because maybe the person is just very concerned about his privacy, as Hiibel might have been), but fairly likely -- that the name is indeed evidence that can be used against the suspect.
What's more, how would the majority's decision play out in practice? The majority acknowledges that "a case may arise where there is a substantial allegation that furnishing identity at the time of a stop would have given the police a link in the chain of evidence needed to convict the individual of a separate offense," and in such a case, the Fifth Amendment privilege would apply.
But how would such a case be litigated? Say that the police stop me, ask my name, and I take the Fifth. Do they have the right to arrest me, on the theory that I've violated the compelled identification statute? (Of course, the statute must be read as not punishing the exercise of my constitutional rights.) Or do they lack probable cause to believe this, since for all they know my identity may indeed implicate me in a crime?
Of course, in this situation taking the Fifth would itself be incriminating in one sense: Since only those who think they might be incriminated have the constitutional right to take the Fifth, the police would be practically correct to be extra suspicious of people who assert their Fifth Amendment rights. And yet I assume that the police ought not be allowed to arrest someone simply because the person has asserted his constitutional rights, even if a practical person would find it highly suspicious. (When a person asserts his Fourth Amendment rights and refuses to consent to a police search, most lower courts don't let the police build probable cause based partly on the refusal -- the police would have to have probable cause independently of the refusal. Likewise, a person's taking the Fifth may not be argued to the jury as evidence of his guilt.)
To be sure, some Supreme Court cases have upheld requirements that people say various things that include their names -- file tax returns, for instance, identify themselves to other drivers when they're involved in an accident, or identify themselves when they're being booked for a crime at the police station. But those decisions have repeatedly rested on the theory that the identification is needed for non-law-enforcement purposes: collecting taxes, facilitating civil litigation, or keeping track of whom the government is keeping in jail. The purpose of Terry stops, on the other hand, is all about law enforcement.
Now maybe the majority's result is still right. Maybe, as some have suggested, the Fifth Amendment should simply bar the use of compelled statements themselves as evidence, but should let the government use as evidence the material that the government gathers indirectly based on compelled statements (the so-called "fruits" of the statement). This might be more consistent with the text of the Amendment. Or maybe there should be a separate rule for compelled self-identification, which one might say is less likely to offend the principles behind the privilege against self-incrimination, whatever those principles may be (there's a hot debate about that).
But it seems to me that on its own terms, the majority's argument isn't terribly persuasive; Justice Stevens seems to have the better of it.
Hiibel and the Fourth Amendment
(another in my series of posts today on the Court's Hiibel decision).
The Court held that forcing someone to reveal his name doesn't make the stop an unreasonable seizure. The proper test, the Court said (correctly, given the Court's precedents), is that "The reasonableness of a seizure under the Fourth Amendment is determined 'by balancing its intrusion on the individual's Fourth Amendment interests against its promotion of legitimate government interests.'" The government interests, the Court said, are strong:
Obtaining a suspect's name in the course of a Terry stop [i.e., a brief stop based on reasonable suspicion] serves important government interests. Knowledge of identity may inform an officer that a suspect is wanted for another offense, or has a record of violence or mental disorder. On the other hand, knowing identity may help clear a suspect and allow the police to concentrate their efforts elsewhere. Identity may prove particularly important in cases such as this, where the police are investigating what appears to be a domestic assault. Officers called to investigate domestic disputes need to know whom they are dealing with in order to assess the situation, the threat to their own safety, and possible danger to the potential victim.
Curiously, the Court didn't go into detail about the "Fourth Amendment interests" side of the balance; but presumably it concluded that requiring a person to give his name isn't a very serious intrusion into privacy.
Justice Stevens didn't reach the Fourth Amendment question. Justices Breyer, Souter, and Ginsburg rested almost entirely on past cases:
Justice White, in a separate concurring opinion [in Terry v. Ohio], set forth further conditions [beyond the majority's requirement that a brief stop be based on reasonable suspicion]. Justice White wrote: "Of course, the person stopped is not obliged to answer, answers may not be compelled, and refusal to answer furnishes no basis for an arrest, although it may alert the officer to the need for continued observation."
About 10 years later, the Court, in Brown v. Texas, 443 U.S. 47 (1979), held that police lacked "any reasonable suspicion" to detain the particular petitioner and require him to identify himself. . . . The Court referred to Justice White's Terry concurrence . . . [, a]nd it said that it "need not decide" [whether an identification requirement was constitutional].
Then, five years later, the Court wrote that an "officer may ask the [Terry] detainee a moderate number of questions to determine his identity and to try to obtain information confirming or dispelling the officer's suspicions. But the detainee is not obliged to respond." Berkemer v. McCarty, 468 U.S. 420, 439 (1984) (emphasis added). See also Kolender v. Lawson, 461 U.S. 352, 365 (1983) (Brennan, J., concurring) (Terry suspect "must be free to . . . decline to answer the questions put to him"); Illinois v. Wardlow, 528 U.S. 119, 125 (2000) (stating that allowing officers to stop and question a fleeing person "is quite consistent with the individual's right to go about his business or to stay put and remain silent in the face of police questioning").
This lengthy history — of concurring opinions, of references, and of clear explicit statements — means that the Court's statement in Berkemer, while technically dicta, is the kind of strong dicta that the legal community typically takes as a statement of the law. And that law has remained undisturbed for more than 20 years.
There is no good reason now to reject this generation-old statement of the law. There are sound reasons rooted in Fifth Amendment considerations for adhering to this Fourth Amendment legal condition circumscribing police authority to stop an individual against his will. [Citing the Stevens dissent; more on the Fifth Amendment issue later. -EV] Administrative considerations also militate against change. Can a State, in addition to requiring a stopped individual to answer "What's your name?" also require an answer to "What's your license number?" or "Where do you live?" Can a police officer, who must know how to make a Terry stop, keep track of the constitutional answers? After all, answers to any of these questions may, or may not, incriminate, depending upon the circumstances. [Again, more on the Fifth Amendment question later. -EV] . . .
The majority presents no evidence that the rule enunciated by Justice White and then by the Berkemer Court, which for nearly a generation has set forth a settled Terry stop condition, has significantly interfered with law enforcement. Nor has the majority presented any other convincing justification for change. I would not begin to erode a clear rule with special exceptions.
Interestingly, even the dissent doesn't squarely argue against the majority's reasoning about the importance of the government interests, or the majority's implied judgment that the intrusion on Fourth Amendment interests isn't very severe. It relies chiefly on concurrences, on one tangential majority dictum (from Wardlow) and one square statement in a majority opinion (from Berkemer) that is nonetheless still dictum — which is to say, a statement that wasn't necessary to the decision of the past case, and thus isn't binding precedent.
It seems to me that the majority's Fourth Amendment holding is probably sound, though reasonable minds can certainly differ on this. But in any event, even if there are very strong arguments against it, the dissent doesn't really provide them.
Judge Calabresi's violation of Code of Judicial Conduct?
The Curmudgeonly Clerk points this out, and it seems to me that he's right. Here's what the New York Sun reports about Judge Calabresi's comments to the American Constitution Society (the liberal counterpart to the Federalist Society):
The 71-year-old judge declared that members of the public should, without regard to their political views, expel Mr. Bush from office in order to cleanse the democratic system.
"That's got nothing to do with the politics of it. It's got to do with the structural reassertion of democracy," Judge Calabresi said.
Here's what Code of Judicial Conduct Canon 7 says:
A. A judge should not:
(1) act as a leader or hold any office in a political organization;
(2) make speeches for a political organization or candidate or publicly endorse or oppose a candidate for public office;
(3) solicit funds for or pay an assessment or make a contribution to a political organization or candidate, attend political gatherings, or purchase tickets for political party dinners, or other functions.
B. A judge should resign the judicial office when the judge becomes a candidate either in a primary or in a general election for any office.
C. A judge should not engage in any other political activity; provided, however, this should not prevent a judge from engaging in the activities described in Canon 4.
Seems to me that if Judge Calabresi's remarks were accurately paraphrased — a big if, I realize — they were a judge's "publicly . . . oppos[ing] a candidate for public office." Calling on "members of the public" to "expel" an officeholder who's up for reelection certainly sounds like publicly opposing a candidate for public office. (Canon 4 does let a judge say various things related to "the law, the legal system, and the administration of justice," but as I read Canon 7, the prohibitions in 7(A) and 7(B) are absolute, and only the prohibition in 7(C) is subject to the Canon 4 exception.)
It's possible (though far from certain) that, given the Supreme Court's decision in Republican Party v. White (2002), Judge Calabresi can claim his speech is protected by the First Amendment, notwithstanding Canon 7. Nonetheless, even if Canon 7 can't be legally binding for that reason, it is (as I understand it) a pretty authoritative ethical judgment about how judges should behave, and thus an important ethical constraint. It seems that the comments at the American Constitution Society meeting transgressed that constraint.
Note, though, that this of course assumes that Judge Calabresi was correctly paraphrased. Subtle differences in his precise words might well make for substantial differences in the legal analysis. If anyone has a tape or transcript of Judge Calabresi's remarks, or can even pass along his personal recollection, I'd love to hear about it.
(This, of course, is an entirely separate criticism from the one I mentioned here.)
New UCLA Law dean:
I'm pleased to say that Michael H. Schill, a law professor at NYU (where he specializes in housing law and policy), has been named dean here at UCLA Law School. I don't know him well, but all I've heard about him suggests that he'll do an excellent job.
What Hiibel decides and what it doesn't decide:
The question in today's Hiibel v. Sixth Judicial District Court decision from the Supreme Court is: Once the police stop a person based on reasonable suspicion that he may be involved in criminal activity, may the police demand that he identify himself (backed by the threat of legal punishment should he refuse, or should he lie)?
The Court's answer: "yes," at least so long as (A) the demand is "reasonably related to the circumstances justifying the stop" (which will almost always be so), and (B) there is no "substantial allegation that furnishing identity at the time of a stop would have given the police a link in the chain of evidence needed to convict the individual of a separate offense" (it's not clear how often this will be so). If condition (A) isn't satisfied, then the person's Fourth Amendment right to be free from unreasonable seizures would be violated. If condition (B) isn't satisfied, then the person's Fifth Amendment right to be free from compulsion to incriminate himself might possibly be violated.
Here are the questions not involved here: (1) May the police stop someone without any suspicion, but just based on an articulable hunch, or a random stop policy, to demand identification? (2) May the police require that the person present some written identification? (3) May the police require identification when the person is driving, or when the person is entering a public building, or in similar contexts? (4) May the police simply ask a person, without the threat of legal sanction, who he is? The answer to #4 is "yes"; the answer to #3 is generally yes, though it depends on the context; the answers to #1 and #2 are still unknown. (UPDATE: Let me clarify briefly my point as to #1 -- as reader Duncan Frissell points out, Brown v. Texas (1979) struck down such random stops when done without any "practice embodying neutral criteria," and when done as part of normal policing. See also Delaware v. Prouse (1979). What is unknown is whether they might be permissible, under the Court's "special needs" cases, when there are some neutral criteria, or when the place is a special location, such as a bus terminal or the environs of some location where security is especially important. There's enough uncertainty in the "special needs" caselaw [which includes cases such as the drunk driving checkpoints] that it's hard to be sure what the result would be there.)
"Like convicted felon and former Attorney General John Mitchell,
Judge Guido Calabresi was appointed by a President who was eventually involved in a scandal that nearly led to the President's removal from office by the Senate. I am not suggesting for a moment that this person is John Mitchell, or is a felon who helped obstruct justice. I want to be clear on that, but it is a situation which is extremely unusual."
Sounds like such a hypothetical statement, if said seriously, would be logically senseless, and an unjustified attempt to smear Judge Calabresi? You bet. And yet I find it hard to distinguish this from what Judge Calabresi seemingly said at a conference of the liberal American Constitution Society (thanks to How Appealing for the pointer):
A prominent federal judge has told a conference of liberal lawyers that President Bush's rise to power was similar to the accession of dictators such as Mussolini and Hitler.
"In a way that occurred before but is rare in the United States . . . somebody came to power as a result of the illegitimate acts of a legitimate institution that had the right to put somebody in power. That is what the Supreme Court did in Bush versus Gore. It put somebody in power," said Guido Calabresi, a judge on the 2nd Circuit Court of Appeals, which sits in Manhattan.
"The reason I emphasize that is because that is exactly what happened when Mussolini was put in by the king of Italy," Judge Calabresi continued, as the allusion drew audible gasps from some in the luncheon crowd Saturday at the annual convention of the American Constitution Society.
"The king of Italy had the right to put Mussolini in, though he had not won an election, and make him prime minister. That is what happened when Hindenburg put Hitler in. I am not suggesting for a moment that Bush is Hitler. I want to be clear on that, but it is a situation which is extremely unusual," the judge said.
Judge Calabresi, a former dean of Yale Law School, said Mr. Bush has asserted the full prerogatives of his office, despite his lack of a compelling electoral mandate from the public.
"When somebody has come in that way, they sometimes have tried not to exercise much power. In this case, like Mussolini, he has exercised extraordinary power. He has exercised power, claimed power for himself; that has not occurred since Franklin Roosevelt who, after all, was elected big and who did some of the same things with respect to assertions of power in times of crisis that this president is doing," he said. . . .
It seems to me rather odd to compare someone to Hitler or Mussolini based on how they were put into power. The loathing attached to the names Hitler and Mussolini, after all, has nothing to do with the means by which they were installed into office. (It might have a little to do in both cases with the thuggery practiced by their followers that helped them get installed into office — but on that score they are hardly comparable with George W. Bush.)
To analogize someone to Hitler and Mussolini on this score is rather like making the hypothetical statement that I quoted at the start of the post. The premises of the analogy may be literally true, but the analogy is so irrelevant that it seems more effective as a smear than as a logical argument.
But beyond this, I'm not even sure that Judge Calabresi's analogy (as opposed to the one in the hypothetical statement) would be literally true. The German President was supposed to select a Chancellor, and the Italian King was supposed to select a Prime Minister, without regard to whether they had won election. Hitler had lost the Presidential election to Hindenburg, but Hitler's Nazi Party had won a plurality of the seats in the Reichstag, so there was nothing procedurally or legally "illegitimate" about Hindenburg's selecting Hitler. In parliamentary systems, the head of state is often called on to make decisions like that. (Churchill, for instance, was appointed Prime Minister without even any intervening popular election; nothing illegitimate about that, either.)
Mussolini's appointment was more closely tied to the military threat that he posed to the government. But even so, I don't think the King's decision to appoint Mussolini prime minister was procedurally or legally illegitimate from the King's perspective (it was illegitimate for Mussolini to act as he did, but Bush surely didn't use paramilitary groups to win Bush v. Gore).
Now perhaps I'm mistaken on this; if so, I'd be glad if people corrected me on it. (And I should stress that Judge Calabresi, who was born in Mussolini-era Italy, doubtless knows much more than I do about Italian history.) But if I'm right, then the supposed problem with Bush — that he was put into place by an illegitimate Supreme Court decision — doesn't even apply to Hitler and Mussolini.
Hitler's and Mussolini's faults did not include Bush's supposed fault. Bush's faults do not include Hitler's and Mussolini's faults. The supposed analogy that Judge Calabresi is making thus seems to have no basis at all. (Note, incidentally, that Judge Calabresi explicitly stressed that he wasn't criticizing President Bush's actions in office: "I'm a judge and so I'm not allowed to talk politics. So I'm not going to talk about some of the issues that were mentioned or what some have said is the extraordinary record of incompetence of this administration.")
So what possible legitimate role does the analogy to Hitler and Mussolini have here?
UPDATE: See also Instapundit's take and Andrew Sullivan's.
Supreme Court lets police demand that person identify himself,
at least if they have reasonable suspicion to stop him. That's what I infer from SCOTUSBlog's report that the Supreme Court affirmed the Nevada Supreme Court's decision in the Hiibel case. Will read it as soon as it's available, and blog more about it.
UPDATE: Looks like the usual 5 conservative / 4 liberal split; Justice Kennedy, joined by Chief Justice Rehnquist and Justices O'Connor, Scalia, and Thomas, is writing for the majority.
FURTHER UPDATE: Justice Stevens dissents solely on Fifth Amendment privilege against self-incrimination grounds, with an interesting argument that I hope to blog about shortly.
Justice Breyer, joined by Justices Souter and Ginsburg, dissents on Fourth Amendment grounds, though he appeals chiefly to a single precedent (dictum, which is to say a statement that wasn't necessary to the past opinion's holding, albeit considered dictum, in a past decision) rather than to Fourth Amendment text, policy, or broader precedent-based rules. Breyer's opinion also argues that the majority's Fifth Amendment holding may prove unadministrable, but doesn't make any broader defense of the right not to identify oneself.
That's rather odd, since it seems to me that such a broader defense would have made the opinion considerably more rhetorically effective, to the public, to state courts interpreting state search and seizure provisions, and to future Justices who might be considering whether to adhere to the majority's position, or even to extend it. But perhaps the dissenting Justices thought their time was better spent elsewhere. (It's also possible that they didn't have firm views on the Fourth Amendment first principles involved here, and really were relying solely on the precedent, but given that the precedent is so weak — the dissent acknowledges that it was dictum, and even a judge who thinks precedent should nearly always be adhered to wouldn't feel actually bound by such dictum — I assume that the dissenters really did differ with the majority on the principle as well as the precedent.)
Sunday, June 20, 2004
The Situation in the Muslim World:
This essay, attributed to Israeli scientist Haim Harari, is well worth reading. For example, Harari writes:
Is the solution a democratic Arab world? If by democracy we mean free elections but also free press, free speech, a functioning judicial system, civil liberties, equality to women, free international travel, exposure to international media and ideas, laws against racial incitement and against defamation, and avoidance of lawless behavior regarding hospitals, places of worship and children, then yes, democracy is the solution. If democracy is just free elections, it is likely that the most fanatic regime will be elected, the one whose incitement and fabrications are the most inflammatory. We have seen it already in Algeria and, to a certain extent, in Turkey.
Noisy Hotel Rooms:
The more I travel, the more I am amazed at how little hotel design seems to take into account the obvious goal of helping guests get a good night's sleep by keeping noise levels down. Forget soundproof windows; how about just designing room doors so they don't slam shut so loudly as to wake guests in the next room?