Monday, April 04, 2005

"Data not available until we finish our book."

The recent publication of a study by Rothman, Lichter and Nevitte, which, they argue, demonstrates not only that there is a lack of ideological diversity among American university faculty, but that this lack of diversity is due, in part, to discrimination, has caused some blogospheric hubbub. Predictably, conservatives are saying "d'uh!" as liberals cry foul. But what does the study really show? I decided to see for myself.

First, I read the paper, but it is fairly stingy on the details, so I thought I'd get ahold of the data myself. So, I emailed the first author of the paper, asking for the data. Anyone in science will know that it's pretty common for scientists to ask each other for the data from published papers, and in fact, it's generally accepted that the data behind published results should be made available. However, I was brushed off by Rothman with a one sentence response, "Data not available until we finish our book."

So, without the data, I'm forced to take the authors at their word (there's no information about the peer review process for the journal, The Forum, at the journal's website). And what the authors say about the size of the disparity between "liberal/left-leaning" professors and "conservative/right-leaning" professors is interesting. They report that in their sample of 1643 faculty from 183 American universities, 72% are left-leaning/liberal (as measured by self-report on a 10-point scale) and 15% are right-leaning/conservative (the corresponding percentages in the general population are 18% liberal and 37% conservative). Furthermore, 59% identified themselves as Democrats and 11% as Republicans. This left-right disparity is most profound in the humanities and social sciences, as one might expect, but it exists even in the natural sciences and, to a much lesser extent, in engineering and business departments. They have a nice table in which the percentages are broken down by department, if you're interested (it's on p. 6 of the pdf linked above).

But as I've said before, showing disparity is one thing, explaining it is another. If you believe that ideological diversity is important in academia and you want to do something about it, you have to understand why disparities exist. With that in mind, the authors used multiple regression (much as I proposed, though they used far too few predictor variables, and their dependent variable is highly problematic; see the link at the beginning of the paragraph for my suggestion) to measure the effect of several variables, including political orientation, on academic success (as measured by the quality of the school at which a professor is employed). What they found in this analysis is striking... in its worthlessness. They conducted three different regressions: one using "political ideology" (as measured by answers to several questions about political issues), one using party affiliation, and one using "left-right self designation" as the predictor of interest. Each regression also included variables such as gender, religion (Christian or Jewish), race, sexual orientation, and a composite of several measures of academic achievement, including the number of publications, time spent conducting research, membership on editorial boards, and attendance at international conferences. If discrimination is at play, then the three measures of political attitudes (ideology, party affiliation, and left-right self designation) should account for a significant amount of the variance in academic success even after the effects of the other variables (gender, religion, race, academic achievement, etc.) are accounted for.
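To make that setup concrete, here is a minimal sketch of that kind of regression in Python (using statsmodels). The file name and column names are hypothetical stand-ins, since the authors haven't released their data or exact codings, so treat it as an illustration of the approach rather than a reproduction of their analysis.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per faculty member. "school_quality" stands in for the authors'
    # dependent variable, "ideology" for the composite political-ideology score,
    # and "achievement" for their index of publications, research time,
    # editorial-board membership, and conference attendance.
    df = pd.read_csv("faculty_survey.csv")  # placeholder file name

    model = smf.ols(
        "school_quality ~ ideology + achievement + gender + religion"
        " + race + sexual_orientation",
        data=df,
    ).fit()

    print(model.summary())  # coefficients, p-values, R-squared

    # To compare effect sizes across predictors, z-score the numeric columns
    # first so the fitted coefficients can be read as standardized betas.

The question the regression answers is whether the ideology coefficient survives once achievement and the demographic controls are already in the model.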

After conducting the three regressions, the authors conclude that discrimination based on political leanings is at play. Why? Well, they got statistically significant coefficients (at the .01 or .0001 level) for both party affiliation and "political ideology" (the coefficient for left-right self designation, which is the only measure of political leanings to receive its own table in the paper, and which gets the most attention in the first half of the paper, is not significant!). But with an N this large, it's not surprising that coefficients come out significant. The more interesting question is: what is the effect size? There's a straightforward way to estimate that: the square of a standardized regression coefficient gives a rough measure of the proportion of variance the variable uniquely accounts for. How much of the variance do "political ideology" and party affiliation explain? Less than 1% (β = .086 and .073 in the "ideology" and affiliation regressions, respectively). Less than 1%! (By way of contrast, the achievement index accounts for around 15% of the variance.) So is there discrimination? Apparently, but not a whole hell of a lot. Interestingly, if we were to conclude from very small but statistically significant coefficients (given a large N and only a handful of predictors) that ideology-based discrimination exists, we would also have to admit that institution-wide gender-based discrimination exists. In other words, conservatives who want to use this study to argue that ideology-based discrimination exists will have to change their tune on the existence of gender-based discrimination, as gender accounted for almost as much of the variance in academic success as political ideology (less than 1%, β = .06).
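To spell out the arithmetic, squaring the reported betas gives the rough variance-explained figures I'm relying on above (with the caveat that reading a squared beta as "variance uniquely explained" is only an approximation when the predictors are correlated):

    # Squaring the reported standardized coefficients from the paper.
    for name, beta in [("ideology", 0.086), ("party affiliation", 0.073), ("gender", 0.06)]:
        print(f"{name}: beta^2 = {beta**2:.4f} (~{beta**2 * 100:.2f}% of variance)")
    # -> ideology ~0.7%, party affiliation ~0.5%, gender ~0.4%

All three are well under 1%, which is the whole point.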

P.S. I wonder if David Horowitz will say that the multiple regression portion of this study is pointless, or if he will use it as more evidence that discrimination exists. If he does the latter, then I have to wonder why he thought my suggestion of a nearly identical study (though I had planned to use more predictor and dependent variables) was a "ridiculous exercise."
