Following the debate on Elsevier and open access publication, I have noticed increasing rhetoric about journal impact factors. A journal's impact factor is the mean number of citations received in a given year by the articles it published in a number of preceding years. It has long been known that this measure is a rather imperfect indicator of the citation potential of an article published in that journal, because the distribution of citations received by articles in any given journal is highly dispersed and skewed: a few star articles often receive the majority of citations, while many more are cited little or not at all.
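To make the skew point concrete, here is a minimal sketch with entirely made-up citation counts (not data from any real journal): the impact-factor-style mean is pulled up by a couple of star papers, while the typical (median) article receives far fewer citations.

```python
from statistics import mean, median

# Hypothetical citation counts for ten articles from one journal:
# two "star" papers and many barely-cited ones.
citations = [120, 45, 6, 3, 2, 1, 1, 0, 0, 0]

print(mean(citations))    # the impact-factor-style average: 17.8
print(median(citations))  # the typical article: 1.5
```

The mean here is more than ten times the median, which is why the impact factor says little about what citation count any individual article can expect.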
There is a correlation between a journal's impact factor and the number of citations that individual articles in the journal receive, but it is low. Obviously, it is better to evaluate papers by the citations they receive themselves. The implicit assumption behind not doing so is that citations accrue very slowly in the social sciences, and especially in economics, so that citation counts are not useful for evaluating recent research; hence either impact factors or costly secondary peer review are used instead.
We are having an internal debate at the Crawford School about whether to use metrics, and which metrics to use, in allocating research funding within the school. This has made me wonder: if impact factors are only weak predictors of citations, are they more strongly correlated with something else? That something else could be the selectivity of journals, i.e. their rejection rate. It seems very likely that there is a strong correlation between rejection rate and impact factor.
The problem in testing this is that there seems to be only limited data on rejection rates. A quick search turned up some information. de Marchi and Rocchi found a significant partial correlation (p = 0.04) between rejection rate and impact factor for a sample of 72 journals from all disciplines that responded to their survey. For the ecology journals that Aarssen et al. looked at, the correlation between rejection rate and impact factor was 0.687, though there were some low impact journals with quite high rejection rates.
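For anyone wanting to check such a figure against their own data, the Pearson correlation in question is straightforward to compute. The numbers below are purely illustrative placeholders, not the Aarssen et al. or de Marchi and Rocchi data.

```python
import math

# Made-up illustrative figures: rejection rates and impact factors
# for six hypothetical journals.
rejection_rate = [0.40, 0.55, 0.60, 0.70, 0.80, 0.85]
impact_factor = [0.8, 1.2, 1.5, 2.0, 3.5, 4.0]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson(rejection_rate, impact_factor), 3))
```

With real journal data in place of the toy lists, the same calculation would reproduce correlations like the 0.687 reported for the ecology journals.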
The authors argue that this may be because these low-IF journals are flooded with low quality papers that they have to reject. A high rejection rate among a pool of low quality submissions is not the same thing as a high rejection rate among a pool of higher quality submissions. Still, IF does seem to be a better predictor of journal selectivity than of citations, and on the whole we might expect papers that pass more selective review processes to be of higher quality.