The value of the “impact factor” of a journal (which we will term the ‘reference journal’) in any particular year is usually defined as the aggregate number of citations made in that year (in any article in any journal) to articles published in the reference journal within the preceding two years, divided by the total number of articles published in the reference journal within the preceding two years. By way of a concrete example, consider the highly cited publication known as The Halibut. Four hundred articles were published in The Halibut in 2002 (just kidding; we weren’t around then), and 335 articles were published in 2001. Furthermore, in 2003, the articles that The Halibut had published in 2002 were cited 11650 times, and the articles it had published in 2001 were cited 13280 times. The impact factor for The Halibut in 2003 can then be calculated as (11650 + 13280) / (400 + 335) = 24930 / 735 ≈ 33.9.
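For readers who prefer to check the arithmetic in code, a minimal sketch of the same calculation is given below; the figures are simply those of the example above, and the function and variable names are our own inventions.

```python
def impact_factor(citations_received, articles_published, year):
    """Impact factor for `year`: citations made in `year` to articles the
    journal published in the two preceding years, divided by the number of
    articles it published in those two years."""
    window = (year - 1, year - 2)
    cites = sum(citations_received[y] for y in window)
    articles = sum(articles_published[y] for y in window)
    return cites / articles

# Figures for The Halibut from the example above.
citations_received = {2002: 11650, 2001: 13280}  # citations made in 2003, keyed by year of the cited article
articles_published = {2002: 400, 2001: 335}

print(round(impact_factor(citations_received, articles_published, 2003), 1))  # 33.9
```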
Now, to give you an idea of just how good an impact factor that is, you should know that only a handful of journals, the New England Journal of Medicine among them, have impact factors anywhere near that figure. Most journals have impact factors of less than 1, meaning that most articles published in most journals are not cited at all within two years of publication.
Academic journals tacitly compete with one another for the highest “impact factor”, while authors, especially those living by the motto “publish or perish”, seek to have their articles published in journals with high impact factors.
For reasons that are obscure, ill-considered, and possibly irrational, the funding provided to a university might become partially dependent (as it has in Australia) on the impact factors of the journals in which the university’s staff have published their articles. But the impact factor often doesn’t tell one what one wishes to know, which might be something like “How much influence did such and such an article, as opposed to the journal, have on the world as we know it?” Of course, it might be a bit much to expect a single numerical value to provide an answer to that question, but it would nonetheless be handy to have an indicator of some sort.
The contributors therefore propose that a Citation Weighted Impact Measure be developed as an indicator of the importance of an article and to help resolve questions such as the following. Is my article, published in an obscure journal (impact factor 0.1) but cited four times in the prestigious journal The Halibut (impact factor 33.9), less important than an article published in The Halibut itself but only ever cited in journals that no one reads or cites? How should two such articles be compared? Is a paper published by Isaac Newton in 1678, and cited only four times in the subsequent two years, to be considered unimportant even though people still cite it today?
Surely there must be a way of sensibly weighting variables like (a) the journal in which an article is published; (b) the total number of citations of that article to date; (c) the distribution of citations of the article in the years since publication; and (d) the journals in which the various citations have appeared, to arrive at an informative measure of the influence and importance that an article has had.
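Purely by way of illustration, the following sketch shows the kind of record on which such a measure might operate; the class name, field names, and the example journal title and citation counts are invented here and are not drawn from any real data.

```python
from dataclasses import dataclass, field

@dataclass
class ArticleRecord:
    """One article, carrying the attributes (a) to (d) listed above."""
    journal: str                                            # (a) journal in which the article appeared
    citations_total: int                                    # (b) total citations to date
    citations_by_year: dict = field(default_factory=dict)   # (c) distribution of citations over the years since publication
    citing_journals: list = field(default_factory=list)     # (d) journals in which the citations appeared

# The hypothetical obscure-journal article from the question above: four citations, all in The Halibut.
example = ArticleRecord(
    journal="Obscure Journal of Marginalia",
    citations_total=4,
    citations_by_year={2003: 1, 2004: 3},
    citing_journals=["The Halibut"] * 4,
)
```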
We think it unlikely that such a measure will be developed without reference to people’s views, opinions and beliefs about how important different (actual) articles and journals are. In other words, we think that the measure must be developed empirically and that a purely theoretical approach is doomed to fail. Multi-attribute Utility Theory (MAUT) [Ref. 1] might well provide an appropriate framework within which to develop the weights used in a Citation Weighted Impact Measure. Furthermore, the measure might make use of some of the results of graph theory, especially as the process of citation maps naturally onto the notion of a directed graph. Journals contain volumes and issues, which in turn contain articles, so one might consider each journal to be the root node of a tree, with the volumes, issues and articles constituting the deeper nodes of that tree. The citation of one article by another would be represented by a directed edge connecting one (article) vertex with another. Similarly, each author would be connected by an edge to the articles that he or she had written. While MAUT might lead to the development of appropriate weights, graph theory could provide the basis for a method of aggregating the values and weights associated with each article-vertex.
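To make the graph-theoretic idea concrete, here is a minimal sketch in which articles are vertices and citations are directed edges, together with a toy aggregation rule; the article identifiers, journal names, prestige weights, and the scoring rule itself are all placeholders for illustration, not the weights a MAUT elicitation would actually produce.

```python
# Hypothetical journal weights, standing in for whatever a MAUT elicitation would yield.
JOURNAL_PRESTIGE = {"The Halibut": 33.9, "Obscure Journal of Marginalia": 0.1}

# Directed citation graph: an edge u -> v means "article u cites article v".
cites = {
    "halibut_2003_a": ["obscure_2002_x"],
    "halibut_2003_b": ["obscure_2002_x"],
    "obscure_2004_y": ["halibut_2003_a"],
    "obscure_2002_x": [],
}
journal_of = {
    "halibut_2003_a": "The Halibut",
    "halibut_2003_b": "The Halibut",
    "obscure_2004_y": "Obscure Journal of Marginalia",
    "obscure_2002_x": "Obscure Journal of Marginalia",
}

def weighted_impact(article, cites, journal_of, prestige):
    """Toy aggregation: sum the prestige of the journals whose articles cite `article`."""
    citing_articles = [u for u, targets in cites.items() if article in targets]
    return sum(prestige[journal_of[u]] for u in citing_articles)

print(weighted_impact("obscure_2002_x", cites, journal_of, JOURNAL_PRESTIGE))  # 67.8
```

On this toy rule, two citations from a highly weighted journal count for far more than dozens from journals with negligible weight, which is exactly the kind of comparison the questions above are meant to expose; a real measure would also need to weight the publishing journal, the citation count, and the timing of citations, as listed in (a) to (d).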
References
- von Winterfeldt, D., & Edwards, W. (1986). Decision Analysis and Behavioral Research. Cambridge, UK: Cambridge University Press.
Contributors: Daniel D. Reidpath, Mark R. Diamond, Angela O’Brien-Malone