As the government becomes more interested in determining the impact of federal science funding, we have to figure out how to measure the productivity of research. Should we measure productivity in the number of publications, citations, and patents, or should we use an indicator like Hirsch's h index?
If we measure the number of publications, people can easily split one project into smaller pieces to get more papers out of it. If we focus on the number of citations, we face problems with researchers citing themselves and padding papers with unnecessarily long reference lists.
Patents aren't a reliable measure either, as researchers from some fields are less likely to apply for patents than others. For example, you wouldn't expect theoretical physicists to apply for as many patents as chemists working on drug design and synthesis. It's also tough to take impact factors into account, as that's highly field dependent, too.
Then there's Hirsch's h index, a commonly used gauge of research output based on the distribution of citations over a scientist's rank-ordered publications: a researcher has index h if h of his or her papers have each been cited at least h times. Unfortunately, the h index suffers from the same problems of field dependency and citation gaming.
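The h index is simple enough to compute once you have a list of citation counts. A minimal sketch (the citation counts here are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    # Walk down the rank-ordered list; the index holds as long as
    # the paper at rank r has at least r citations.
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical researcher with five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```

Note how the calculation depends entirely on raw citation counts, which is exactly why it inherits the field-dependency problem: typical citation rates differ wildly between disciplines.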
We seem to have a clear picture of what will NOT work for measuring the productivity of scientists, so what will? There doesn't appear to be a single solution that works across all scientific disciplines, and none of the measurable factors paints a complete picture of a researcher's success. Field dependency is the problem that crops up most often. Perhaps we need to account for it with some sort of normalization, although it will be difficult to draw clear boundaries between fields when large cross-disciplinary collaborations occur so frequently. With all this uncertainty, it might not be a bad idea for the federal science agencies to fund research, or conduct their own investigations, to determine how best to measure scientific productivity.
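One common form such normalization takes is dividing a paper's citations by the average citation count in its field, so that a theoretical physics paper and a drug-design paper can be compared on the same scale. A rough sketch, with entirely hypothetical field averages:

```python
# Hypothetical average citations per paper in each field (illustrative only).
FIELD_AVERAGES = {
    "theoretical physics": 8.0,
    "medicinal chemistry": 24.0,
}

def normalized_impact(citations, field):
    """Citations relative to the field's average; 1.0 means 'typical for the field'."""
    return citations / FIELD_AVERAGES[field]

# Two papers with very different raw counts can have the same relative impact:
print(normalized_impact(16, "theoretical physics"))   # 2.0
print(normalized_impact(48, "medicinal chemistry"))   # 2.0
```

Even this simple scheme runs into the collaboration problem noted above: a paper co-authored by a physicist and a chemist has no obvious single field, so the choice of denominator becomes a judgment call.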