I just returned from serving on an NIH review panel. Suffice it to say that experts love to disagree -- but for very good, discipline-based reasons. This creates challenges for the sponsor, who must interpret the scores and allocate scarce resources to the small fraction of proposals that survive the gauntlet of critiques. But -- there's always a "but" -- how does the process actually function to produce these well-documented decisions?
The following is clearer to me as a result of my latest experience:
- Reviewers do not grant "benefit of the doubt." If the words are not on the page, the proposer is assumed to have overlooked, disregarded, or otherwise missed the obvious, which can be elevated to the status of a fatal flaw. Yet the discussion around the table is what I would call a "public tutorial." The intellectual diversity -- and benefit -- is immense.
- Review panels are over-specialized. Individual members know what they know, but don't like to admit what they don't know. How this is converted into a collective decision -- not a consensus -- is hardly precise, perhaps not even an "art." Program staff must translate and interpret: does disagreement signal that something special is proposed, more on the edge than in the mainstream thinking within various disciplines?
- The amount of brainpower, time, and energy invested in producing a proposal and conducting its review is an enormous overhead on our commitment to merit and excellence. Something must give in 21st century peer review without diluting quality and our competitive spirit. I think the phrase of the day is "educate to innovate."
So bless Ph.D. researchers and a process in which they play a multiplicity of roles. They are locked into a process -- no, a system -- not of their making, yet one they can do little to change and on which their career fortunes largely depend. And the answer to the question in my title? As many as we can get!