Leverage

There is a very interesting discussion going on at Biochem Belle’s blog concerning who in a lab needs to “understand” a method in order for that method to be effectively deployed. Some participants in the discussion are making the ridiculous claim that PIs need to have personal hands-on experience with a method in order for that method to become part of the expertise of the PI’s lab.

For example, Biochem Belle says:

Let’s say you’re a protein crystallographer–you’re not going to send someone off to another lab to learn how to do protein NMR and then be able to claim this as an expertise in your lab.

This is 100% totally wrong. You damn well can send a post-doc from your lab to a protein NMR lab to learn how to do protein NMR, and then come back to your lab and start doing protein NMR in your lab. And you as the PI need to learn from the post-doc what she is doing, how she does it, and how she interprets data. And you get feedback and gut checks from your colleagues who routinely do protein NMR. And you and your post-doc publish paper(s) containing protein NMR data generated by your post-doc. And then other trainees in your lab can learn from the post-doc and from you. Ultimately, without ever having done a single fucken protein NMR experiment in your life, you the PI have become an expert at protein NMR and have incorporated that expertise into the armamentarium of your lab.

I know that to a trainee who is chained to the bench and immersed in the technicalities of what she does, this sounds impossible. But this is because trainees lack the broad vision and perspective necessary to think boldly like this. How fucken boring do you think a lifetime career in science as a PI would be if you never incorporated new conceptual and technical approaches into your lab via the creative efforts of your trainees?

When I was a post-doc I did exactly this: with the collaborative help of another lab that already possessed the appropriate expertise, I developed a project in a substantive scientific area using a model organism, neither of which my post-doc mentor had ever had any experience with. Now both this scientific area and this model organism are at the heart of my post-doc mentor’s research program.

I also do this all the time in my own lab as PI, such as with my grad student who is learning a new model organism with help from a colleague’s lab. When new people join my lab, I will encourage some of them to also work on this other model organism, and it will become incorporated into my lab’s expertise. Another way to bring new expertise into your lab is to recruit post-docs who already possess particular expertise, and then leverage their talents to create a self-sustaining expertise in the culture of the lab. You can also send trainees to take intensive summer courses at places like Cold Spring Harbor Laboratory or MBL, and then they come back to your lab and develop the expertise there.

This all seems implausible and scary to trainees, but that is because of the very limited scope of their vision.

Enhancing Peer Review

Jeremy Berg, director of the National Institute of General Medical Sciences, has been posting some fascinating analyses of the recently modified NIH peer review process at his blog, including the relationship among overall impact scores, percentiles, and funding decisions, as well as the correlations between the various criterion scores (significance, innovation, investigator, environment, approach) and overall impact scores. This is all fascinating stuff, and Jeremy and NIGMS are to be lauded for their openness with these data. Also, I am a big fan of the new application format, scoring system, and critique format.

However, my cynical theory is that the purpose of Enhancing Peer Review was not to make peer review “better”, because it already did its job perfectly fine: identifying roughly the top quartile of applications in any given round of review. Rather, one purpose of the Enhancing Peer Review effort was to placate the extramural community, which was up in arms at peer review outcomes produced by a system being used to perform an intrinsically impossible task: identifying the top decile of applications. (This task is impossible because there simply are no objective differences in “quality” within the top quartile that anyone can agree on.)

Since every investigator whose grant is judged in the top quartile is outraged at the indignity of not being judged in the top decile, something needed to be done to make these investigators feel that their concerns were valued and that the system would be made more “fair”: i.e., would judge all their grants in the top decile. Of course, this is mathematically impossible: by definition, only one in ten applications can land in the top decile. The most interesting, but impossible, analysis of the new reviewing system is not a post-hoc analysis of what reviewers are doing, but rather a direct comparison of the percentiles assigned to the same grants under the old and new systems (impossible because no application is ever reviewed under both). My guess is that the old and new systems would identify the same grants as in the top quartile, and almost all the same grants as in the top decile (perhaps with some small differences: there may be some investigators whose grantsmanship styles are better suited to one system or the other).

The other purpose of Enhancing Peer Review was to dramatically streamline the system: to make the peer review process faster, easier, and more efficient on a per-grant basis. I think it actually did this job quite well, and I enjoy writing grants and reviewing them in the new system more than in the old. But the only metric that can tell us whether peer review was “enhanced” in the sense of “improving outcomes” is whether particular grants would receive different percentiles under the old versus the new system.