What I’m also going to teach today: a little image processing


My development class also has a lab. The last few weeks have all been spent teaching them how to get good optics on a research scope and how to take photomicrographs with my PixeLink camera system. Today I’m going to show them how to process an image appropriately for publication, so they’ll learn a few digital enhancement tricks along with some ethical rules. I lay down a few laws about using image processing on scientific data:

What you must do to your image:

  • You must archive the original data and work with a copy. If I ask to see the original after you’ve enhanced the image up the wazoo, you better be able to show it.

  • You must document every step and every modification you make. You’re going to describe everything either on the image itself or in a figure legend; if this were to be published, you’d probably include it in the Methods section. (There’s a short scripted sketch of what that record could look like after these lists.)

  • You must explain the scale and orientation of the image. The scale is usually shown by including a scale bar; orientation may be shown either by including annotations (text labeling landmarks in the image) or by an explanation in the figure legend, such as noting that it is a sagittal or horizontal section.

  • You must save the image in a lossless format, such as .png or .psd or .tiff. Do not save it in a lossy format like .jpeg, which can add compression artifacts.

What you may do to your image:

  • You may crop and rotate the image.

  • You may adjust the contrast and brightness for the whole image.

  • You may carry out simple enhancements, like applying a sharpening filter or unsharp masking, to the whole image…but remember, document everything!

  • You may splice multiple images together to produce a photomontage; you can also insert panels with enlarged or otherwise enhanced regions of the image, as long as it is absolutely clear what you’ve done.

What you may not do to your image:

  • You must not carry out selective modifications of portions of the image; you cannot sharpen the cell you care about and then reduce the contrast for other regions, for instance. You should not burn or dodge regions of the image.

  • No pixel operations or retouching: you are not allowed to go into the image and paint your data into existence!
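
To make the documentation rule concrete, here is a hypothetical sketch of what a fully documented, whole-image-only workflow could look like. The file names, crop box, and filter settings are invented, and the Pillow library is just a stand-in for whatever software you actually use:

    # Hypothetical sketch: whole-image adjustments only, with every step recorded.
    from PIL import Image, ImageEnhance, ImageFilter

    steps = []                                   # running log for the legend / Methods

    img = Image.open("original.tif")             # the archived original stays untouched
    img = img.rotate(-2.0, expand=True)
    steps.append("rotated -2.0 degrees")
    img = img.crop((100, 80, 900, 680))
    steps.append("cropped to box (100, 80, 900, 680)")
    img = ImageEnhance.Brightness(img).enhance(1.10)
    steps.append("brightness x1.10, whole image")
    img = ImageEnhance.Contrast(img).enhance(1.20)
    steps.append("contrast x1.20, whole image")
    img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=3))
    steps.append("unsharp mask (radius 2, 80%, threshold 3), whole image")

    img.save("figure1.png")                      # lossless output format
    print("\n".join(steps))                      # paste into the legend or Methods

Hand over the original, the script (or an equivalent written list of steps), and the final image, and anyone can check exactly what was done.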

We do a lot of this preliminary basic stuff because I run the course out of my research lab, rather than a student lab. I want to make sure they’re not going to break anything, and also that they know how to do good imaging, a skill they’ll find useful in other courses and in research (years ago when I taught this stuff, we’d also do black&white darkroom work — nobody does that any more, so now it’s all photoshop). The goal is to get them all able to churn out lovely photographic data, so later I can just hand them some nematodes or fruit fly embryos and tell them to do their own experiments and observations, just show me the pretty pictures when they’re done.

It’s a good life, being able to sit back and let students bring me gifts of biological beauty. I think they’ll also be posting some of these to their blogs.

Comments

  1. IslandBrewer says

    I was a graduate student at about the time digital pictures were starting to take off in laboratory use. This was after the Baltimore/Imanishi-Kari thing, and we had jokes about the “data generating buttons” on Photoshop.

    Digital images for publication were still very controversial, and my adviser insisted that all samples that may be pictured be kept (samples frozen, gels dried, etc.), or that he be present at their digital imaging so he could, in good conscience, vouch for the data. We young turks called him paranoid at the time, but in hindsight, I really don’t blame him.

    Oh, yeah, and we used to wear an onion on our belt, because it was the style at the time. But you could only get the yellow onions, because of the war …

  2. paulburnett says

    I take it “North” or “Here be dragons” would not be acceptable annotations?

    How about false coloration? Astronomers do it all the time.

  3. Christopher says

    I am a huge fan of the ImageMagick command line tools for scientific work. The library itself is open source, so you can always show exactly what happened to your image if you dig deep enough. The command history that converts the original image into the publication image can be collected into a script. This is the ultimate proof of methods: rerun the script and everything should come out the same. Photoshop should be used for LOL cats, not scientific image manipulation. But then again, IMHO, you cannot reach the zen of scientific image manipulation without applying pixel-level functions in something like Matlab/Octave/R/Python. Especially if you have more than three bands of imagery…
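
    For instance, a minimal sketch of that idea (file names and settings invented), driving ImageMagick’s convert from Python so that the whole conversion history is one rerunnable script:

        # Minimal sketch: the entire original-to-figure conversion as one script.
        # File names and settings are invented; each flag is a standard ImageMagick option.
        import subprocess

        subprocess.run([
            "convert", "original.tif",
            "-rotate", "90",                       # whole-image rotation
            "-crop", "800x600+120+40", "+repage",  # crop, then reset the canvas offset
            "-brightness-contrast", "5x10",        # +5 brightness, +10 contrast, whole image
            "-unsharp", "0x1",                     # mild unsharp mask over the whole image
            "figure1.png",                         # lossless output
        ], check=True)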

  4. Zugswang says

    The last lab I worked in, we didn’t even allow altering the overall brightness and contrast of the image for histological sections, so as to accurately reflect differences in signal across multiple sections captured with the same development/capturing parameters (though this was all done automatically on a $100K microscope with Z-stacking software that could specify very explicit image capturing parameters).

    Also, regarding exceptions to piece-wise image editing: what about artificially increasing depth of field? I used to take multiple photos of the same embryo at different focal planes and combine them manually into a single image, so the entire embryo could be seen clearly, since the camera on our dissecting scope could not do this on its own.

    I know – on one hand, we had a $100,000 robotic confocal microscope, and on the other hand, I got stuck doing whole-embryo image stacking in GIMP with a $500 camera stuck on a 20-year-old dissecting scope.

  5. says

    If you know what you’re doing in photoshop, you do everything with adjustment layers atop the base image. That way, the basic image is right there, untouched, and you can see the effect of each of the adjustments.

  6. says

    Disallowing contrast/brightness tinkering is silly. You’re doing that all the time on the scope, every time you adjust the condenser. It shouldn’t be a principle of data ethics, but of getting good data: the best images will be captured with an adequate dynamic range in the first place, and the best place to improve an image is in the scope, not in post-processing.

    Doing that kind of depth-of-field manipulation is a grey area; I’ve done it too. But you have to explain exactly what you’ve done somewhere in the paper.

    My general rule is that you ought to be able to hand someone your original image and the text description of the operations that you performed, and that they could then reproduce your final image. Picking and choosing focal planes for different regions of the image fails that criterion. Doing it automatically with, say, standard confocal microscopy software, is fair game.

  7. says

    Oh, also, I’ve done some experimental imaging work where I wrote an algorithm that scans a stack of images and does a weighted average, where the weighting is determined region-by-region by the spatial frequency. That’s fair, except that I used custom code that nobody else in the universe had.
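
    Roughly along these lines (a hypothetical sketch of the general approach, not the actual custom code, using smoothed Laplacian energy as the spatial-frequency measure):

        # Rough sketch of frequency-weighted focus stacking; not the original code.
        # Each frame is weighted, pixel by pixel, by its local high-frequency content.
        import numpy as np
        from scipy import ndimage

        def focus_stack(frames, window=15):
            """frames: list of 2-D grayscale arrays from different focal planes."""
            stack = np.stack([f.astype(float) for f in frames])
            # local spatial-frequency measure: smoothed magnitude of the Laplacian
            energy = np.stack([
                ndimage.uniform_filter(np.abs(ndimage.laplace(f)), size=window)
                for f in stack
            ])
            weights = energy / (energy.sum(axis=0) + 1e-12)  # normalize across the stack
            return (weights * stack).sum(axis=0)             # weighted-average image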

  8. Christopher says

    My general rule is that you ought to be able to hand someone your original image and the text description of the operations that you performed, and that they could then reproduce your final image.

    Which is exactly why I think scientists should, by default, use some sort of script for their raster data manipulation. The original plus the script becomes your science: you can hand both to someone, they can reproduce your methods exactly by running the script on the original image, and they can then evaluate the scientific justification for every modification in the script.

    The easiest scripting is ImageMagick combined with simple shell scripting. If you are fluent in another computer language, there is probably an ImageMagick binding for it, so you can do your operations in the language of your choice and the resulting source code is now your methods. Far more justifiable than a list of Photoshop operations.
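
    For example, a hypothetical snippet using Wand, one Python binding for ImageMagick (the operations and settings are invented; the point is that the source code is the methods):

        # Hypothetical example using Wand, a Python binding for ImageMagick.
        from wand.image import Image

        with Image(filename="original.tif") as img:
            img.rotate(90)                                     # whole-image rotation
            img.crop(left=120, top=40, width=800, height=600)  # crop to region of interest
            img.modulate(brightness=105)                       # +5% brightness, whole image
            img.unsharp_mask(radius=0, sigma=1, amount=1, threshold=0)
            img.save(filename="figure1.png")                   # lossless output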

  9. Christopher says

    For some real fun, load a bunch of raster data into a Postgres database using the PostGIS raster extension. With that you can do your initial analysis in an SQL script, then extract the distilled data into the programming language of your choice for in-depth processing. The whole process, from loading raw imagery into the database to the final synthesized image, is fully documented as code; code that can be read and rerun by anyone. Original data + text-based transformation script is the ultimate in scientific openness. IMHO of course…

  10. cardinalsmurf says

    I thought Photoshop already had the ability to save an entire history of every manipulation made to the source image. Is this insufficient for your purposes? It just seems more efficient to me to be able to save your source image and history in a single file.

    What does your software do that Photoshop cannot?

  11. carlie says

    Why is the line drawn at dodging/burning specific areas? That’s what you do if you add a spotlight to the specimen before photographing (if it’s not on a microscope), and dodge/burn doesn’t change the actual information, just how well you see it.
    (I definitely wouldn’t approve of area-specific sharpening, though)

  12. Christopher says

    Text files tend to be easier to document (the whys behind manipulation choices) and are easier to store in a revision control system of some sort. The best thing is that a simple text file will still be readable far into the future, long after current software programs are EOL’d.

  13. says

    Dodging/burning has always been dodgy. It worked its way into a lot of scientific photography, but it always left me feeling a little bit dirty.

    Back in the ’70s/’80s I heard a lot of argument about it. Photoshop has made the debate seem rather quaint nowadays.

  14. optimalcynic says

    When I was in academia I made it a personal rule never to hand-edit images, even diagrams. If it wasn’t generated by a script or in some other procedural way, I found a way to script it. Python, Matlab, TikZ, ImageMagick as a previous commenter mentioned – all good options. I had the rule for two reasons:

    1) When my supervisor wanted changes, and he always did, I didn’t have to keep umpteen thousand revisions of the image with various confusing names. I could use a version control system on the script and apply it to the starting data (which *never* changed, being raw data). If he said “it looked better last Wednesday” I could instantly go back and generate a side-by-side to compare.

    2) Integrity – I could give the script and the data to someone else and they could recreate the same image, and see how it was done. I tried keeping method lists but it was too easy to forget a step.

    I broke that rule a few times, and every single time I regretted it – usually sooner rather than later.

  15. Dave Booth says

    I love reading your “classes” posts, PZ – they take me back to those days when my molecular bio knowledge and lab skills were still current. Your earlier one about inter- and intracellular signalling, in particular, was a moment of serious nostalgia.

    I still use the knowledge occasionally when it comes to debunking some of the crazier stuff I see on the ‘net – for example pointing out that you can’t catch a disease from a vaccine that contains not only none of the live virus but not even a complete set of PIECES of the live virus :) I may have spent the last few decades as an IT guy but some things are so daft that even a washed out old has-been like me knows enough to call BS….

  16. says

    Now I’m wondering if your students do any art, design or drawing classes, PZ. There’s sometimes a lot to be said for drawing a sketch of what you think the salient features of an image are, even with all the tools you have for manipulating a captured image.

  17. chrislawson says

    Some journals claim to reject any image manipulation in papers, although I’m not sure if those principles have been tested. I agree that minor adjustments like brightness should be allowable. The basic question should be: does this manipulation create an impression not warranted by the data?

    There are plenty of image manipulation programs that are “non-destructive”: that is, they retain the original image and record the adjustments as a separate file. Aperture is one of many examples. These were designed for photographers so that (1) their original images weren’t accidentally erased and (2) a set of image manipulations could be recorded and applied to large batches of images at a time, thus easing workflow. But the principle applies just as well for scientific purposes.

  18. chrislawson says

    paulburnett@4:

    I don’t read astronomy journals enough to know what standard practice is, but most false-colour imaging is done because the original data were captured in a non-visible part of the spectrum. I guess a true non-manipulation fundamentalist would insist on seeing X-ray data in actual X-rays…thus making any images invisible and carcinogenic :-)
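
    In practice it usually just means mapping a single-channel intensity image through a named colormap; a hypothetical matplotlib sketch with made-up data:

        # Hypothetical sketch: false-colour rendering of single-band (e.g. X-ray) data.
        # The data are random; the only "manipulation" is a stated, whole-image colormap.
        import numpy as np
        import matplotlib.pyplot as plt

        data = np.random.rand(256, 256)             # stand-in for a single-band image
        plt.imshow(data, cmap="inferno")            # colormap to be named in the legend
        plt.colorbar(label="intensity (arbitrary units)")
        plt.savefig("false_colour.png")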

  19. Spoon says

    IDL is my go-to language for image processing for research. It’s a pretty straightforward language and is great for building figures from processed images.

    I’ll have to muck about with a few of the programs mentioned above, always looking for new things to try.

  20. deee says

    You should teach your students to use open source tools for image processing, so that they won’t be locked in to some particular vendor’s software. Photoshop skills are only useful as long as you can afford to pay for a Photoshop license; skills with FOSS software benefit you for your entire life.