A couple of people have mentioned to me that they have trouble getting good looking photomicrographs. I thought I’d write up what I do. There are a lot of articles about this, particularly from a theoretical perspective. This is what has worked for me in a practical sense.
Note that this is for good old 19th-century brightfield light microscopy, primarily with Hematoxylin-Eosin stained images. Other forms of microscopy, such as confocal microscopy, are a whole different ballgame. The biggest postprocessing difference is in attempting deconvolution for image restoration. To increase resolution in confocal microscopy, you can measure the way the image is blurred by taking a picture of something you know is a point or small round dot, measuring how blurred it comes out, and then reversing that blur. There are very nice software packages for this, and I’ve done it. For brightfield microscopy, I have yet to find a way to make those little dots in 2D. The glass beads that work for me in confocal microscopy just don’t work for 2D brightfield images. If anybody knows where to get 2D dots for image deconvolution in brightfield microscopy, please let me know in the comments.
First, let’s look at the most common errors that occur when taking photomicrographs, and then we’ll try to fix them.
Anisotropic illumination:
Anisotropic illumination is a fancy way of saying that the light coming through the microscope is not the same across the whole field. Usually it is darker at the edges of the field and brighter in the middle. In addition, small pieces of dust in the light path can result in blurry dots and such. The preprocessing solution is to have a good microscope, clean light path, and clean slide — but it will always be an issue. The postprocessing solution is to normalize using a brightfield image, and maybe a darkfield image.
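In code, that normalization is classic flat-field correction: subtract the darkfield frame from everything, then divide by the (dark-subtracted) brightfield frame, which cancels both the uneven illumination and fixed blemishes like sensor dust. Here’s a minimal sketch with numpy and imageio; the file names are placeholders, and the percentile rescale at the end is just one way to bring the background back toward white.

```python
import numpy as np
import imageio.v3 as iio

# Load the sample image plus calibration frames (file names are
# hypothetical placeholders). "bright" is a photo of an empty, clean
# area of the slide; "dark" is a photo with the light blocked.
raw    = iio.imread("specimen.tif").astype(np.float64)
bright = iio.imread("brightfield.tif").astype(np.float64)
dark   = iio.imread("darkfield.tif").astype(np.float64)

# Flat-field correction: subtract the dark frame, then divide by the
# (dark-subtracted) bright frame to cancel uneven illumination and
# fixed blemishes such as sensor dust.
eps = 1e-6                               # avoid division by zero
flat = np.clip(bright - dark, eps, None)
corrected = (raw - dark) / flat

# Rescale so the background comes out white, and save as 8-bit.
corrected = np.clip(corrected / np.percentile(corrected, 99), 0, 1)
iio.imwrite("specimen_corrected.tif", (corrected * 255).astype(np.uint8))
```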
Here’s an example of what my brightfield image looks like for the camera in my office. I take my photos at fairly low illumination. Generally speaking, it’s easier to “fix” a darker image than one that’s washed out. In most cameras you can see the histogram in the display; I set mine so that the peak of the histogram is right at 50% of the dynamic range. In this image you can easily see that the illumination is darker at the edges.
In addition there are some serious point defects. I’m going to stretch the dynamic range of the image to show how bad it really is. The blue arrows point to dirt on the sensor. The specks are very small, and almost invisible with high illumination, but they show up against bland backgrounds. These are a big problem for me. No matter how hard I try to clean the sensor of my camera, I reach a point where new stuff arrives from the air, or from deposits left by the brush/cloth/cleaning spray, as fast as I can clean off the old stuff. The yellow arrow points to the shadow of a bit of dust in the light path. I don’t know which lens it’s on, but again, it’s almost impossible to get perfectly clean.
Of course, it’s best to have the cleanest light path you can have. It’s always easier to fix rare minor blemishes than to keep chasing stuff from a dirty light path, but depending on your environment and resources, you can only get so far.
Here’s a close-up of the dirt on the sensor:
Here’s a similar area in the original brightfield image. It’s hard to see when you are looking at the whole image, but it’s there:
You usually don’t notice it in an image unless there’s a bland background. When I use this camera for landscape photography, it’s invisible when looking at trees, grass, etc., but can be obvious in a clear blue sky.
It’s disgusting and very frustrating.
Spherical aberration:
Spherical aberration is a function of the curvature of lenses: the center of the image and the edges of the image don’t focus at exactly the same point. Thus, you can have the center in focus but the edges a little blurry, or the edges in focus but the center a little fuzzy. The preprocessing solution is to use a microscope with good lenses; this is not a problem for me. There are a number of attempted postprocessing solutions. The one I use, when I bother, is so-called “focus stacking.” I don’t have a good example picture, since my microscope doesn’t suffer from this much.
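For completeness, here’s a minimal sketch of the focus-stacking idea: given several aligned photos of the same field focused at slightly different depths, score local sharpness in each (here, a smoothed absolute Laplacian) and build a composite from the sharpest frame at each pixel. The file names and the smoothing sigma are placeholder assumptions, and dedicated packages use fancier blending than this.

```python
import numpy as np
import imageio.v3 as iio
from scipy import ndimage

# Hypothetical stack of RGB images of the same field, each focused at
# a slightly different depth (they must already be aligned).
frames = [iio.imread(f"stack_{i}.tif").astype(np.float64) for i in range(5)]

def sharpness(img):
    """Local sharpness: absolute Laplacian of the grayscale image,
    smoothed a little so noise doesn't dominate the score."""
    gray = img.mean(axis=2)                  # RGB -> gray
    lap = np.abs(ndimage.laplace(gray))
    return ndimage.gaussian_filter(lap, sigma=3)

scores = np.stack([sharpness(f) for f in frames])   # (n, H, W)
best = np.argmax(scores, axis=0)                    # sharpest frame per pixel

# Assemble the composite by taking each pixel from its sharpest frame.
stack = np.stack(frames)                            # (n, H, W, 3)
rows, cols = np.indices(best.shape)
fused = stack[best, rows, cols]

iio.imwrite("focus_stacked.tif", fused.astype(np.uint8))
```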
Chromatic aberration:
Light going through a lens bends a little, and the degree of bending is different for different colors. The effect is minimal in the middle of the field and most obvious at the edges, where you get a prism effect. This is a minor but real problem for my microscope. The postprocessing solution is to warp the red, green, and blue images so they come into alignment. For me, this works *a little*.
Here’s a picture of a renal abscess. I have the condenser cranked down to increase the effect.
Here’s a closeup. The image on the left is more from the middle of the field, and the image on the right (with the arrows) is from the edge. You can see that the colors are smeared, with blues on the right and reds trailing off to the left. On the other side of the image (not pictured) it’s the other way around: the reds are to the right and the blues to the left. The effect is more prominent at the edges of the field.
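For what it’s worth, here’s a rough sketch of the channel-warping idea. Lateral chromatic aberration smears colors radially, so one simple correction is to rescale the red and blue planes slightly about the image center until they line up with the green plane. This uses numpy/scipy; the file name and scale factors are hypothetical and would need to be tuned (or fitted) by eye for a given objective.

```python
import numpy as np
import imageio.v3 as iio
from scipy import ndimage

img = iio.imread("abscess.tif").astype(np.float64)
h, w = img.shape[:2]
center = np.array([h / 2, w / 2])

def scale_channel(channel, factor):
    """Rescale one color plane about the image center. Because the
    chromatic smear is radial, shrinking or expanding a plane by a
    fraction of a percent can realign it with the green plane."""
    rows, cols = np.indices(channel.shape).astype(np.float64)
    coords = np.stack([
        (rows - center[0]) / factor + center[0],
        (cols - center[1]) / factor + center[1],
    ])
    return ndimage.map_coordinates(channel, coords, order=1, mode="nearest")

# The scale factors below are made-up starting points, not measured
# values; in practice you adjust them until the fringes disappear.
r = scale_channel(img[..., 0], 1.002)   # red plane, hypothetical factor
g = img[..., 1]                         # green is the reference
b = scale_channel(img[..., 2], 0.998)   # blue plane, hypothetical factor

iio.imwrite("abscess_ca_fixed.tif",
            np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8))
```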
Focus:
There’s a limit on how well an image can be focused, and sometimes you just don’t get it right. This is a hard problem to fix for 2D light microscopy. I’ll talk about this more later, but the best solution is to use the best microscope and camera you can, and take the best picture you can. The postprocessing solution is deconvolution, which I have a very hard time getting to work at all. I can do it for confocal images, but not for light microscopic images. The reason I find it hard for 2D microscopy is that in order to sharpen well, you need to model how the image is blurred. The easiest way to do this is to take a picture of a small dot and measure how it’s blurred. For 3D confocal microscopy, folks sell small beads specifically for this purpose. I haven’t found a place that sells small dots for doing this in light microscopy. Some people have had good luck with estimating the blur function artificially, but it hasn’t been a big victory for me. Newer methods involve using AI to fill in the data based on what other images contain. I’m not a big fan of that at the moment.
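If you want to try the “estimate the blur artificially” route, here’s a minimal sketch using scikit-image’s Richardson-Lucy deconvolution with an assumed Gaussian point-spread function. The file name, PSF size, sigma, and iteration count are all hypothetical starting points to tune by eye, not measured values.

```python
import numpy as np
import imageio.v3 as iio
from skimage import img_as_float, restoration

def gaussian_psf(size=9, sigma=1.5):
    """Build a small normalized Gaussian as a stand-in PSF, since we
    have no measured point source for 2D brightfield."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

img = img_as_float(iio.imread("specimen.tif"))
psf = gaussian_psf()

# Deconvolve each color channel separately; more iterations sharpen
# harder but amplify noise and artifacts.
sharpened = np.dstack([
    restoration.richardson_lucy(img[..., c], psf, num_iter=20)
    for c in range(3)
])
iio.imwrite("specimen_sharpened.tif",
            (np.clip(sharpened, 0, 1) * 255).astype(np.uint8))
```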
Improper exposure:
I see a lot of images that are too dark or too bright. The preprocessing solution is to be careful about the picture you are taking. The postprocessing solution is histogram manipulation.
Poor dynamic range:
This is often a consequence of poor exposure. Dynamic range is how much of the space between black and white is used. Images with a poor dynamic range look dull. For instance, let’s say you can display brightness from 0 (black) to 100 (bright white). Ideally, the bright spots in your image should be at 100 and the dark at 0; that exploits the full dynamic range. Now, let’s say that you underexpose the image and instead go from 0 to 30. The image is dark, and only exploits 30% of the available range between displayed light and dark. You can also get this with poor staining of the tissue. The preprocessing solution is to stain well and expose well. The postprocessing solution is usually histogram manipulation, though I’ve also gotten some good results with high-dynamic-range processing.
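Here’s what a basic histogram stretch looks like in code: a minimal sketch with scikit-image, assuming a hypothetical underexposed file. Mapping the 1st and 99th percentiles (rather than the absolute min/max) keeps a few hot or dead pixels from wrecking the stretch.

```python
import numpy as np
import imageio.v3 as iio
from skimage import exposure

img = iio.imread("underexposed.tif")

# Map the 1st and 99th intensity percentiles to full black and full
# white, so an image that only used, say, 0-30% of the range gets
# stretched across the whole displayable range.
p1, p99 = np.percentile(img, (1, 99))
stretched = exposure.rescale_intensity(img, in_range=(p1, p99))

iio.imwrite("stretched.tif", stretched)
```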
Bad white balance:
The background of the image should normally be white (unless you are doing polarization and have a black-ish background). It almost never is. My halogen light is yellowish at the illumination level I use. The preprocessing solution is to use whiter light or filters (the common solution is a blue filter in the light path). The postprocessing solution is to adjust white balance in an image-processing program. I have found that the solution to anisotropic illumination usually also takes care of white balance.
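If flat-field correction isn’t an option, a simple “white patch” balance is one fallback. This sketch assumes the brightest pixels are background that ought to be neutral white; the file names and the 99th-percentile choice are my own placeholder assumptions.

```python
import numpy as np
import imageio.v3 as iio

img = iio.imread("yellowish.tif").astype(np.float64)

# "White patch" balancing: take the bright end of each channel (99th
# percentile, to ignore hot pixels) and scale the channels so those
# bright background pixels come out equal, i.e. neutral white.
white = np.percentile(img.reshape(-1, 3), 99, axis=0)  # per-channel
balanced = img * (white.max() / white)

iio.imwrite("balanced.tif", np.clip(balanced, 0, 255).astype(np.uint8))
```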
There are others, but if you take care of these, you are probably good to go.
The concept of “good enough”:
It is possible to really go off the deep end with this stuff. Years ago, I was obsessed with fixing everything. But, I’ve found that most people are happy to see a “good enough” photo that looks good and shows what it needs to show. Moreover, the more you postprocess an image, the more you are likely to start making it look artificial. Some postprocessing steps provide very obvious improvement with minor effort — background subtraction, color balance, histogram stretching, etc. Some provide very little improvement if you have a good scope and camera — fixing chromatic aberration, focus stacking, etc.
So, while I’m going to discuss a number of things, it may not be necessary to do all of them to get a “good enough” photo.
A quick note about software. In this series, I’m only going to use open source software. There are very good proprietary packages around. I’ve heard good things about MATLAB and others. But… I’m cheap so I don’t use them. I’ll go into the software packages I use at some length later.
Olympus’s LED light source, microscope objectives, and software for their current DP28 4K camera obviate many of these issues for service work. The LED light source in particular is a real contrast with the 100-watt halogen ones I used before. I just wonder if/when the technology that Olympus holds hostage will be freed. The lockdown security for the software is elaborate. The product is great but $$$$$. Every single microscope in my prior surg path practice has this set-up, together with stacked 4K monitors.
Sure. This is all software, and it’s not rocket science. If you can afford to pay for it, then that’s a solution. I really can’t afford it, so I have to do it by hand.