# Dual bandpass filters

by cfastie | 24 Nov 04:01

Above: I don't really understand how interference filters work, and I'm pretty sure the person who drew this diagram didn't either.

Plant health indices like NDVI are based on a comparison of the amount of visible light versus near infrared (NIR) light that is reflected from leaves. Often two cameras are used to capture separate photos of visible and NIR light. Because of the way consumer digital cameras are designed, it is possible to modify one so that a single camera can capture both visible and NIR images every time a photo is taken. Single cameras generally do a poor job separating visible and NIR light, so this approach is inferior to a dual camera system. However, kites can lift two or more cameras even in light winds, while most UAVs (drones) costing less than $1000 cannot do so safely. So there is much interest in single camera NDVI systems even though results from dual camera systems are substantially more useful for plant health analysis.

NDVI is traditionally derived from satellite data which include images in several different wavelength bands. The standard bands used for NDVI are a red band between 600 and 700 nm (but usually only 50 to 90 nm wide) and an NIR band somewhere between 700 and 1000 nm. Since the first LANDSAT was launched in 1972 (far left), the bands available for computing NDVI have varied; the NIR band has varied more than the red band. Both have usually been narrower than the bands typical of DIY NDVI systems made from consumer cameras (far right).

Consumer digital cameras capture three photos every time the shutter is pressed. Each photo captures light which is mostly a single color (red, green, or blue) but can also include a little bit of all colors. The three photos (color channels) are combined to make a normal photo. The wavelength range of each color channel of a digital camera includes most visible wavelengths (400-700 nm) but is dominated by a single color (a ~100 nm range of wavelengths).
When the filter which blocks near infrared light (which would otherwise ruin normal photos) is removed from a consumer camera, all three color channels capture NIR light in addition to the single color normally captured.

Two examples of the spectral sensitivity of consumer cameras after the IR block filter has been removed. All three color channels also capture NIR light to varying degrees depending on the wavelength of the NIR light.

Photos from cameras without their IR block filters are not useful for plant health analysis because visible and NIR light is mixed in each channel. By adding a new filter, it is possible to allow only one type of light to be captured by a channel. For example, if a red filter replaces the IR block filter, no blue or green light can reach the sensor and only NIR light will be captured in the blue and green channels. The red channel will capture NIR, but also red light, and the two will be mixed in an unknown proportion. So although this modified camera can capture a mostly pure NIR image (e.g., in the blue channel), it cannot also capture a mostly pure visible image.

A long pass red filter (the curve for a Wratten 25 is shown) passes red light and also longer NIR wavelengths, but does not pass most green and blue light. In modified consumer cameras, the green and blue channels can capture rather pure NIR images. The red channel will capture red light, but also much NIR, which will be mixed with it in a proportion dependent upon the sensitivity of the particular camera to NIR.

Another type of filter can accomplish the same thing, but can pass much narrower bands of the desired colors. Interference or dichroic filters are not just colored glass which absorbs the unwanted wavelengths, but are constructed of thin layers of glass with reflective material between the layers. By selecting the reflective materials and the thickness of each layer, all unwanted colors can be prevented from passing through.
This allows not only the transmission of very narrow bands of color, but the transmission of multiple bands.

Transmission curve for a dual bandpass dichroic filter. Only red light (620 to 700 nm) and mid-range NIR light (810 to 900 nm) are transmitted, and as much as 90% of those wavelengths is transmitted. No other colors can pass through this filter.

When the dual bandpass filter above is used on a camera without its NIR block filter, the captured photos are similar to those captured with a Wratten 25 filter (figure above). However, much narrower bands of each color (red and NIR) will be captured because no other light can enter the camera. Compared to the result with a Wratten 25 filter, the width of these bands will be much more similar to the satellite data used to derive NDVI.

Although the captured bands are narrower, the problem of mixed red and NIR light persists. The camera's blue and green channels will receive very little visible light, so most of what they capture will be NIR. But the red channel will capture lots of both red and NIR light. So dual band dichroic filters solve one of the important problems with DIY NDVI systems (wavelength bands are too wide) but not the other one (visible and NIR are mixed).

These filters also do not pass as much light, so the fast shutter speeds desired for aerial photography may not be possible (the photo below was taken at ISO 80, 1/50 second, f/2). The primary obstacle to DIY use of dichroic filters is cost (typically $100 to $400 per filter).
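
Once the channels are separated this way, the NDVI arithmetic itself is simple. Below is a minimal sketch in Python/numpy, assuming (as described above) that the blue channel holds the mostly pure NIR image and the red channel holds the red image; the channel assignments are specific to this filter setup, not a universal rule:

```python
import numpy as np

def ndvi(photo):
    """NDVI from a dual-bandpass photo: (NIR - red) / (NIR + red).

    Assumes the red channel (index 0) captured red light and the
    blue channel (index 2) captured NIR, per the filter setup above.
    Input is an H x W x 3 array of channel values.
    """
    red = photo[..., 0].astype(float)
    nir = photo[..., 2].astype(float)
    denom = nir + red
    denom[denom == 0] = 1e-9  # avoid division by zero in dark pixels
    return (nir - red) / denom

# toy pixel: bright NIR, dim red -> high NDVI, as expected for healthy foliage
pixel = np.array([[[0.2, 0.0, 0.8]]])
print(ndvi(pixel))
```

Note that this sketch ignores the NIR contamination of the red channel discussed above, which pushes NDVI values lower than they should be.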

Normal color photo of a test scene for comparing filters. Taken with a PowerShot S110 with its internal IR block filter removed, but with another IR block filter screwed onto a filter tube in front of the lens.

Test scene captured with a dual bandpass filter with a transmission curve similar to the one in the figure above (this one has peaks centered on 660 nm and 850 nm). Before this photo was taken, a custom white balance was performed using a piece of red origami paper in the sun.

NDVI image from the photo above. NDVI values on the color bar range from -1 to +1.

@cfastie - Great work, Chris. Have you tested to see what your max shutter speed is with the DB filter on a normal sunny day with the same ISO or slightly higher?

I took another batch of photos (research note soon) with ISO 200 and the shutter speed at 1/30 second. The f-stop varied between f/2 and f/5.6. It was an overcast day at the winter solstice, so about as dark as you would ever want to take this type of photo. The camera was on a tripod, so image quality was okay. Even on a bright day in summer, handheld photos with the dichroic filters will require some care. Aerial photos will be a challenge, but at ISO 1600 you might be able to use a shutter speed of 1/400 second. ISO 1600 will introduce scads of noise if the camera is a point and shoot, but if your goal is NDVI, that might not matter so much. Color balance might also be affected by ISO 1600 and that could impact NDVI results.

A shutter speed of 1/400 could produce acceptable aerial photos if the camera was lofted by a kite on the perfect wind, or by a balloon on the perfect non-wind. So aerial photos with these dichroic filters are possible, but everything has to be in your favor and image quality will always be greatly compromised compared to other filters (e.g., Wratten 25).
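
The exposure trade-off above is plain stop arithmetic: every doubling of ISO buys one stop, i.e., a halving of the required exposure time. A quick sketch using the overcast-day exposure mentioned above:

```python
import math

def equivalent_shutter(shutter_s, iso_from, iso_to):
    """Shutter speed giving the same exposure after an ISO change,
    holding aperture and scene brightness fixed. Each doubling of
    ISO is one stop, halving the required exposure time."""
    stops = math.log2(iso_to / iso_from)
    return shutter_s / (2 ** stops)  # same as shutter_s * iso_from / iso_to

# ISO 200 at 1/30 second (overcast day) moved to ISO 1600 gains 3 stops,
# allowing roughly 1/240 second; brighter light would allow 1/400 or faster
t = equivalent_shutter(1 / 30, 200, 1600)
print(f"1/{round(1 / t)} s")
```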

Chris

Hi, thanks for the post. I'm planning to re-convert my NIR-adapted camera (currently with a blue filter) due to its rather poor performance in detecting plant stress. Would you recommend a dual-band filter (e.g. red + NIR, DB660/850) over a simple red filter (e.g. 630 nm)?

The primary advantage of dual band filters is that they pass much narrower bands of wavelengths than red or blue filters (which also pass NIR). These narrower bands can make the NDVI results easier to interpret and can provide better information about plant stress.

Good NDVI results with either type of filter require the same type of careful calibration or the kludge of an artificial custom white balance. They also require estimating the proportions of NIR and visible light which are mixed in one channel. They also require well exposed photos and proper post processing (e.g., applying color schemes). There is no guarantee that a dual band filter will provide better results than a colored filter. If you haven't been able to get good results with a good blue filter, a narrow band filter won't necessarily fix the problem.

Dual band pass filters introduce some new difficulties:

1. They are usually very expensive.
2. They don't work well when placed in front of the lens (see this note).
3. They are often made of thicker glass than the IR cut filter they replace inside the camera, and with some cameras this can make focusing a problem.
4. They do not pass very much light so exposure times must be longer.

If you can work with these issues, your results could improve.

Chris

Hi Chris, Many thanks. I managed to get a DB660/850 filter and should have my camera (Olympus E-M10 Mark II) converted this week. It did cost an arm and a leg, but I hope it will be worth it :)

We are running a drought experiment, and my goal is to see whether the converted camera can detect signs of (progressing) plant stress. The experiment will take several weeks and will take place in a glasshouse (under natural light, though mostly diffuse), meaning that I will need to compare images taken on different days, with different light conditions. The (ambitious) goal is to analyse the changes in NDVI in the studied plants during the experiment.

I'm well aware of the numerous issues with single-image NDVI estimation; however, I'm still not sure how to tackle them. I will be saving all images in RAW and have worked on a simple Matlab code to pre- and post-process the images for NDVI analysis. This will hopefully let me fix some of the mistakes (e.g. incorrect white balance) in image processing, but if the mistakes can be avoided, I would rather not make them in the first place.

My main concerns are:

1. White balance - the DB660/850 filter is relatively new and I haven't found any recommendations for white balance settings (particularly for NDVI). More importantly, I'm not sure whether it's better to set a new custom white balance each day (e.g. on a grey/blue/red card in shade, or on grass) or to keep using the custom white balance selected on Day 1 of the experiment, regardless of the changing light. I'm leaning towards the first option, as the second can be implemented later in image processing, if needed.

2. Spectral sensitivity of red and blue channels and the 'contamination' of the VIS channel (in my case, red) with NIR - I've read Ned Horning's great articles about calibration, but I must say the camera calibration for NDVI is still rocket science to me.

You are correct that your multi-day observation will present some problems. It is possible that doing a custom white balance on each day will compensate for the day-to-day variation in light quality. But I am not aware that this has ever been tested. Even if it worked, light quality varies throughout the day, minute by minute if there are clouds.

A better way to compensate for varying light conditions is to include two or more calibration targets in each photo. If you know the proportion of 660nm to 850nm light reflected from those targets, you can adjust the photos before they are used to compute NDVI. Ned's plugin for Fiji does all the math.

Targets can be expensive, but are probably worth it in this case. You can make your own targets if you know someone with a spectrometer. All you need to know is how much of each wavelength is reflected when you shine the same brightness of 660 nm and 850 nm light on the targets.

I don't think this calibration process will completely solve the problem of contamination of the red channel with NIR. But it might reduce the importance of that problem.
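
The adjustment performed with calibration targets amounts to fitting, for each band, a linear map from pixel value to known reflectance using the targets in the photo, then applying that map to the whole image. A rough sketch of the idea (the target pixel values and reflectances below are invented for illustration; Ned's Fiji plugin is the tested implementation):

```python
import numpy as np

def calibrate_channel(pixel_values, reflectances):
    """Fit pixel value -> reflectance for one band using two or more
    calibration targets of known reflectance. Returns (gain, offset)."""
    gain, offset = np.polyfit(pixel_values, reflectances, 1)
    return gain, offset

# hypothetical dark and light targets: mean pixel values in each band,
# and their measured reflectances at 660 nm and 850 nm
red_gain, red_off = calibrate_channel([30, 200], [0.05, 0.60])
nir_gain, nir_off = calibrate_channel([40, 180], [0.08, 0.55])

# convert arbitrary scene pixels to estimated reflectance, then NDVI
red_refl = red_gain * 120 + red_off
nir_refl = nir_gain * 150 + nir_off
ndvi = (nir_refl - red_refl) / (nir_refl + red_refl)
```

Because the fit is redone for every photo that contains the targets, day-to-day (or minute-to-minute) changes in light quality are absorbed by the gain and offset.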

Make sure you know what your camera does with custom white balance information when camera RAW images are captured. The RAW data itself ignores white balance, but sometimes the white balance information is saved with the file and can be applied to the RAW data afterwards. When that happens, the resulting photo is no longer a RAW image.
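
The reason this works is that white balance is just a per-channel multiplication that can be applied to the raw data at any time after capture. A toy illustration (the pixel values and multipliers here are hypothetical):

```python
import numpy as np

# hypothetical raw RGB pixel and hypothetical channel multipliers;
# raw sensor values are stored without white balance applied
raw = np.array([[[100.0, 80.0, 120.0]]])
wb_multipliers = np.array([2.0, 1.0, 1.5])

# "applying white balance" is just scaling each channel
balanced = raw * wb_multipliers
print(balanced)  # red doubled, green unchanged, blue scaled by 1.5
```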

Chris

1) Regarding the reflectance targets. I bought a dozen ceramic tiles in different colours (including white, grey and black), which I can bring to the camera conversion company for reflectance measurement using a spectrometer. I won't be able to have these targets in each photo, but I was thinking of taking a photo of them every 50 shots or so. Do you think it would be sufficient?

2) Regarding white balance - still not sure if it's better to keep using the same WB on all days, or set it manually every day (e.g. on my red tile). I hope that calibration using my reflectance targets and adjusting the photos later in post-processing will solve the problem.

3) Regarding WB data in RAW files - I'm converting my ORF images to DNG and then processing them in Matlab where I can get (and change, if needed) the channel multipliers.

4) Regarding the contamination of the red channel with NIR - I bought two lens filters - a Hoya R72 (infrared) and a Hoya UV+IR cutoff. Each time I take a measurement of my reflectance targets, I'm planning to put the camera on the tripod and take three pictures:

- with no filter on the lens (R: red and near infrared, B: near infrared),
- with the Hoya R72 filter (R: near infrared, B: near infrared),
- with the Hoya UV+IR cutoff (R: red, B: little bit of red?).

My thinking is that comparing the channel values for different targets in these three pictures should help me estimate the contamination of the red channel with NIR - at least in the case of my targets. But is the contamination the same when taking a photo of a plant, the reflectance of which is so different in red and near infrared? I could of course put a plant or two next to the tiles when taking the 'reference' images, but would one plant be representative of both healthy and stressed plants? Any insight would be most welcome...

1. One photo every 50 shots might be sufficient. It depends on how much the light changes during that time (5 minutes?, an hour?, a day?). You might want the calibration targets in the photo every time the light changes.
2. If you are using calibration targets and adjusting the values in each channel, then white balance is probably irrelevant.
3. The ORF images (Olympus Raw Format) probably save white balance data which can be applied later if desired. DNG image files (Digital Negative) also store white balance information which can be applied later. I don't know if your conversion from ORF to DNG will transfer the white balance information. MatLab can probably apply any white balance settings you want, so there is no need to set a custom white balance before you take the photos. More importantly, see number 2 above.
4. This is a very clever way to solve this problem. Make sure you capture raw image data and get it into MatLab without applying any white balance or gamma correction. All of the photos should be taken with the same exposure settings (shutter speed, aperture, ISO) which might be a challenge. The image with the R72 will tell you how much NIR is captured in the red channel. The IR cut filter (or narrow band red filter) will tell you how much red is captured in the red channel. You will then have an estimate of the proportion of Red:NIR captured in the red channel of your NDVI photos.
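
The three-photo comparison in point 4 reduces to a simple ratio once you have mean raw red-channel values over the same target. A sketch with invented numbers:

```python
def nir_fraction(red_no_filter, red_with_r72, red_with_ircut):
    """Fraction of the unfiltered red channel signal that is NIR, from
    three photos of the same target at identical exposure settings:
      no lens filter -> red + NIR, Hoya R72 -> NIR only,
      UV/IR cut -> red only.
    If the raw values are linear, red_with_r72 + red_with_ircut
    should roughly equal red_no_filter (a useful sanity check)."""
    return red_with_r72 / red_no_filter

# invented mean raw values over one calibration tile
print(nir_fraction(520, 130, 390))  # -> 0.25 (a quarter of the signal is NIR)
```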

Will this proportion be the same for photos of a calibration target as it will for photos of foliage? I don't know, but I bet you can figure out how to learn the answer.

Let us know what you find for the NIR:Red proportion in the red channel.

Chris

Hi Chris, I made a simple test earlier today; so far I have only managed to look at the JPEG file data. (I have some old code for reading DNG into Matlab, based on Rob Sumner's detailed guide - you can find the pdf online - but I need to fix a few things in it first.) The results below are based solely on JPEG files and thus certainly introduce some error.

I took three photos, all with the same settings (ISO, aperture, shutter speed and white balance):

1) no lens filter (thus only the Midopt DB660/850 in the camera)
2) Hoya R72 lens filter (blocking visible light)
3) Hoya IR cutoff filter (blocking near infrared)

Image 1 was a little overexposed and Image 2 was a little underexposed, which was unavoidable if I wanted to keep the settings unchanged.

As reference targets, I used 10 different ceramic tiles as well as dry and wet grass and three leaves. The camera was on a tripod.

Unfortunately, when taking the photos I forgot about white balance, which was set on the red tile a few days earlier. The same WB was used in all photos.

I imported the JPEG images into Matlab and calculated the mean pixel value in all three channels within 17 polygons representing my different targets (tiles, grass and leaves). The photos as well as a simple spreadsheet with the obtained values are here:

Results in a nutshell:

At least with the white balance set on a red tile, the red channel pixel values in Image 1 (camera only) are very similar to those in Image 3 (with the IR cutoff lens filter), meaning very low NIR contamination of the red channel, regardless of target (tiles or plants).

Similarly, blue pixel values are very similar between Image 1 (camera only) and Image 2 (R72 filter), meaning low contamination of the blue (NIR) channel with red light (particularly in the case of plants; the difference was a little higher for some of the tiles).

The green channel seems to be the messiest.

I'll be happy to write more (perhaps in a separate thread) once I have looked at the raw files... which may take some time as I can't figure out how to omit applying (or undo if applied automatically) white balancing.
