# Question: Bad NDVI Results from Pi NOIR with Blue Filter

by nstarli

[Image] NGB photo acquired after the white balance procedure

[Image] NDVI image from the Infragram Sandbox

[Image] NDVI using the cfastie colormap

I am taking images of maize plants with the Raspberry Pi NoIR camera and the blue filter it is sold with on the RPi website. I set the white balance (awb_gains) using the technique from the previous note: point the camera at deep blue origami paper, let the auto white balance settle, then set awb_mode to off and fix the gains at the values reached while viewing the paper. Above is the NGB image acquired (it looks pretty good; plant material is a yellowish-orange color). However, when I run this image through the Infragram Sandbox and my own NDVI processing code, I get bad results, shown below the NGB image.
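For reference, the white-balance lock described above can be scripted roughly like this with the picamera library (a sketch; the 5-second settling delay and the output filename are assumptions, not a tested recipe):

```python
from time import sleep
from picamera import PiCamera

# Point the camera at the deep blue paper before running this.
camera = PiCamera()
sleep(5)                       # let auto white balance settle on the blue target
gains = camera.awb_gains       # save the gains AWB converged on
camera.awb_mode = 'off'        # disable auto white balance
camera.awb_gains = gains       # lock in the saved gains
camera.capture('ngb_image.jpg')
```

The locked gains then apply to every subsequent capture until the camera object is closed.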

To give some more information on the setup: we are taking the images indoors under fluorescent lights (though we are next to a large sunny window, so I think we should be getting some IR), and the surface below the plant is dark gray carpet. At first I thought we did not have enough IR light, but if that were the case, wouldn't all the NDVI values just be 1 or -1? It looks like all the values are in the green and yellow range.

Sorry, I meant to include the cfastie colormap image as well:


I am not familiar with your white balance procedure so I don't know how it is affecting your results. If you can manually set the white balance, set it to exaggerate the brightness of the red channel (which will be used for NIR in NDVI computations).

Unless your plants are illuminated by sunlight or another source with sunlight's proportion of visible:NIR, your results will not look like traditional NDVI results. If a nearby window adds NIR light, it will also add even more visible light. If your illumination is dominated by fluorescent light (which has little NIR), your NDVI results will not be similar to traditional results.

The blue filter you are using blocks most red light, but some visible light will still be captured in the red channel. In the absence of NIR, when the red channel is used to represent NIR, the values will not be zero so NDVI will not compute to 1 or -1.
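A minimal numpy sketch of this effect (the DN values below are hypothetical): with little real NIR reaching the sensor, the visible light leaking into the red channel keeps NDVI well inside the (-1, 1) range.

```python
import numpy as np

def ndvi(nir, vis):
    """Per-pixel NDVI = (NIR - VIS) / (NIR + VIS)."""
    nir = np.asarray(nir, dtype=float)
    vis = np.asarray(vis, dtype=float)
    denom = nir + vis
    denom[denom == 0] = 1e-9   # guard against division by zero
    return (nir - vis) / denom

# Hypothetical pixel: no real NIR, but ~10% of a 250-DN red signal leaks
# through the blue filter into the red ("NIR") channel.
leaked = np.array([25.0])
blue = np.array([120.0])
print(ndvi(leaked, blue))   # about -0.66, not -1
```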

The Fastie colormap assigns gray to all values less than 0.1. There is no gray in your NDVI results, so all of your NDVI values are higher than 0.1 (assuming the authentic Fastie colormap was used).

Many of the non-plant parts of your scene have higher NDVI values (e.g., red) than the foliage. This suggests you have much adjusting to do.

Chris

Thanks for the reply, Chris. I have read through pretty much every single one of your posts! My white balance calibration is the simple one that you and Ned originally proposed: point the camera at a deep blue surface, allow the auto white balance to settle, save those values, turn off the auto white balance, and set the gains to the saved values. This does exactly what you said (exaggerates the brightness of the red channel while decreasing the blue gain). I think I probably am getting some visible light in the red channel. I believe in another post you or Ned said you would subtract out something like 30% of the pixel values in the red channel, if I recall correctly?

Fortunately, we are eventually moving our automated robotic imaging system into our controlled phytotron, which uses lighting that mimics natural sunlight as closely as possible (I am a researcher at a large US research institution). Hopefully this will produce better results. I may also try a red filter rather than blue, as that seems to be what you prefer.

Honestly, it is kind of hard to follow your processes, as there is no final summary; rather, various posts describe different methods that have evolved over time, so I am not sure whether I am doing all of this properly. However, I am really grateful for all the work you have done; it's very interesting! So far I have done the white balance adjustments. I guess the next step would be to get some reflectance trends to do pixel transformations for a more rigorous NDVI calculation, as described in this post: https://publiclab.org/notes/cfastie/05-01-2016/calibration-cogitation

Do you know whether I would be able to apply this process to Raspberry Pi photos? Should I try to get the raw Bayer photos for it? I was also wondering whether you had decided on any standard calibration targets. I found black, white, and 18% reflectance gray white-balance standards on Amazon. Would those three be enough data points to create my trend? It doesn't seem like much work has been done on this since about 2016 (or at least that seems to be when new posts stopped). Is this a project that is still actively being developed?


I don't know for sure whether your protocol with the RPi camera (use automatic white balance with blue paper) and my protocol with a Powershot (use custom white balance with blue paper) will produce the same results.

Do you know whether the lights in your phytotron include NIR? NIR is not used by plants, so it would just consume more electricity and produce heat in the phytotron. My guess is that there is not as much NIR as in sunlight, but you will have to determine this. If there is not much NIR, you can just turn on some NIR LEDs when you take the photos -- the plants reflect NIR so they won't even know the LEDs are on. The proportion of VIS to NIR should be similar to sunlight for your computed NDVI to be comparable to legacy values. If it is not, your results can be meaningful for internal comparisons as long as that VIS:NIR ratio is constant (for all photos).

You are correct that this site is not designed for ease of access to the information here. The staff has long eschewed curation and instead tries to organize material algorithmically. This generally fails because few people interact with most content (few comments or "likes"), few authors tag their work in helpful ways, and searching and sorting tools are typically broken.

The controlled environment you hope to work in is perfect for doing real calibration. Calibration targets could be placed in each photo without too much effort. You will want enough targets to convince yourself that the calibration curve is robust, so start with four or five and then determine whether just two or three targets provide a good approximation of the curve. Targets must be well exposed in the photos (in the channels used for both VIS and NIR) so they probably should not be white or black (or the equivalent in NIR). If the DN for the darkest target is 0 or the DN for the brightest target is 255, then you don't know with much precision how much light of that wavelength is being reflected. Since all of the targets (including the darkest and lightest) must be well exposed in each individual photo, none of the targets should be very dark or very light.
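One way to sanity-check that the targets are well exposed is to look at each patch's extreme DN values; a small sketch (the thresholds of 5 and 250 are arbitrary choices, not a standard):

```python
import numpy as np

def patch_well_exposed(patch, lo=5, hi=250):
    """True if no pixel in the target patch is clipped dark or saturated."""
    p = np.asarray(patch)
    return bool(p.min() > lo) and bool(p.max() < hi)

print(patch_well_exposed([30, 80, 120]))   # True
print(patch_well_exposed([0, 80, 120]))    # False: clipped to 0
```

Run this on each target patch in both the VIS and NIR channels of every photo.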

You will have to know the reflectance of each target for the range of wavelengths used for both VIS and NIR. A common photographic gray card of 18% reflectance might not reflect 18% of NIR, so you have to measure that value yourself. If you have the equipment to measure that (some type of spectrometer/photometer) you can characterize any material and will not have to buy targets. The nice thing about full spectrum commercial targets (rated for VIS and NIR) is that they reflect the same proportion of the incoming light at any wavelength in the rated range. So you do not have to measure any reflectance and do not have to know exactly what range of wavelengths your camera is recording in each channel. Both of those things are hard enough that it can be worthwhile buying expensive calibration targets.
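Once target reflectances are known, the calibration curve for one channel is a fit of DN against reflectance. A sketch with made-up numbers (both arrays must come from your own targets and photos):

```python
import numpy as np

# Hypothetical targets: known reflectance (as a fraction) in the channel's
# wavelength range, and the mean DN measured from each patch in one photo.
reflectance = np.array([0.05, 0.20, 0.40, 0.60, 0.80])
mean_dn = np.array([22.0, 68.0, 121.0, 172.0, 221.0])

# First-order fit of DN -> reflectance; inspect the residuals before trusting it.
slope, intercept = np.polyfit(mean_dn, reflectance, 1)

def dn_to_reflectance(dn):
    return slope * np.asarray(dn, dtype=float) + intercept
```

Determining whether two or three targets suffice is then just refitting with a subset and comparing the resulting curves.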

Because you are working in a controlled environment you might want to consider using a two camera system. One camera can have a precise red filter (passes only a narrow range of red light) and the other a precise NIR filter. Such a system completely eliminates cross contamination and can be easier to compare to legacy NDVI (which uses narrow wavelength ranges). Such a system will still require calibration so you know what the real relationship is between the VIS and NIR light being reflected from foliage.

Chris


Thanks for the detailed response Chris, this is super helpful! I really appreciate the feedback.

So I have actually been provided the full characteristic spectrum of the phytotron below:

The solid line shows when both the fluorescent and incandescent lights are being used (we will be using both). So from this chart we could actually get the exact ratios of NIR to blue light in our chamber. Below is the datasheet for the Rosco 2007 blue filter:

So according to this we should be getting about 10% transmission of red light. Does that mean I should subtract 10% of each pixel value in the red channel in order to remove red noise in the NIR channel? Can I use the information in these charts for anything else?

I was able to find quantum efficiency charts for the visible spectrum of the Raspberry Pi camera; however, I cannot seem to find any information on the NIR response of the NoIR camera. If I had that information, would I be able to use it and avoid the calibration process? Isn't the calibration process basically finding a transformation between the sensitivity of the camera and the spectral characteristics of our lighting, or am I misunderstanding the purpose of calibrating to known reflectances?

As for using two cameras: we will actually be using two cameras anyway. For one of them we need the full color spectrum, though, so it could not take a red bandpass filter. We are trying to do water/nutrient stress prediction using both color and IR characteristics. Could we simply use an NIR-pass (visible-blocking) filter on the Pi NoIR and use the red channel from the RGB camera for these calculations?

Lastly, we are actually looking into ways to possibly avoid using NDVI. We may take a more data driven approach to determine a custom transformation of the NGB images based on differences between the healthy and water stressed plants. That would eliminate all the issues with calibration and white balancing needed for accurate NDVI. In the end we only want to use the data to do water stress prediction using machine learning, so NDVI may not even be needed and we could directly infer that from the NGB (or just NIR) data.


>> So according to this we should be getting about 10% transmission of red light. Does that mean I should subtract 10% of each pixel value in the red channel in order to remove red noise in the NIR channel?

The correction for VIS contamination of the NIR channel (the red channel in this case) depends on the proportion of VIS:NIR being captured in that channel (if the VIS:NIR captured is 20:100, the VIS portion, 20 parts out of 120, should be subtracted). According to Rosco's data, ~10% of incoming red light passes through that filter, but you don't know how much NIR passes through it (the Rosco graph stops at 750 nm). You can assume that most of the red passing the filter will be captured by the sensor, but you don't know how much of the NIR will be captured. So you don't really have enough information to make that correction. In the case of your phytotron, you know the wavelength of the NIR light, but you don't know either how much of it is transmitted by the Rosco filter or how much of it is captured by the sensor (the Bayer filter might block some, and the sensor itself is not very sensitive to 1010 nm NIR).
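As a sketch of that correction, assuming you did somehow know the VIS and NIR proportions captured in the channel (the 20:100 mix below is hypothetical):

```python
import numpy as np

def remove_vis_leak(channel_dn, vis_part, nir_part):
    """Scale a mixed channel down to its NIR-only portion.
    vis_part:nir_part is the proportion of VIS to NIR captured in the
    channel; the VIS share of the signal, vis_part / (vis_part + nir_part),
    is removed."""
    dn = np.asarray(channel_dn, dtype=float)
    return dn * nir_part / (vis_part + nir_part)

# A pixel recording 120 DN with a 20:100 VIS:NIR mix keeps 100 DN of NIR.
print(remove_vis_leak(120.0, 20.0, 100.0))  # 100.0
```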

Having the spectral response of the Pi NoIR camera will help (I thought someone had posted it here, but it could be tough to find it unless someone remembers where it is). Then you can approximate the proportion of incoming red (of some nm range) to incoming NIR (of some nm range) which should be captured by the sensor. There is a discussion of how to do this here.

The calibration process translates brightness in a certain wavelength range (recorded by the camera) into radiance (a measure of the energy of the light) in that wavelength range (see Figure 1 here).

>> Could we simply use an NIR-pass (visible-blocking) filter on the Pi NoIR and use the red channel from the RGB camera for these calculations?

Yes, this could provide good VIS (red) and NIR data. The Pi NoIR camera will not be very sensitive to 1010 nm NIR light, so even though foliage is reflecting lots of NIR and little red, the red channel (of the RGB camera) might be brighter than any NIR channel (of the filtered NoIR camera). So calibration is still required. Calibration corrects for the camera's inherent undercapture of NIR energy.

Using two cameras also requires rectifying two photos rather precisely. This is easy, but only if the two lenses are very close together so parallax is minimized. The closer the lenses are to each other and the farther they are from the corn plants, the better.
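To get a feel for how lens separation and subject distance trade off, the standard stereo disparity relation gives a rough estimate (the 25 mm baseline and the ~2714-pixel focal length for a Pi camera v2 at full resolution are assumptions for illustration):

```python
def parallax_shift_px(baseline_m, distance_m, focal_px):
    """Pixel disparity between two cameras viewing a subject at distance_m:
    disparity = focal_px * baseline / distance."""
    return focal_px * baseline_m / distance_m

# Lenses 25 mm apart, plant 1 m away, focal length ~2714 px:
print(parallax_shift_px(0.025, 1.0, 2714.0))   # ~68 px of misalignment
# The same rig with the plant 3 m away drops to ~23 px.
print(parallax_shift_px(0.025, 3.0, 2714.0))
```

The residual shift still has to be removed by registering the two photos, but smaller disparities make that registration much more forgiving.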

Chris
