# Raspberry Pi Microscope Image Stitching

by MaggPi | 23 Mar 05:52

Raspberry Pi Microscope Image Stitching – Trial observations

Abstract: This note considers microscope computer-vision image stitching using a Raspberry Pi camera and OpenCV software. An example real-time stitching technique is demonstrated and its limitations are discussed.
Stitching demo is available at: https://youtu.be/xIEPgeUOURQ

Introduction:

• The purpose of this note is to demonstrate techniques suitable for ‘live’ web-based stitching with Public Lab Microscopes.
• Image stitching combines several overlapping images into a single image. A typical example of image stitching is the panoramic display feature available on most mobile phone camera applications.
• The ability to quickly combine several microscope images into a single picture has several potential benefits: 1) it permits the user to identify and locate features at high resolution with an expanded field of view; 2) it reduces the number of microscope objectives needed to survey a sample, since a single high-resolution scan can be obtained and then (re)displayed at lower resolutions as needed; and 3) it provides a single image rather than multiple images, for easier data storage and image processing.
• Current microscope image-stitching applications include the open-source ImageJ/MIST and commercial options such as Microvisioneer. While these applications provide value for a wide variety of specialized applications, this note considers image-stitching designs for low-cost open-hardware microscopes, open-source software, and a ‘point and click’ real-time display user interface.
• The Public Lab GSoC 2019 proposal ideas page describes ‘live microscope stitching’ as a future upgrade to MapKnitter.

System description: Major system components are described below:

• Raspberry Pi 3B+ computer: The computer controls the camera and display. The USB port is used to power the microscope light when the Public Lab stage is used.
• OpenCV - The OpenCV ‘Stitcher’ class was used to merge images (pipeline picture below). Stitcher combines multiple functions to enable single-command image stitching. Stitching in this context refers to identifying common image keypoints, which are then used to overlap the pictures.

• Display - Microscope images are manually arranged (left: ‘live’ video; right: stitched image) as they are displayed on a monitor. The stitched image is reduced in size (100%, 50%, 30%) as additional frames are added.

Observations:

• Image displacement - Successful stitching depends on sufficient overlap between consecutive images. For microscopic image stitching, fine (10-100 micron) adjustments are recommended. Without a caliper-controlled stage, position changes may be too coarse and no overlap is available for successful stitching. A caliper stage (used in the video example) is probably the easiest adjustment method but adds another component. Other solutions to consider are some type of positioning guide or a computer function that measures overlap (a homography indicator) before each stitch.

• Image degradation - The current software accurately stitches successive images but also has a shortcoming: as each consecutive image is added, the ‘stitched’ image begins to degrade. This may be due to the compression loss each time the image is (re)stored. Potential solutions are to adjust Stitcher class functions, to store images in a lossless format (rather than JPEG), or to develop an image mask that protects the already-stitched region.

• Image size and memory - Image size increases with each stitched image. For the example provided in the video, the first image is 640x480 (210 KiB) and the last is 1384x2039 (267 KiB). As discussed above, using the JPEG format helps manage memory but is not appropriate when the image is (re)stored several times. Ideally, the stitching technique should be able to process a high-resolution Raspberry Pi camera image. For high-resolution images, the range would be roughly 3280x2464 (4.4 MiB) for the first image up to several hundred megabytes (depending on the number of stitches and the compression). Large stitched images also present several challenges, such as increased processing time and storing/viewing the final stitched image.

• Microscope vs airborne imaging - Public Lab applications such as MapKnitter let the user manually align multiple pictures and create a single image mosaic. Adapting MapKnitter for microscope image stitching may require several program modifications (or a new application) that account for the differences between airborne and microscope imaging: 1) MapKnitter (kite/balloon) airborne images typically have random orientation and random perspective, while microscope images have random orientation but uniform perspective; 2) airborne images may or may not overlap, while microscope image capture can be controlled to guarantee overlap; and 3) MapKnitter is currently designed to overlay prestored images on a map grid, versus a ‘real-time’ microscope stitched image that permits user alignment feedback.

• Live stitching - The stitching method cycles between a ‘live’ mode for selecting the next part of the image and an image-processing mode that calculates the stitched image. This approach was selected since it provides a way to select ‘good’ images while balancing computer resources. Web-based stitching will also need a way to manage image processing versus real-time viewing demands. Potential concerns are image lag, motion blur, the processing required to reject identical or out-of-focus frames, and stitching-sequence failure (if there is no overlap).

Summary: An interrupted image loop using the OpenCV Stitcher class demonstrated consecutive-frame microscope image stitching. This note discusses several design considerations for a web-based application. I could work more on the current approach, but would like comments on whether the proposed technique is the best direction.

Hi @maggpi! Great to see you again! We are really interested in solving this in JavaScript, so I wonder if you would be able to explore whether this could be factored into a JavaScript module using OpenCV.js?

https://github.com/publiclab/image-sequencer/issues/237 covers some of the issues. And there is some discussion I know you're aware of in terms of the structure of the code and functions in @rexagod's proposal starting here: https://publiclab.org/notes/rexagod/03-11-2019/gsoc-proposal-mapknitter-orb-descriptor-w-auto-stitching-pattern-training-and-live-video-support-and-ldi-revamp-major-ui-enhancements#c22114

Thanks so much for your proposal!!!


@warren, thx for the comment. I checked with OpenCV; the Stitcher class is not currently included in opencv.js. A list of the current wrappings is at:

https://github.com/opencv/opencv/blob/master/modules/js/src/embindgen.py#L82-L171

So at this point this technique is possible with a Raspberry Pi/Python/OpenCV, but you would have to add the module to opencv.js.

And, just wanted to say, this is super impressive!!!

Hi @rexagod @maggpi, I wondered if you'd be interested in collaborating a bit on this use of the matcher-core library -- @rexagod, I don't yet see your PR on image-sequencer, but once it is merged and documented, perhaps @maggpi could try to put it to use to replicate this amazing proof of concept above!

https://github.com/publiclab/matcher-core

https://github.com/publiclab/image-sequencer/pulls (please paste in the link here @rexagod once you open a PR!)

I could see this as a unique Sequencer, or as an application that /uses/ the sequencer to create a single canvas containing the composite!

@rexagod, can you point us at your registration code, which I think you've called projection since it also does rotation?