EGU General Assembly 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

Automatizing MiniRhizotron Image Acquisition

Boris Rewald1, Naftali Lazarovitch2, Pavel Baykalov1,3, Ofer Hadar4, Stefan Mayer3, Gernot Bodner5, and Liaqat Seehra3
  • 1Forest Ecology, University of Natural Resources and Life Sciences, Vienna, Austria
  • 2French Associates Institute for Agriculture and Biotechnology of Drylands, Ben-Gurion University of the Negev, Beer-Sheva, Israel
  • 3Vienna Scientific Instruments GmbH, Alland, Austria
  • 4Department of Communication Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
  • 5Division of Agronomy, University of Natural Resources and Life Sciences, Vienna, Austria

Minirhizotron (MR) imaging systems are key instruments for studying the hidden half of plants and ecosystems, i.e. roots, mycorrhizae and their interactions with pathogens, fauna etc. in the rhizosphere. However, although data on this ‘hidden half’ remain scarce, e.g. data needed to better understand species’ ecophysiology, breed resource-efficient crops or determine soil C input, technological advances have so far remained limited.

We designed and built an automatic, modular MR camera system for permanent in situ operation, combining state-of-the-art imaging sensors (UHD VIS and selected near-infrared (NIR) wavebands) with mechatronic automation to allow effective and precise imaging of MR tubes. The system consists of an MR camera ‘carrier system’ (handling camera positioning, image scheduling and processing, and interconnectivity) for MR tubes of 7 cm diameter and up to 2 m length installed in situ (from fields to forests), and two interchangeable camera modules to be used with the carrier system. The first module is a cost-effective UHD RGB module; the second combines VIS and selected multispectral (NIR) wavebands, potentially allowing advanced image processing such as root classification (age, branching order etc.) and approximation of selected soil properties (soil water content, C content etc.).

The presented technology has the potential to benefit society both indirectly, by improving the capacity of the research community to study root and rhizosphere systems (e.g. in a C budgeting or plant breeding context), and directly: together with automatic image analysis, it is a prerequisite for making root development information available to stakeholders in real time (e.g. to farmers for precision irrigation). Additional benefits of an automatic MR system, such as precise stitching (for creating ‘panoramic’ images) and creation of ‘super resolution’ images, are discussed.
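Because the carrier positions the camera at fixed, repeatable increments along the tube, adjacent frames overlap by a known, predictable amount, which makes stitching much simpler than general feature-based panorama registration. A minimal sketch of such offset-based stitching with linear feathering in the overlap region (NumPy; the function name, frame sizes and overlap width are illustrative assumptions, not part of the presented system) could look like:

```python
import numpy as np

def stitch_known_overlap(frames, overlap_px):
    """Concatenate frames captured at fixed carrier increments along the
    tube axis (rows), blending the known overlap region linearly
    (feathering) to hide exposure seams."""
    panorama = frames[0].astype(float)
    for frame in frames[1:]:
        frame = frame.astype(float)
        # Blend weights ramp from 0 to 1 across the overlap rows.
        w = np.linspace(0.0, 1.0, overlap_px)[:, None]
        blended = (1.0 - w) * panorama[-overlap_px:] + w * frame[:overlap_px]
        panorama = np.vstack([panorama[:-overlap_px], blended,
                              frame[overlap_px:]])
    return panorama.astype(np.uint8)

# Three hypothetical 100x50 px grayscale frames with a 20 px overlap:
frames = [np.full((100, 50), i * 40, dtype=np.uint8) for i in range(3)]
panorama = stitch_known_overlap(frames, overlap_px=20)
```

Each additional frame extends the panorama by its height minus the overlap, so three 100-row frames with 20 rows of overlap yield a 260-row strip; with unknown or drifting offsets, a feature-based stitcher would be needed instead.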

How to cite: Rewald, B., Lazarovitch, N., Baykalov, P., Hadar, O., Mayer, S., Bodner, G., and Seehra, L.: Automatizing MiniRhizotron Image Acquisition, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21832, 2020

Comments on the presentation

AC: Author Comment | CC: Community Comment

Presentation version 1 – uploaded on 01 May 2020
  • CC1: Comment on EGU2020-21832, Sarah Garré, 04 May 2020

    Dear Boris, thanks for this contribution. Working with minirhizotrons is indeed a time-consuming activity, so any improvement is welcome. I do think much is to be gained from coupling the automated acquisition to an automated image analysis algorithm and a database. How do you see this?

    • AC1: Reply to CC1, Boris Rewald, 05 May 2020

      Dear Sarah, I couldn't agree more with you. Together with my colleagues from the BGU in Israel, we are already working on neural-network-based algorithms. See Adam Soffer's presentation in this session :) However, as Google needed about 14 million annotated images to distinguish different vehicles with 95% accuracy, I fear we just don't have enough annotated root images in highly different soils yet. Thus, all the recent progress I see is likely only usable under very narrow experimental conditions... We should notify either Bill and Melinda or get an H2020 on the topic! Cheers