EGU26-22080, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-22080
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Thursday, 07 May, 09:45–09:55 (CEST)
 
Room 2.23
AI-Driven Insights from Multimodal Data for Optimized Soybean Growth Monitoring 
Sushma Katari and Sami Khanal
  • The Ohio State University, Department of Food, Agricultural, and Biological Engineering, United States of America (katari.5@osu.edu)

Monitoring soybean growth provides critical insights for farmers, enabling them to track crop development closely and implement proactive management practices that ultimately enhance yields. Inefficient management and excessive chemical use not only reduce efficiency but also carry significant environmental consequences, including water contamination and increased greenhouse gas (GHG) emissions. These impacts degrade soil health, disrupt weather patterns, and contribute to problems such as soil nutrient depletion and irregular precipitation, all of which directly harm agricultural productivity. Integrating multi-sensor data, such as satellite and small Unmanned Aerial System (sUAS) imagery, with machine learning (ML) offers a pathway to precise soybean growth monitoring, enabling farmers to make data-driven decisions that reduce the need for field scouting while improving resource efficiency. Although recent studies have begun to explore precise, field-level growth monitoring using sUAS and satellite imagery, in-depth research on integration strategies is still needed to develop practical, cost-effective methods for accurately estimating soybean phenological stages.

In this study, a comprehensive analysis of soybean growth was conducted across the early vegetative to reproductive stages using ML and multi-sensor methods. The selected soybean fields are located at three Ohio State Agricultural Research Stations geographically dispersed across Ohio, USA. High-resolution optical images of the fields were collected from 2023 to 2025 using a fixed-wing Wingtra sUAS. To determine whether simple machine learning or complex deep learning methods perform better, multiple combinations of these models with sUAS and satellite imagery were trained, and their performance was evaluated.
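The model-by-sensor evaluation described above can be sketched as a simple grid search over (model, data source) pairs scored by RMSE. This is a minimal illustration only: the synthetic feature arrays, the numeric growth-stage encoding, and the specific model choices are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of a model x sensor comparison for growth-stage
# estimation. All data here is synthetic; stage labels are assumed to be
# encoded numerically (e.g., V1..R8 mapped to integers).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-plot features from each sensor source.
sources = {
    "sUAS": rng.normal(size=(300, 12)),       # high-res optical bands/indices
    "Sentinel-2": rng.normal(size=(300, 6)),  # coarser multispectral bands
}
# Synthetic numeric growth-stage labels, loosely tied to the features.
labels = {k: X[:, 0] * 2 + rng.normal(scale=0.5, size=len(X))
          for k, X in sources.items()}

models = {
    "RandomForest": RandomForestRegressor(n_estimators=100, random_state=0),
    "Ridge": Ridge(),
}

# Train every (model, source) combination and record its test-set RMSE.
results = {}
for src, X in sources.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels[src], random_state=0)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
        results[(name, src)] = rmse

best = min(results, key=results.get)
print(f"Best combination: {best} (RMSE={results[best]:.2f})")
```

In the study itself, the comparison spanned simple ML and deep learning models (including ViT and Random Forest); the loop structure above merely shows how such a comparison can be organized.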
The best performance was achieved by the Vision Transformer (ViT) model on sUAS images, which detected soybean growth stages with an average Root Mean Squared Error (RMSE) of 0.7. The poorest performance came from the Random Forest model on open-source Sentinel-2 images, with an RMSE of 3.1. A closer stage-by-stage comparison of sUAS and satellite results showed that the early growth stages were estimated accurately only with sUAS data (RMSE < 1), while for the later reproductive stages (beyond R2), satellite imagery also performed well (RMSE < 1). This indicates that combining sUAS during the early growth phase with satellites during the late growth phase is a promising approach. Such a strategy enables farmers to make data-driven decisions that optimize growth monitoring and resource use, reduce waste, and minimize environmental impacts.
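The stage-dependent sensor choice suggested by these results can be expressed as a simple decision rule: prefer sUAS before the R2 stage and satellite afterwards. The sketch below is illustrative only; the stage list and the crossover point are taken from the abstract's summary, not from the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): picking a data
# source per soybean growth stage, following the finding that sUAS
# performs best early and satellite performs well after stage R2.
STAGE_ORDER = ["VE", "V1", "V2", "V3", "R1", "R2", "R3", "R4", "R5", "R6"]

def preferred_source(stage: str) -> str:
    """Return the sensor suggested for a given soybean growth stage."""
    if stage not in STAGE_ORDER:
        raise ValueError(f"unknown stage: {stage}")
    # Stages beyond R2 favoured satellite (RMSE < 1 per the abstract);
    # earlier stages favoured sUAS.
    if STAGE_ORDER.index(stage) > STAGE_ORDER.index("R2"):
        return "satellite"
    return "sUAS"

print(preferred_source("V2"))  # sUAS
print(preferred_source("R4"))  # satellite
```

In practice such a rule would let a monitoring workflow schedule low-cost satellite acquisitions for the late season and reserve sUAS flights for the early vegetative window.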

How to cite: Katari, S. and Khanal, S.: AI-Driven Insights from Multimodal Data for Optimized Soybean Growth Monitoring, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22080, https://doi.org/10.5194/egusphere-egu26-22080, 2026.