EGU25-12675, updated on 15 Mar 2025
https://doi.org/10.5194/egusphere-egu25-12675
EGU General Assembly 2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Friday, 02 May, 11:25–11:35 (CEST)
 
Room -2.15
Automatic Detection of Cimiciato Defect in Hazelnuts Using Deep Learning and X-ray Radiography
Antonio Gaetano Napolitano, Giacomo Mele, Laura Gargiulo, Matteo Giaccone, and Andrea Vitale
  • CNR, Isafom, Portici (NA), Italy (antoniogaetano90.napolitano@gmail.com)

Hazelnuts are a significant crop, with global production exceeding 1.25 million tons in 2023 (INC, 2023). Quality is threatened by biotic agents, including insects that cause the cimiciato defect. This defect, which results from insect bites during fruit growth, leads to off-flavors, tissue alterations, and lipid oxidation (De Benedetta et al., 2023). Damage can be external or internal (hidden cimiciato). Industrial quality standards often exceed official regulations, making effective selection crucial. Traditional visual inspection is time-consuming and subjective, while non-destructive methods such as NIR and NMR have potential but limited applicability. Deep Learning (DL) has revolutionized image classification and has proven effective in agriculture, including disease and pest management (Mohanty et al., 2016; Dhaka et al., 2021; Meena et al., 2023). This study explores DL for automated detection of the cimiciato defect in hazelnuts using X-ray radiographs. We propose a Convolutional Neural Network (CNN) model trained on X-ray images to classify hazelnuts as healthy or infected. Results demonstrate the model's effectiveness, offering a non-destructive, automated quality-control solution.
Radiographs were acquired using a cone-beam micro-tomograph, with each hazelnut positioned on a rotating stage. A CNN model was used for classification: convolutional layers apply filters to extract image features, pooling layers reduce data dimensionality, and fully connected layers combine the extracted features for classification. The Inception-ResNet-V2 architecture was chosen, combining Inception modules and residual connections (Szegedy et al., 2017). The model was trained with a batch size of 128 images, a learning rate of 0.001, and 30 epochs, comparing the SGD, ADAM, and RMSP optimizers. Images were pre-processed with resizing, pixel normalization, and data augmentation.
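The following is a minimal sketch of such a training setup, assuming TensorFlow/Keras. The batch size, learning rate, number of epochs, and optimizer choices follow the values reported above; the directory layout, the 299×299 input size, the use of ImageNet pre-trained weights, and the specific augmentation operations are assumptions not stated in the abstract.

```python
# Minimal sketch of the training setup (assumed TensorFlow/Keras API;
# folder names, input size, and augmentation choices are hypothetical).
import tensorflow as tf

IMG_SIZE = (299, 299)   # assumed default input size for Inception-ResNet-V2
BATCH_SIZE = 128        # batch size reported in the abstract
EPOCHS = 30             # number of epochs reported in the abstract
LEARNING_RATE = 1e-3    # learning rate reported in the abstract

# Hypothetical folder layout: radiographs/{train,val}/{healthy,cimiciato}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "radiographs/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "radiographs/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")

# Pre-processing and augmentation: pixel normalization plus simple geometric augmentation.
preprocess = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # scale pixels to [-1, 1]
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),
])

# Inception-ResNet-V2 backbone with a binary classification head (healthy vs cimiciato).
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
model = tf.keras.Sequential([
    preprocess,
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The abstract compares SGD, ADAM, and RMSP; any of the three optimizers can be swapped in here.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)
```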
The trained network was evaluated on the test dataset. SGD, ADAM, and RMSP yielded similar results, visualized with confusion matrices. ADAM performed best, but all three optimizers achieved good results, especially for cimiciato detection.
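Below is a minimal sketch of the test-set evaluation with a confusion matrix, assuming scikit-learn and matplotlib and reusing the names from the training sketch above; the test folder name and the 0.5 decision threshold are assumptions.

```python
# Minimal sketch of test-set evaluation via a confusion matrix
# (assumed scikit-learn/matplotlib API; reuses model, IMG_SIZE, BATCH_SIZE from the sketch above).
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Held-out test radiographs, hypothetical folder layout: radiographs/test/{healthy,cimiciato}/*.png
test_ds = tf.keras.utils.image_dataset_from_directory(
    "radiographs/test", image_size=IMG_SIZE, batch_size=BATCH_SIZE,
    label_mode="binary", shuffle=False)

# Collect true labels and thresholded model predictions.
y_true = np.concatenate([y.numpy() for _, y in test_ds]).ravel()
y_pred = (model.predict(test_ds).ravel() > 0.5).astype(int)

# Confusion matrix: rows are true classes, columns are predicted classes.
cm = confusion_matrix(y_true, y_pred)
ConfusionMatrixDisplay(cm, display_labels=test_ds.class_names).plot()
plt.show()
```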

Keywords: Cimiciato defect, Hazelnut, Deep Learning, X-ray radiography, CNN

How to cite: Napolitano, A. G., Mele, G., Gargiulo, L., Giaccone, M., and Vitale, A.: Automatic Detection of Cimiciato Defect in Hazelnuts Using Deep Learning and X-ray Radiography, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-12675, https://doi.org/10.5194/egusphere-egu25-12675, 2025.