Show simple item record

dc.contributor.author       Metuarea, Herearii
dc.contributor.author       Laurens, François
dc.contributor.author       Guerra, Walter
dc.contributor.author       Lozano, Lidia
dc.contributor.author       Patocchi, Andrea
dc.contributor.author       Van Hoye, Shauny
dc.contributor.author       Dutagaci, Helin
dc.contributor.author       Labrosse, Jeremy
dc.contributor.author       Pejman, Rasti
dc.contributor.author       Rousseau, David
dc.contributor.other        Producció Vegetal  ca
dc.date.accessioned         2026-01-31T16:11:10Z
dc.date.available           2026-01-31T16:11:10Z
dc.date.issued              2025-07-31
dc.identifier.issn          1424-8220  ca
dc.identifier.uri           http://hdl.handle.net/20.500.12327/5037
dc.description.abstract     Computer vision is of wide interest for phenotyping horticultural crops such as apple trees at high throughput. In orchards specially constructed for variety testing or breeding programs, computer vision tools should be able to extract phenotypic information from each tree separately. We focus on segmenting individual apple trees as the main task in this context. Segmenting individual apple trees in dense orchard rows is challenging because of the complexity of outdoor illumination and intertwined branches. Traditional methods rely on supervised learning, which requires a large amount of annotated data. In this study, we explore an alternative approach using prompt engineering with the Segment Anything Model and its variants in a zero-shot setting. Specifically, we first detect the trunk and then position a prompt (five points in a diamond shape) above the detected trunk to feed to the Segment Anything Model. We evaluate our method on the apple REFPOP, a new large-scale European apple tree dataset, and on another publicly available dataset. On these datasets, our trunk detector, which uses a trained YOLOv11 model, achieves a detection rate of 97%; with the prompt placed above the detected trunk, the method achieves a Dice score of 70% without training on the REFPOP dataset and 84% without training on the publicly available dataset. We demonstrate that our method equals or even outperforms purely supervised segmentation approaches and non-prompted foundation models. These results underscore the potential of foundation models guided by well-designed prompts as scalable and annotation-efficient solutions for plant segmentation in complex agricultural environments.  ca
dc.description.sponsorship  This research was funded by the European Union's Horizon Europe Research and Innovation Programme under the PHENET project, Grant Agreement No. 101094587.  ca
dc.format.extent            21  ca
dc.language.iso             eng  ca
dc.publisher                MDPI  ca
dc.relation.ispartof        Sensors  ca
dc.rights                   Attribution 4.0 International  ca
dc.rights.uri               http://creativecommons.org/licenses/by/4.0/
dc.title                    Individual Segmentation of Intertwined Apple Trees in a Row via Prompt Engineering  ca
dc.type                     info:eu-repo/semantics/article  ca
dc.description.version      info:eu-repo/semantics/publishedVersion  ca
dc.rights.accessLevel       info:eu-repo/semantics/openAccess
dc.embargo.terms            cap  ca
dc.relation.projectID       EC/HE/101094587/EU/Tools and methods for extended plant PHENotyping and EnviroTyping services of European Research Infrastructures/PHENET  ca
dc.subject.udc              633  ca
dc.identifier.doi           http://dx.doi.org/10.3390/s25154721  ca
dc.contributor.group        Fructicultura  ca
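
The abstract above outlines a three-step pipeline: detect the trunk with a trained YOLOv11 model, place a five-point diamond-shaped prompt above the detected trunk, and segment the tree zero-shot with the Segment Anything Model. The snippet below is a minimal sketch of how such a pipeline could be wired together, assuming the ultralytics and segment-anything Python packages; the weight files (trunk_yolo11.pt), image path, and diamond offsets are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import cv2
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

bgr = cv2.imread("tree_row.jpg")                    # hypothetical orchard-row image
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

# 1. Detect the trunk with a trained detector (the paper uses YOLOv11;
#    "trunk_yolo11.pt" is a placeholder for such trained weights).
trunk_detector = YOLO("trunk_yolo11.pt")
box = trunk_detector(bgr)[0].boxes.xyxy[0].cpu().numpy()   # [x1, y1, x2, y2]
x_center = (box[0] + box[2]) / 2.0
y_top = box[1]                                      # top edge of the trunk box

# 2. Build a five-point diamond-shaped prompt above the detected trunk.
#    The spread (dx, dy) is an assumed tuning parameter, not a value from the paper.
dx, dy = 80, 120
points = np.array([
    [x_center,      y_top - dy],        # bottom vertex, just above the trunk
    [x_center - dx, y_top - 2 * dy],    # left vertex
    [x_center + dx, y_top - 2 * dy],    # right vertex
    [x_center,      y_top - 3 * dy],    # top vertex
    [x_center,      y_top - 2 * dy],    # centre of the diamond
])
labels = np.ones(len(points), dtype=int)            # all points mark foreground

# 3. Zero-shot segmentation of the single tree with the Segment Anything Model.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(rgb)
masks, scores, _ = predictor.predict(point_coords=points,
                                     point_labels=labels,
                                     multimask_output=False)
tree_mask = masks[0]                                # boolean mask of the prompted tree
```

Because the points only need to land inside the crown of the prompted tree, the exact offsets matter less than keeping all five points above the trunk and away from neighbouring crowns.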


Files in this item

 

This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by/4.0/