Title: Towards L-System Captioning for Tree Reconstruction
Authors: Magnusson, Jannes S.; Hilsmann, Anna; Eisert, Peter
Editors: Babaei, Vahid; Skouras, Melina
Date: 2023-05-03 (2023)
ISBN: 978-3-03868-209-7
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20231002
URI: https://diglib.eg.org:443/handle/10.2312/egs20231002
Pages: 9-12 (4 pages)
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Computing methodologies → Shape representations; Reconstruction; Shape analysis; Neural networks

Abstract: This work proposes a novel concept for tree and plant reconstruction that directly infers a Lindenmayer-System (L-System) word representation from image data in an image-captioning approach. We train a model end-to-end that translates given images into L-System words describing the depicted tree. As a proof of concept, we demonstrate its applicability on 2D tree topologies. Transferred to real image data, this idea could lead to more efficient, accurate, and semantically meaningful tree and plant reconstruction without relying on error-prone point-cloud extraction and other processes commonly used in tree reconstruction. Furthermore, this approach bypasses the need for a predefined L-System grammar and enables species-specific L-System inference without biological expert knowledge.
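
To make the captioning idea concrete, the following is a minimal sketch (not the authors' implementation) of an image-captioning style model that emits an L-System word token by token: a small CNN encoder summarizes the tree image, and a recurrent decoder predicts symbols from a bracketed L-System alphabet. The token set, architecture sizes, and class names are illustrative assumptions; the abstract only states that an end-to-end model translates images into L-System words.

```python
# Illustrative sketch only: assumed alphabet, layer sizes, and names.
import torch
import torch.nn as nn

# Hypothetical token set for a bracketed 2D L-System word, plus control tokens.
VOCAB = ["<pad>", "<sos>", "<eos>", "F", "+", "-", "[", "]"]
STOI = {t: i for i, t in enumerate(VOCAB)}


class LSystemCaptioner(nn.Module):
    """CNN encoder + GRU decoder that maps a tree image to a token sequence."""

    def __init__(self, vocab_size=len(VOCAB), embed_dim=128, hidden_dim=256):
        super().__init__()
        # Small convolutional encoder for single-channel tree silhouettes.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, tokens):
        # Image features initialise the decoder's hidden state: (1, B, H).
        h0 = self.encoder(images).unsqueeze(0)
        out, _ = self.decoder(self.embed(tokens), h0)
        return self.head(out)  # (B, T, vocab) logits for next-token prediction


if __name__ == "__main__":
    model = LSystemCaptioner()
    images = torch.rand(2, 1, 64, 64)  # toy batch of rendered tree images
    # Teacher-forced decoder input: "<sos> F [ + F ] F" as token indices.
    word = ["<sos>", "F", "[", "+", "F", "]", "F"]
    tokens = torch.tensor([[STOI[t] for t in word]] * 2)
    logits = model(images, tokens)
    print(logits.shape)  # torch.Size([2, 7, 8])
```

In such a setup, training would plausibly minimize a cross-entropy loss between the predicted logits and ground-truth L-System words, e.g. procedurally generated 2D tree topologies paired with their renderings; the specific data generation and loss details are assumptions here, not taken from the paper.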