
dc.contributor.author | Pederiva, Marcelo Eduardo | en_US
dc.contributor.author | Martino, José Mario De | en_US
dc.contributor.author | Zimmer, Alessandro | en_US
dc.contributor.editor | Sauvage, Basile | en_US
dc.contributor.editor | Hasic-Telalovic, Jasminka | en_US
dc.description.abstract | Autonomous vehicles come closer every day to becoming a reality in ground transportation. Advances in computing have enabled powerful methods to process the large amounts of data required to drive safely on streets. Fusing the multiple sensors present in the vehicle allows building accurate world models that improve autonomous vehicles' navigation. Among current techniques, the fusion of LIDAR, RADAR, and camera data by neural networks has shown significant improvement in object detection and in the estimation of geometry and dynamic behavior. The main methods propose using parallel networks to fuse the sensors' measurements, which increases complexity and the demand for computational resources. Fusing the data with a single neural network is still an open question and the project's main focus. The aim is to develop a single neural network architecture that fuses the three types of sensors, and to evaluate and compare the resulting approach with multi-network proposals. | en_US
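The single-network early-fusion idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' architecture: all function names, input dimensionalities, and layer sizes are hypothetical, and the flattened sensor vectors stand in for real raw LIDAR, RADAR, and camera measurements.

```python
# Hypothetical sketch of early (raw-level) fusion: the three sensor inputs
# are concatenated and processed by ONE shared network, instead of running
# one parallel branch per sensor. Shapes and layer sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def early_fusion_forward(camera, lidar, radar, params):
    """Single-network forward pass over the concatenated raw inputs."""
    x = np.concatenate([camera, lidar, radar])   # early fusion: one input vector
    h = relu(params["W1"] @ x + params["b1"])    # shared hidden layer
    return params["W2"] @ h + params["b2"]       # e.g. detection logits

# Illustrative dimensionalities (flattened raw measurements per sensor).
cam, lid, rad = rng.normal(size=64), rng.normal(size=32), rng.normal(size=16)
d_in, d_hid, d_out = 64 + 32 + 16, 24, 3

params = {
    "W1": rng.normal(scale=0.1, size=(d_hid, d_in)),
    "b1": np.zeros(d_hid),
    "W2": rng.normal(scale=0.1, size=(d_out, d_hid)),
    "b2": np.zeros(d_out),
}

logits = early_fusion_forward(cam, lid, rad, params)
```

Compared with the multi-network (late-fusion) designs the abstract contrasts against, the single shared network sees all modalities jointly from the first layer, which is what keeps parameter count and compute demand lower.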
dc.publisher | The Eurographics Association | en_US
dc.rights | Attribution 4.0 International License
dc.subject | CCS Concepts: Computing methodologies → Object identification; Object detection; Applied computing → Transportation
dc.subject | Computing methodologies
dc.subject | Object identification
dc.subject | Object detection
dc.subject | Applied computing
dc.title | Multimodal Early Raw Data Fusion for Environment Sensing in Automotive Applications | en_US
dc.description.seriesinformation | Eurographics 2022 - Posters
dc.identifier.pages | 2 pages


Except where otherwise noted, this item's license is described as Attribution 4.0 International License