Title: Multimodal Early Raw Data Fusion for Environment Sensing in Automotive Applications
Authors: Pederiva, Marcelo Eduardo; Martino, José Mario De; Zimmer, Alessandro
Editors: Sauvage, Basile; Hasic-Telalovic, Jasminka
Date issued: 2022 (accessioned/available 2022-04-22)
ISBN: 978-3-03868-171-7
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egp.20221006
URI: https://diglib.eg.org:443/handle/10.2312/egp20221006
Pages: 15-16 (2 pages)
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies --> Object identification; Object detection; Applied computing --> Transportation
Keywords: Computing methodologies; Object identification; Object detection; Applied computing; Transportation

Abstract: Autonomous vehicles come ever closer to becoming a reality in ground transportation. Advances in computing have enabled powerful methods to process the large amounts of data required to drive safely on streets. Fusing the multiple sensors present in a vehicle allows building accurate world models that improve autonomous vehicles' navigation. Among current techniques, the fusion of LIDAR, RADAR, and camera data by neural networks has shown significant improvements in object detection and in the estimation of geometry and dynamic behavior. Most existing methods propose parallel networks to fuse the sensors' measurements, increasing complexity and the demand for computational resources. Fusing the data with a single neural network remains an open question and is the main focus of this project. The aim is to develop a single neural-network architecture that fuses the three sensor types, and to evaluate and compare the resulting approach with multi-network proposals.
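The single-network early-fusion idea the abstract describes — concatenating raw measurements from all three modalities into one input processed jointly, rather than routing each sensor through its own parallel branch — can be illustrated with a minimal stdlib-only sketch. All shapes, sensor sizes, and the one-layer "network" below are hypothetical placeholders, not the architecture from the paper.

```python
import random

random.seed(0)

# Hypothetical flattened raw measurements per sensor (sizes are
# illustrative only, not taken from the paper).
lidar  = [random.random() for _ in range(8)]   # e.g. range returns
radar  = [random.random() for _ in range(4)]   # e.g. range/velocity bins
camera = [random.random() for _ in range(16)]  # e.g. image-patch pixels

# Early (raw-data) fusion: concatenate all modalities into ONE input
# vector, so a single network processes them jointly instead of three
# parallel per-sensor networks whose outputs are merged later.
fused = lidar + radar + camera  # 28-dimensional joint input

def dense(x, w, b):
    """One fully connected layer with ReLU: y = max(0, W x + b)."""
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

# A single hidden layer stands in for the shared network; weights are
# random for illustration (a real model would be trained end to end).
hidden = 6
W = [[random.gauss(0.0, 0.1) for _ in range(len(fused))]
     for _ in range(hidden)]
b = [0.0] * hidden

features = dense(fused, W, b)
print(len(fused), len(features))  # 28 6
```

The contrast with the multi-network designs the abstract mentions is that, here, one weight matrix sees all modalities at once, so cross-sensor interactions are learned from the raw data rather than merged after separate per-sensor processing.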