Title: Level-of-Detail Modal Analysis for Real-time Sound Synthesis
Authors: Rausch, Dominik; Hentschel, Bernd; Kuhlen, Torsten W.
Editors: Fabrice Jaillet, Florence Zara, and Gabriel Zachmann
Date issued: 2015-11-04
Year: 2015
ISBN: 978-3-905674-98-9
DOI: 10.2312/vriphys.20151335 (https://doi.org/10.2312/vriphys.20151335)
Pages: 61-70
CCS categories: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems, Audio Output; H.5.5 [Information Interfaces and Presentation]: Sound and Music Computing, Signal Synthesis

Abstract: Modal sound synthesis is a promising approach for real-time physically-based sound synthesis. A modal analysis computes characteristic vibration modes from the geometry and material properties of scene objects. These modes allow efficient sound synthesis at run-time, but the analysis itself is computationally expensive and is therefore typically performed in a pre-processing step. In interactive applications, however, objects may be created or modified at run-time. Unless the new shapes are known upfront, the modal data cannot be pre-computed, and the modal analysis has to be performed at run-time. In this paper, we present a system to compute modal sound data at run-time for interactive applications. We evaluate the computational requirements of the modal analysis to determine the computation time for objects of different complexity. Based on these limits, we propose different levels-of-detail for the modal analysis, using geometric approximations that trade accuracy for speed, and evaluate the errors introduced by lower-resolution results. Additionally, we present an asynchronous architecture to distribute and prioritize modal analysis computations.
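
For readers unfamiliar with the technique named in the abstract, the following is a minimal sketch of the standard modal synthesis formulation; it is background, not quoted from the paper, and the symbols (mode count K, modal gains a_k, damping coefficients d_k, and frequencies f_k) are assumed notation.

% Standard modal sound model (assumed formulation, for illustration only):
% the object's impulse response is a sum of exponentially damped sinusoids,
% one term per vibration mode found by the modal analysis.
% a_k : modal gain (depends on the excitation point), d_k : damping, f_k : frequency.
\[
  s(t) \;=\; \sum_{k=1}^{K} a_k \, e^{-d_k t} \sin\!\left(2\pi f_k t\right), \qquad t \ge 0
\]
% The triples (f_k, d_k, a_k) come from the modal analysis of the object's
% geometry and material parameters; in this paper that analysis is run at
% run-time rather than in a pre-processing step.

Because each term is evaluated independently per sample, the run-time cost of synthesis scales with the number of retained modes, which is one reason the expensive step is the analysis rather than the playback.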