Jaenicke, H.; Borgo, R.; Mason, J. S. D.; Chen, M.
SoundRiver: Semantically-Rich Sound Illustration
Computer Graphics Forum (ISSN 1467-8659), 2010, pp. 357-366
DOI: 10.1111/j.1467-8659.2009.01605.x (https://doi.org/10.1111/j.1467-8659.2009.01605.x)

Abstract: Sound is an integral part of most movies and videos. In many situations, viewers of a video are unable to hear the sound track, for example, when watching it in fast-forward mode, when the viewer is hearing-impaired, or when the plot is presented as a storyboard. In this paper, we present an automated visualization solution to such problems. The system first detects the common components (such as music, speech, rain, and explosions) of a sound track, then maps them to a collection of programmable visual metaphors, and generates a composite visualization. This form of sound visualization, referred to as SoundRiver, can also be used to augment various forms of video abstraction and annotated key frames, and to enhance graphical user interfaces for video-handling software. The SoundRiver conveys more semantic information to the viewer than traditional graphical representations of sound, such as phonautographs, spectrograms, or artistic audiovisual animations.