Title: VibVid: VIBration Estimation from VIDeo by using Neural Network
Authors: Yoshida, Kentaro; Inoue, Seki; Makino, Yasutoshi; Shinoda, Hiroyuki
Editors: Robert W. Lindeman, Gerd Bruder, and Daisuke Iwai
Date issued: 2017-11-21
Year: 2017
ISBN: 978-3-03868-038-3
ISSN: 1727-530X
DOI: https://doi.org/10.2312/egve.20171336
URL: https://diglib.eg.org:443/handle/10.2312/egve20171336
Pages: 37-44

Abstract: Along with recent advances in video technology, there is a growing need to add tactile sensation to video. Many models for estimating appropriate tactile information from the images and sounds contained in videos have been reported. In this paper, we propose VibVid, a method that uses machine learning to estimate the tactile signal from video with audio, and that can handle videos in which the visual and tactile information are not obviously related. As an example, we estimated and imparted the vibration transmitted to a tennis racket from first-person-view tennis video. The waveform generated by VibVid closely matched the actual vibration waveform. We then conducted a subject experiment with 20 participants, which showed good results on four evaluation criteria: harmony, fun, immersiveness, and realism.

CCS Concepts: Human-centered computing → Haptic devices; Theory of computation → Models of learning; Hardware → Haptic devices
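The abstract describes a neural network that maps video-plus-audio input to a vibration waveform. A minimal sketch of that idea is below; the feature sizes, layer widths, windowing scheme, and function names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sketch: a small feedforward network mapping one window of
# video + audio features to a short segment of vibration waveform.
# All dimensions below are assumptions for illustration only.

rng = np.random.default_rng(0)

VIDEO_FEATS = 64    # e.g. per-frame motion/appearance features (assumed)
AUDIO_FEATS = 32    # e.g. mel-spectrogram bins for the same window (assumed)
VIB_SAMPLES = 128   # vibration samples predicted per window (assumed)

def init_layer(n_in, n_out):
    """Small random weights and zero biases for one dense layer."""
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(VIDEO_FEATS + AUDIO_FEATS, 256)
W2, b2 = init_layer(256, VIB_SAMPLES)

def predict_vibration(video_feat, audio_feat):
    """Forward pass: concatenated features -> ReLU hidden layer -> waveform."""
    x = np.concatenate([video_feat, audio_feat])
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden activation
    return h @ W2 + b2                 # linear output: raw waveform samples

segment = predict_vibration(rng.normal(size=VIDEO_FEATS),
                            rng.normal(size=AUDIO_FEATS))
print(segment.shape)  # (128,)
```

In a trained system, successive predicted segments would be concatenated and played back through a vibrotactile actuator in sync with the video; here the weights are random, so only the input/output shapes are meaningful.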