Title: Synthesizing Two-character Interactions by Merging Captured Interaction Samples with their Spacetime Relationships
Authors: Chan, Jacky C. P.; Tang, Jeff K. T.; Leung, Howard
Editors: B. Levy, X. Tong, and K. Yin
Date issued: 2013
Date available: 2015-02-28
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.12210
Subjects: I.3.7 [Computer Graphics]: Three Dimensional Graphics and Realism; Animation

Abstract: Existing synthesis methods for closely interacting virtual characters rely on user-specified constraints such as reaching positions and the distance between body parts. In this paper, we present a novel method for synthesizing new interacting motion by composing two existing interacting motion samples without the need to specify constraints manually. Our method automatically detects the type of interaction contained in the inputs and determines a suitable timing for the interaction composition by analyzing the spacetime relationships of the input characters. To preserve the features of the inputs in the synthesized interaction, the two inputs are aligned and normalized according to the relative distance and orientation of the characters in the inputs. Using a linear optimization method, the output is the optimal solution that preserves the close interaction between the two characters and the local details of each character's behavior. The output animations demonstrate that our method is able to create interactions in new styles that combine the characteristics of the original inputs.
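The linear optimization mentioned in the abstract can be pictured as a weighted trade-off between two quadratic terms. The sketch below is a hypothetical illustration only, not the paper's solver: it balances interaction-derived joint targets against a character's own input poses, where the per-joint minimizer of the weighted least-squares objective reduces to a weighted average. The function name `blend`, the weights `w_i` and `w_d`, and the toy data are all assumptions made for this example.

```python
import numpy as np

# Illustrative only: solve min_x  w_i*||x - r||^2 + w_d*||x - p||^2 per joint,
# where r are interaction-derived targets (e.g. positions implied by the
# relative offsets to the other character) and p are the character's own
# input-motion positions. The closed-form minimizer is a weighted average.
def blend(interaction_targets, detail_targets, w_i=1.0, w_d=0.25):
    r = np.asarray(interaction_targets, dtype=float)
    p = np.asarray(detail_targets, dtype=float)
    return (w_i * r + w_d * p) / (w_i + w_d)

# Toy data: one joint over three frames, 3-D positions (metres).
r = np.array([[0.0, 1.6, 0.4], [0.1, 1.6, 0.5], [0.2, 1.6, 0.6]])
p = np.array([[0.0, 1.5, 0.3], [0.1, 1.5, 0.4], [0.2, 1.5, 0.5]])
print(blend(r, p))  # positions pulled mostly toward the interaction targets
```

Larger w_i favours preserving the close interaction between the two characters; larger w_d favours preserving each character's local motion details, mirroring the trade-off described in the abstract.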