Search Results
Now showing 1 - 6 of 6
Item: Virtual Instrument Performances (VIP): A Comprehensive Review (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Authors: Kyriakou, Theodoros; Alvarez de la Campa Crespo, Merce; Panayiotou, Andreas; Chrysanthou, Yiorgos; Charalambous, Panayiotis; Aristidou, Andreas
Editors: Aristidou, Andreas; Macdonnell, Rachel
Abstract: Driven by recent advancements in Extended Reality (XR), the hype around the Metaverse, and real-time computer graphics, the transformation of the performing arts, particularly in digitizing and visualizing musical experiences, is an ever-evolving landscape. This transformation offers significant potential in promoting inclusivity, fostering creativity, and enabling live performances in diverse settings. However, despite its immense potential, the field of Virtual Instrument Performances (VIP) has remained relatively unexplored due to numerous challenges. These challenges arise from the complex and multi-modal nature of musical instrument performances: the need for high-precision motion capture under occlusions, including the intricate interactions between a musician's body and fingers and the instrument; the precise synchronization and seamless integration of various sensory modalities; accommodating variations in musicians' playing styles and facial expressions; and addressing instrument-specific nuances. This comprehensive survey delves into the intersection of technology, innovation, and artistic expression in the domain of virtual instrument performances. It explores multi-modal musical performance databases and investigates a wide range of data acquisition methods, encompassing diverse motion capture techniques, facial expression recording, and various approaches for capturing audio and MIDI (Musical Instrument Digital Interface) data. The survey also explores Music Information Retrieval (MIR) tasks, with a particular emphasis on the field of Musical Performance Analysis (MPA), and offers an overview of work on Musical Instrument Performance Synthesis (MIPS), encompassing recent advancements in generative models. The ultimate aim of this survey is to unveil the technological limitations, initiate a dialogue about the current challenges, and propose promising avenues for future research at the intersection of technology and the arts.

Item: Treasurer report on audited accounts for 2023 (2024-04-18)
Author: Chrysanthou, Yiorgos
Abstract: Audited accounts for 2023.

Item: Overcoming Challenges of Cycling Motion Capturing and Building a Comprehensive Dataset (The Eurographics Association, 2024)
Authors: Kyriakou, Panayiotis; Kyriakou, Marios; Chrysanthou, Yiorgos
Editors: Pelechano, Nuria; Pettré, Julien
Abstract: This article describes a methodology for capturing cyclist motion using motion capture (mocap) hardware and details the creation of a comprehensive dataset that will be publicly available. The methodology involves a modular system and an innovative marker placement. The resulting dataset is used to create 3D visualizations and diverse data representations, shared in an online library for public access and collaborative research.

Item: LexiCrowd: A Learning Paradigm towards Text to Behaviour Parameters for Crowds (The Eurographics Association, 2024)
Authors: Lemonari, Marilena; Andreou, Nefeli; Pelechano, Nuria; Charalambous, Panayiotis; Chrysanthou, Yiorgos
Editors: Pelechano, Nuria; Pettré, Julien
Abstract: Creating believable virtual crowds, controllable by high-level prompts, is essential to creators for trading off authoring freedom against simulation quality. The flexibility and familiarity of natural language, in particular, motivate the use of text to guide the generation process. Capturing the essence of textually described crowd movements in the form of meaningful and usable parameters is challenging due to the lack of paired ground-truth data and the inherent ambiguity between the two modalities. In this work, we leverage a pre-trained Large Language Model (LLM) to create pseudo-pairs of text and behaviour labels. We train a variational auto-encoder (VAE) on the synthetic dataset, constraining the latent space into interpretable behaviour parameters by incorporating a latent label loss. To showcase our model's capabilities, we deploy a survey in which humans provide textual descriptions of real crowd datasets. We demonstrate that our model is able to parameterise unseen sentences and produce novel behaviours that capture the essence of a given sentence; our behaviour space is compatible with simulator parameters, enabling the generation of plausible crowds (text-to-crowds). We also conduct feasibility experiments exhibiting the potential of the output text embeddings for full-sentence generation from a behaviour profile.
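For readers wanting a concrete picture of the architecture the LexiCrowd abstract describes, here is a minimal sketch, assuming a PyTorch implementation, of a VAE whose latent space is constrained by an auxiliary latent label loss. This is not the authors' code: the class name LabeledLatentVAE, the embedding and latent dimensions, and the loss weights beta and gamma are all illustrative assumptions.

```python
# A minimal sketch (NOT the LexiCrowd authors' code) of a VAE with a
# latent label loss; all names and dimensions here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabeledLatentVAE(nn.Module):
    def __init__(self, text_dim=768, latent_dim=8, num_behaviours=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(text_dim, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)
        self.fc_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, text_dim))
        # Auxiliary head: predicting the pseudo behaviour label from z is
        # what constrains the latent space toward interpretable parameters.
        self.label_head = nn.Linear(latent_dim, num_behaviours)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.decoder(z), mu, logvar, self.label_head(z)

def vae_loss(x, x_hat, mu, logvar, logits, labels, beta=1e-3, gamma=1.0):
    recon = F.mse_loss(x_hat, x)                                    # reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL divergence
    label = F.cross_entropy(logits, labels)                         # latent label loss
    return recon + beta * kld + gamma * label

# Usage with synthetic stand-ins for the LLM-generated pseudo-pairs:
model = LabeledLatentVAE()
x = torch.randn(32, 768)          # stand-in for LLM text embeddings
y = torch.randint(0, 8, (32,))    # stand-in for pseudo behaviour labels
x_hat, mu, logvar, logits = model(x)
vae_loss(x, x_hat, mu, logvar, logits, y).backward()
```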
Item: Behavioral Landmarks: Inferring Interactions from Data (The Eurographics Association, 2024)
Authors: Lemonari, Marilena; Charalambous, Panayiotis; Panayiotou, Andreas; Chrysanthou, Yiorgos
Editors: Pettré, Julien; Liu, Lingjie; Averkiou, Melinos
Abstract: We aim to unravel complex agent-environment interactions from trajectories by explaining agent paths as combinations of predefined basic behaviors. We detect trajectory points that signify environment-driven behavior changes, ultimately disentangling interactions in space and time; our framework can be used for environment synthesis and authoring, as shown by our case studies.
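The Behavioral Landmarks abstract hinges on detecting trajectory points that signal behavior changes. As a toy illustration only (the paper's actual detector is not described in the abstract), a heading-based change-point heuristic might look like the sketch below; the function name, window size, and threshold are assumptions.

```python
# A toy change-point heuristic, NOT the Behavioral Landmarks method:
# flag trajectory indices where an agent's heading shifts sharply.
import numpy as np

def behaviour_change_points(traj, window=5, thresh=0.5):
    """traj: (N, 2) positions at a fixed sampling rate; returns flagged indices.

    window: number of steps over which a heading change must accumulate.
    thresh: minimum absolute heading change (radians) to flag.
    """
    v = np.diff(traj, axis=0)                          # per-step displacement
    heading = np.unwrap(np.arctan2(v[:, 1], v[:, 0]))  # continuous heading
    dh = np.abs(heading[window:] - heading[:-window])  # change over the window
    return np.where(dh > thresh)[0] + window // 2      # centre of each window

# Example: an L-shaped path yields landmarks near the corner.
path = np.concatenate([np.column_stack([np.arange(20), np.zeros(20)]),
                       np.column_stack([np.full(20, 19), np.arange(1, 21)])])
print(behaviour_change_points(path))
```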