Issue 3: EG Conference Issue
https://diglib.eg.org:443/handle/10.2312/96

Binding Virtual Environments to Toolkit Capabilities
https://diglib.eg.org:443/handle/10.2312/8846
2000-01-01
Smith, Shamus P.; Duke, David J.
There are many toolkits and development environments that aid the process of constructing virtual environment applications. Many of these development environments encourage customising a virtual environment's design through rapid prototyping within the confines of a toolkit's capabilities. Thus the choice of the technology and its associated support is made independently of the end-use requirements of the final system. This can bias a virtual environment's design with implementation-based constraints. We propose an alternative approach: considering virtual environment requirements in the context of an inspectable design model, in order to identify the requirements that a toolkit will need to support. In the context of an example, we present a selection of design requirements that we consider important for virtual environment design in general. We explore how these requirements might be mapped to different capabilities using the Virtual Reality Modelling Language (VRML) as a concrete example of a platform technology.
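The requirement-to-capability mapping the abstract describes might be sketched as follows. This is a toy illustration, not the paper's method: the capability table and all names are assumptions, standing in for whatever a real VRML platform advertises.

```python
# Hypothetical capability table for a VRML-like platform; the entries are
# illustrative assumptions, not taken from the paper or the VRML standard.
VRML_CAPABILITIES = {
    "geometry", "texturing", "touch_sensors", "timed_animation", "scripting",
}

def unsupported(requirements, capabilities=VRML_CAPABILITIES):
    """Return, sorted, the design requirements the platform cannot satisfy."""
    return sorted(set(requirements) - capabilities)

# A design model yields a requirement list; checking it against the toolkit
# surfaces the gap before any implementation-based constraint creeps in.
design = ["geometry", "scripting", "haptic_feedback"]
print(unsupported(design))  # -> ['haptic_feedback']
```

The point of inspecting requirements this way is that the mismatch (here, `haptic_feedback`) is found against the design model rather than discovered mid-prototype.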
Modelling virtual cities dedicated to behavioural animation
https://diglib.eg.org:443/handle/10.2312/8845
2000-01-01
Thomas, Gwenola; Donikian, Stephane
In order to populate virtual cities, it is necessary to specify the behaviour of dynamic entities such as pedestrians or car drivers. Since a complete mental model based on vision and image processing cannot be constructed in real time from purely geometrical information, higher levels of information are needed in a model of the virtual environment. For example, the autonomous actors of a virtual world can exploit knowledge of the environment's topology to navigate through it. In this article, we present a model of virtual urban environments using structures and information suitable for behavioural animation. Thanks to this knowledge, autonomous virtual actors can behave like pedestrians or car drivers in a complex city environment. A city modeler based on this model of the urban environment has been designed; it enables complex urban environments for behavioural animation to be produced automatically.
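The topological navigation the abstract mentions can be sketched as a graph search over named places rather than raw geometry. This is a minimal illustration under assumed names; the actual city model in the paper is far richer.

```python
from collections import deque

# Illustrative topology of a tiny city: nodes are places, edges are
# direct connections. The place names are invented for this sketch.
CITY = {
    "home":       ["crossing_a"],
    "crossing_a": ["home", "shop", "crossing_b"],
    "crossing_b": ["crossing_a", "park"],
    "shop":       ["crossing_a"],
    "park":       ["crossing_b"],
}

def route(topology, start, goal):
    """Breadth-first search over the topological graph: a pedestrian
    plans at the level of places, not geometry."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route(CITY, "home", "park"))
# -> ['home', 'crossing_a', 'crossing_b', 'park']
```

Because the plan is purely topological, the expensive geometric reasoning is confined to local steering between adjacent places.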
LCTS: Ray Shooting using Longest Common Traversal Sequences
https://diglib.eg.org:443/handle/10.2312/8844
2000-01-01
Havran, V.; Bittner, J.
We describe two new techniques of ray shooting acceleration that exploit the traversal coherence of a spatial hierarchy. The first technique determines a sequence of adjacent leaf cells of the hierarchy that is pierced by all rays contained within a certain convex shaft. This sequence is used to accelerate ray shooting for all remaining rays within the shaft. The second technique establishes a cut of the hierarchy containing the nodes at which the hierarchy traversal can no longer be predetermined for all rays within a given shaft. This cut is used to initiate the traversal for all remaining rays in the shaft. The description of the methods is followed by results obtained from a practical implementation.
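The first technique's core idea, reusing one ray's leaf-cell sequence for every ray in a coherent shaft, can be illustrated in a deliberately simplified setting: horizontal rays in a uniform grid, where the traversal sequence of a pilot ray is trivially shared by its neighbours. This is a toy model of the coherence argument, not the paper's hierarchy-based algorithm.

```python
def cells_pierced(y, cell_size, n_cells):
    """Sequence of (row, col) grid cells pierced by a horizontal ray at
    height y. For an axis-aligned ray this is simply its row, so every
    ray in the same row shares one traversal sequence."""
    row = int(y // cell_size)
    return [(row, col) for col in range(n_cells)]

# A "shaft" of nearby horizontal rays: compute the pilot ray's sequence
# once, then reuse it for all remaining rays instead of re-traversing.
pilot = cells_pierced(2.5, cell_size=1.0, n_cells=4)
shaft_ys = [2.1, 2.5, 2.9]
assert all(cells_pierced(y, 1.0, 4) == pilot for y in shaft_ys)
print(pilot)  # -> [(2, 0), (2, 1), (2, 2), (2, 3)]
```

In the paper's setting the rays are arbitrary and the structure is a hierarchy, so the shared sequence must be established by shaft-cell intersection tests rather than read off a grid row; the payoff, skipping per-ray traversal work, is the same.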
Automatic Generation of Virtual Woodblocks and Multicolor Woodblock Printing
https://diglib.eg.org:443/handle/10.2312/8843
2000-01-01
Mizuno, S.; Kasaura, T.; Okouchi, T.; Yamamoto, S.; Okada, M.; Toriwaki, J.
In this paper, we study a method to synthesize a multicolor virtual woodblock print by using several virtual woodblocks. The method consists of two stages, carving and printing, which together synthesize a virtual print. In the carving stage, virtual woodblocks are generated by a user with the support of an automatic carving method based on feature extraction from a gray-value image. Woodblocks can also be generated fully automatically by using a full-color image as a draft. In the printing stage, a "paper sheet", a "printing brush" and "ink" are prepared in addition to the "woodblock" in the virtual space, and the user synthesizes a woodblock print interactively. The printing factors, the color of the ink, a moisture value and the wood grain, change the finish of the print. By printing several virtual woodblocks onto a paper sheet in succession, the printed image of each woodblock is combined according to the printing factors, and a multicolor virtual print is synthesized.
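The compositing step of the printing stage might be sketched as follows. The blending model is an assumption for illustration (linear blend weighted by moisture, skipping carved-away cells); the paper's actual factor model also involves brush behaviour and grain.

```python
def print_block(paper, carved_mask, ink, moisture):
    """Blend one woodblock's ink into the paper image.
    carved_mask: 1 where the block surface remains (receives ink),
    0 where it was carved away. moisture in [0, 1] scales how strongly
    the ink replaces the underlying paper color. Illustrative model."""
    out = []
    for row_p, row_m in zip(paper, carved_mask):
        out.append([
            tuple(round((1 - moisture) * p + moisture * i)
                  for p, i in zip(pix, ink))
            if keep else pix
            for pix, keep in zip(row_p, row_m)
        ])
    return out

paper = [[(255, 255, 255)] * 2 for _ in range(2)]   # white sheet
mask  = [[1, 0], [0, 1]]                            # 1 = uncarved surface
result = print_block(paper, mask, ink=(200, 0, 0), moisture=0.5)
print(result[0])  # -> [(228, 128, 128), (255, 255, 255)]
```

Printing several blocks in succession is then just repeated application of `print_block` with different masks and inks, each pass blending over the result of the previous one.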