SBIM '17: Sketch Based Interfaces and Modeling 2017
ISBN: 978-1-4503-5080-8
Collection: https://diglib.eg.org/handle/10.2312/2631852

Sketch and Shade: An interactive assistant for sketching and shading
https://diglib.eg.org/handle/10.2312/sbim2017a09
Published: 2017-01-01
Parakkat, Amal Dev; Joshi, Sarang Anil; Pundarikaksha, Uday Bondi; Muthuganapathy, Ramanathan
Editors: Holger Winnemoeller and Lyn Bartram
We present a drawing assistant for sketching and for helping users shade a hand-drawn sketch. The augmented-reality-based system takes a sketch made by a professional and uses it to guide inexperienced users through sketching and shading. The input image is converted to a set of points, based on simple heuristics, to provide a "connect the dots" interface that aids the user in sketching. With the help of a 2.5D mesh generated by our algorithm, the system suggests the colors that can be applied in different parts of the sketch. The system was tested with users of different age groups and skill levels, indicating its usefulness.
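The abstract does not specify the point-extraction heuristic, but the "connect the dots" idea can be illustrated with a minimal sketch: resample an extracted contour polyline at roughly uniform arc-length spacing so the user can join the dots in order. The function name and the uniform-spacing rule are assumptions for illustration, not the paper's method.

```python
import math

def resample_contour(points, spacing):
    """Resample a polyline at roughly uniform arc-length spacing.

    A stand-in for the paper's (unspecified) point-extraction heuristic:
    walk the contour and emit a dot every `spacing` units, in drawing
    order, so a user can "connect the dots".
    """
    dots = [points[0]]
    carried = 0.0  # arc length already covered since the last emitted dot
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0:
            continue
        t = spacing - carried
        while t <= seg:
            u = t / seg
            dots.append((x0 + u * (x1 - x0), y0 + u * (y1 - y0)))
            t += spacing
        carried = seg - (t - spacing)
    return dots

# A unit-square outline sampled every 0.5 units: start point plus 8 dots
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(len(resample_contour(square, 0.5)))
```

A denser spacing yields more dots and an easier trace; a real assistant would also adapt spacing to curvature so corners are preserved.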
Characterizing User Behavior for Speech and Sketch-based Video Retrieval Interfaces
https://diglib.eg.org/handle/10.2312/sbim2017a08
Published: 2017-01-01
Altıok, Ozan Can; Sezgin, Tevfik Metin
From a user-interaction perspective, speech and sketching make a good couple for describing motion: speech allows easy specification of content, events, and relationships, while sketching brings in spatial expressiveness. Yet we have insufficient knowledge of how sketching and speech can be used for motion-based video retrieval, because no existing retrieval systems support such interaction. In this paper, we describe a Wizard-of-Oz protocol and a set of tools that we have developed to engage users in a sketch- and speech-based video retrieval task. We report how the tools and the protocol fit together, using retrieval of soccer videos as a use-case scenario. Our software is highly customizable, and our protocol is easy to follow. We believe that together they will serve as a convenient and powerful duo for studying a wide range of multi-modal use cases.
Modeling Go: A mobile sketch-based modeling system for extracting objects
https://diglib.eg.org/handle/10.2312/sbim2017a07
Published: 2017-01-01
Lai, Chun-An; Chiang, Pei-Ying
This article presents an easy-to-use mobile application that allows users to create 3D digital copies of objects of interest anywhere and anytime. An advanced 3-sweep modeling technique is developed to construct 3D primitives not only from generalized cylinders and cuboids, but also from objects with symmetrical or non-uniformly scaled profiles. In addition, our system supports texture and structure refinement, combining results created from multiple source images. The constructed 3D model is a combination of our 3D primitives, and the combined result preserves features that may not be visible in a single photo.
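The generalized-cylinder primitive underlying 3-sweep techniques can be sketched as stacking scaled cross-sections along an axis. The code below is illustrative only: the function name, the straight z-axis, and the circular profile are assumptions, and the paper's interactive 3-sweep construction is not modeled.

```python
import math

def sweep_cylinder(radii, n_sides=8):
    """Build vertices of a generalized cylinder.

    One circular cross-section per entry in `radii`, stacked along the
    z-axis. Varying the radii per slice gives the non-uniformly scaled
    profiles mentioned in the abstract.
    """
    verts = []
    for z, r in enumerate(radii):
        for k in range(n_sides):
            a = 2 * math.pi * k / n_sides
            verts.append((r * math.cos(a), r * math.sin(a), float(z)))
    return verts

# 3 cross-sections x 8 sides = 24 vertices; middle slice is pinched in
print(len(sweep_cylinder([1.0, 0.5, 1.0])))
```

A full pipeline would also triangulate between adjacent slices and bend the axis along a user-drawn sweep curve.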
Shading with Painterly Filtered Layers: A Technique to Obtain Painterly Portrait Animations
https://diglib.eg.org/handle/10.2312/sbim2017a06
Published: 2017-01-01
Castaneda, Saif; Akleman, Ergun
In this manuscript, we describe a process for creating still and/or animated portrait paintings to be shown in the Expressive Art Exhibit. Our process consists of two stages: (1) creating control textures for a Barycentric shader, using color information gathered from photographs, to provide realistic-looking skin rendering; (2) filtering and compositing the layers of images obtained from the control textures, which correspond to effects such as diffuse, specular, and ambient. As a proof of concept, we have created a few rigid-body animations of painterly portraits under different lighting conditions.
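The compositing step in stage (2) can be sketched as a weighted sum of per-pixel shading layers. This is a minimal illustration under assumptions: the weights kd/ks/ka and additive clamped combination are stand-ins, and the paper's Barycentric shader and painterly filtering are not reproduced here.

```python
def composite(diffuse, specular, ambient, kd=1.0, ks=1.0, ka=1.0):
    """Additively composite per-pixel shading layers, clamped to [0, 1].

    Each layer is a flat list of grayscale pixel values in [0, 1]. The
    described system filters each layer painterly-style before this step;
    only the weighted-sum compositing is shown (weights are assumptions).
    """
    return [min(1.0, kd * d + ks * s + ka * a)
            for d, s, a in zip(diffuse, specular, ambient)]

# Two pixels: the second saturates and is clamped to 1.0
print(composite([0.4, 0.8], [0.1, 0.5], [0.1, 0.1]))
```

Compositing per layer (rather than shading once) is what lets each effect be filtered with a different painterly look before the final sum.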