2023
Browsing 2023 by Title
Now showing 1 - 20 of 27
Item Computational Modeling and Design of Nonlinear Mechanical Systems and Materials (2023) Tang, Pengbin
Nonlinear mechanical systems and materials are broadly used in diverse fields. However, their modeling and design are nontrivial, as they require a complete understanding of their internal nonlinearities and other phenomena. To enable their efficient design, we must first introduce computational models that accurately characterize their complex behavior. Furthermore, new inverse design techniques are also required to capture how the behavior changes when we change the design parameters of nonlinear mechanical systems and materials. Therefore, in this thesis, we introduce three novel methods for the computational modeling and design of nonlinear mechanical systems and materials. In the first article, we address the design problem of nonlinear mechanical systems exhibiting stable periodic motions in response to a periodic force. We present a computational method that combines a frequency-domain approach to dynamical simulation with powerful sensitivity analysis for design optimization, in order to design compliant mechanical systems with large-amplitude oscillations. Our method is versatile and can be applied to various types of compliant mechanical systems. We validate its effectiveness by fabricating and evaluating several physical prototypes. Next, we focus on the computational modeling and mechanical characterization of contact-dominated nonlinear materials, particularly Discrete Interlocking Materials (DIM), which are generalized chainmail fabrics made of quasi-rigid interlocking elements. Unlike conventional elastic materials, for which deformation and restoring forces are directly coupled, the mechanics of DIM are governed by contacts between individual elements that give rise to anisotropic kinematic deformation constraints. To replicate the biphasic behavior of DIM without simulating expensive microscale structures, we introduce an efficient anisotropic strain-limiting method based on second-order cone programming (SOCP). Additionally, to comprehensively characterize the strong anisotropy, complex coupling, and other nonlinear phenomena of DIM, we introduce a novel homogenization approach for distilling macroscale deformation limits from microscale simulations and develop a data-driven macromechanical model for simulating DIM with homogenized deformation constraints.
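
As a rough illustration of the kind of anisotropic strain limiting described above, the hedged sketch below projects the stretches of a single toy element back into direction-dependent bounds using a second-order cone program, built with the cvxpy library. The variable names, bound values, and single-element setup are illustrative assumptions, not the thesis's actual formulation.

```python
# Toy anisotropic strain limiting via second-order cone programming (SOCP).
# Assumption-laden sketch: a single "element" with stretches (s_u, s_v) along
# the two material directions is projected to the closest admissible state.
import cvxpy as cp
import numpy as np

s_target = np.array([1.35, 0.90])   # hypothetical stretches from a simulation step
limit_u, limit_v = 1.10, 1.25       # hypothetical per-direction deformation limits

s = cp.Variable(2)                  # admissible stretch to solve for
t = cp.Variable()                   # epigraph variable for the cone constraint

constraints = [
    cp.SOC(t, s - np.ones(2)),      # ||s - 1||_2 <= t  (second-order cone)
    t <= 0.5,                       # hypothetical coupled bound on total strain
    s[0] <= limit_u,                # anisotropic per-direction limits
    s[1] <= limit_v,
    s >= 0.5,                       # avoid element inversion/collapse
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(s - s_target)), constraints)
prob.solve()
print("projected stretches:", s.value)
```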

Item Data-centric Design and Training of Deep Neural Networks with Multiple Data Modalities for Vision-based Perception Systems (University of the Basque Country, 2023-06-12) Aranjuelo, Nerea
The advances in computer vision and machine learning have revolutionized the ability to build systems that process and interpret digital data, enabling them to mimic human perception and paving the way for a wide range of applications. In recent years, both disciplines have made significant progress, fueled by advances in deep learning techniques. Deep learning is a discipline that uses deep neural networks (DNNs) to teach machines to recognize patterns and make predictions based on data. Deep learning-based perception systems are increasingly prevalent in diverse fields, where humans and machines collaborate to combine their strengths. These fields include the automotive, industrial, and medical domains, where enhancing safety, supporting diagnosis, and automating repetitive tasks are among the goals. However, data are one of the key factors behind the success of deep learning algorithms.
Data dependency strongly limits the creation and success of a new DNN. The availability of quality data for solving a specific problem is essential, but such data are hard, or even impracticable, to obtain in most developments. Data-centric artificial intelligence emphasizes the importance of using high-quality data that effectively conveys what a model must learn. Motivated by the challenges and necessity of data, this thesis formulates and validates five hypotheses on the acquisition and impact of data in DNN design and training. Specifically, we investigate and propose different methodologies to obtain suitable data for training DNNs in problems with limited access to large-scale data sources. We explore two potential solutions for obtaining data, both of which rely on synthetic data generation. Firstly, we investigate the process of generating synthetic training data using 3D graphics-based models and the impact of different design choices on the accuracy of the obtained DNNs. Beyond that, we propose a methodology to automate the data generation process and generate varied annotated data by replicating a 3D custom environment given an input configuration file. Secondly, we propose a generative adversarial network (GAN) that generates annotated images using both limited annotated data and unannotated in-the-wild data. Typically, limited annotated datasets have accurate annotations but lack realism and variability, which can be compensated for by the in-the-wild data. We analyze the suitability of the data generated with our GAN-based method for DNN training. This thesis also presents a data-oriented DNN design, as data can present very different properties depending on their source. We differentiate sources based on the sensor modality used to obtain the data (e.g., camera, LiDAR) or the data generation domain (e.g., real, synthetic). On the one hand, we redesign an image-oriented object detection DNN architecture to process point clouds from the LiDAR sensor and optionally incorporate information from RGB images. On the other hand, we adapt a DNN to learn from both real and synthetic images while minimizing the domain gap of the features learned from the data. We have validated our formulated hypotheses on various unresolved computer vision problems that are critical for numerous real-world vision-based systems. Our findings demonstrate that synthetic data generated using 3D models and environments are suitable for DNN training. However, we also highlight that design choices during the generation process, such as lighting and camera distortion, significantly affect the accuracy of the resulting DNN. Additionally, we show that a simulated 3D environment can assist in designing better sensor setups for a target task. Furthermore, we demonstrate that GANs offer an alternative means of generating training data by exploiting labeled and existing unlabeled data to generate new samples that are suitable for DNN training without a simulation environment. Finally, we show that adapting DNN design and training to the data modality and source can increase model accuracy. More specifically, we demonstrate that modifying a predefined architecture designed for images to accommodate the peculiarities of point clouds results in state-of-the-art performance in 3D object detection. The DNN can be designed to handle data from a single modality or leverage data from different sources. Furthermore, when training with real and synthetic data, considering their domain gap and designing the DNN architecture accordingly improves model accuracy.
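
One common way to train on real and synthetic images while shrinking the domain gap of learned features is domain-adversarial training with a gradient reversal layer (Ganin et al.). The PyTorch sketch below is a generic illustration of that idea under stated assumptions, not the thesis's actual architecture.

```python
# Generic domain-adversarial sketch: a gradient reversal layer makes the
# feature extractor learn features the real-vs-synthetic classifier cannot
# tell apart. Layer sizes and the lambda value are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the features.
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
task_head = nn.Linear(16, 10)       # e.g., object classes
domain_head = nn.Linear(16, 2)      # real vs. synthetic

x = torch.randn(8, 3, 64, 64)       # mixed batch of real and synthetic images
f = features(x)
task_logits = task_head(f)
domain_logits = domain_head(GradReverse.apply(f, 0.1))
```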

Item Deep Learning with Differentiable Rasterization for Vector Graphics analysis (2023-11) Reddy, Pradyumna
Images and 2D geometric representations are two different ways of representing visual information. Vector graphics, i.e., 2D geometric representations, are widely used to represent patterns, fonts, logos, digital artworks, and graphic designs. In a 2D geometric representation, a shape is described using geometric objects, such as points, lines, and curves, and the relationships between these objects. Unlike pixel-based image representations, geometric representations enjoy several advantages: they are compact, easy to resize to an arbitrary resolution without loss of visual quality, and amenable to stylization by simply adjusting parameters such as size, color, or stroke type. As a result, they are favored by graphic artists and designers. Advances in deep learning have resulted in unprecedented progress in many classical image manipulation tasks. In the context of learning-based methods, while a vast body of work has focused on algorithms for raster images, only a handful of options exist for 2D geometric representations. The goal of this thesis is to develop deep learning-based methods for the analysis of visual information using 2D geometric representations. Our key hypothesis is that using image-domain statistics to optimize the attributes of a geometric representation enables accurate capture and manipulation of the visual information. In this thesis, we explore three methods with this insight as our point of departure. First, we tackle the problem of editing and expanding patterns that are encoded as flat images. We present a Differentiable Compositing function that enables propagating gradients between a raster point-pattern image and the parameters of the individual elements in the pattern. We use these gradients to decompose raster pattern images into layered graphical objects, thereby facilitating accurate editing of the patterns. We also show how the Differentiable Compositing function, along with a perceptual style loss, can be used to expand a pattern. Next, we note that current neural network models specialized for vector graphics require explicit supervision from vector graphics data at training time. This is not ideal, because large-scale high-quality vector graphics datasets are difficult to obtain. We introduce Im2Vec, a neural network architecture that can estimate complex vector graphics with varying topologies. We leverage the Differentiable Compositing function to robustly train the network in an end-to-end manner with only indirect supervision from readily available raster training images (i.e., with no vector counterparts). Finally, fonts represented in a vector format cannot benefit from recent network architectures for neural representations, while alternatively rasterizing the vectors into images of fixed resolution suffers from loss of data fidelity. We propose multi-implicits, which represent complex fonts as a superposition of a set of permutation-invariant neural implicit functions, without compromising font-specific features such as edges and corners. For all three tasks, we show the utility of our insight by evaluating each method extensively and showing superior performance over baseline methods.
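
The core mechanism here, propagating gradients from a raster image back to element parameters, can be sketched with a toy differentiable rasterizer. The example below composites soft Gaussian blobs and optimizes their positions to match a target raster; it is an illustrative stand-in, not the thesis's Differentiable Compositing function, which handles layering, occlusion, and richer element parameters.

```python
# Toy "differentiable compositing": raster a few soft elements (Gaussian
# blobs), compare with a target raster, and push gradients back to the
# element positions.
import torch

H = W = 64
ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")

def rasterize(centers, sigma=3.0):
    # Sum of Gaussian footprints, clamped to [0, 1]; differentiable in centers.
    img = torch.zeros(H, W)
    for c in centers:
        img = img + torch.exp(-((xs - c[0])**2 + (ys - c[1])**2) / (2 * sigma**2))
    return img.clamp(max=1.0)

target = rasterize(torch.tensor([[16.0, 16.0], [48.0, 40.0]]))
centers = torch.tensor([[30.0, 30.0], [34.0, 34.0]], requires_grad=True)
opt = torch.optim.Adam([centers], lr=0.5)
for _ in range(200):
    opt.zero_grad()
    loss = ((rasterize(centers) - target)**2).mean()
    loss.backward()
    opt.step()
```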

Item Design and Applications of Perception-Based Mesh, Image, and Display-Related Quality Metrics (Die Saarländische Universitäts- und Landesbibliothek (SULB), 2023) Wolski, Krzysztof
Computer graphics has become an integral part of our daily lives, enabling immersive experiences in movies, video games, virtual reality, and augmented reality. However, the various stages of the computer graphics pipeline, from content generation to rendering and display, present their own challenges, which can reduce visual quality and thus degrade the overall experience. Perceptual metrics are crucial for evaluating visual quality. However, many existing methods have limitations in reproducing human perception accurately, as they must account for the complexities of the human visual system. This dissertation aims to tackle these issues by proposing innovative advancements across different pipeline stages. Firstly, it introduces a novel neural-based visibility metric to improve the assessment of near-threshold image distortions. Secondly, it addresses shortcomings of mesh quality metrics, vital for enhancing the integrity of three-dimensional models. Thirdly, the dissertation focuses on optimizing the visual quality of animated content while considering display characteristics and a limited rendering budget. Finally, the work delves into the challenges specific to stereo vision in a virtual reality setting. The ultimate objective is to enable the creation of more efficient and automated designs for virtual experiences, benefiting fields like entertainment and education. Through these contributions, this research seeks to elevate the standard of visual quality in computer graphics, enriching the way we interact with virtual worlds.

Item Efficient and Expressive Microfacet Models (Charles University, 2023) Atanasov, Asen
In realistic appearance modeling, rough surfaces that have microscopic details are described using so-called microfacet models. These include analytical models that statistically define a physically-based microsurface. Such models are extensively used in practice because they are inexpensive to compute and offer considerable flexibility in terms of appearance control. Also, small but visible surface features can easily be added to them through the use of a normal map. However, there are still areas in which this general type of model can be improved: important features like anisotropy control sometimes lack analytic solutions, and the efficient rendering of normal maps requires accurate and general filtering algorithms. We advance the state of the art of such models in these areas: we derive analytic anisotropic models, reformulate the filtering problem, and propose an efficient filtering algorithm based on a novel filtering data structure. Specifically, we derive a general result in microfacet theory: given an arbitrary microsurface defined via standard microfacet statistics, we show how to construct the statistics of its linearly transformed counterparts. This leads to a simple closed-form expression for anisotropic variations of a given surface that generalizes previous work by supporting all microfacet distributions and all invertible tangential linear transformations. As a consequence, our approach allows transferring macrosurface deformations to the microsurface, so as to render its corresponding complex anisotropic appearance. Furthermore, we analyze the filtering of the combined effect of a microfacet BRDF and a normal map. We show that the filtering problem can be expressed as an Integral Histogram (IH) evaluation. Due to the high memory usage of IHs, we develop the Inverse Bin Map (IBM): a form of IH that is very compact and fast to build. Based on the IBM, we present a highly memory-efficient technique for filtering normal maps that is targeted at the accurate rendering of glints but, in contrast with previous approaches, also offers roughness control.
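
For context on what an analytic anisotropic microfacet statistic looks like, the sketch below evaluates the standard anisotropic GGX normal distribution function. This is the classic textbook formula, shown only as background; it is not the thesis's new derivation, which generalizes to arbitrary distributions and tangential linear transforms.

```python
# The classic anisotropic GGX normal distribution function (NDF), the kind of
# closed-form microfacet statistic that the work above generalizes.
import numpy as np

def ggx_aniso_ndf(m, alpha_x, alpha_y):
    """D(m) for a half-vector m = (x, y, z) in tangent space, z = normal."""
    x, y, z = m
    if z <= 0.0:
        return 0.0
    denom = (x / alpha_x)**2 + (y / alpha_y)**2 + z**2
    return 1.0 / (np.pi * alpha_x * alpha_y * denom**2)

m = np.array([0.1, 0.2, 0.97])
m /= np.linalg.norm(m)
print(ggx_aniso_ndf(m, alpha_x=0.1, alpha_y=0.4))  # highlight stretched along y
```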

Item Efficient image-based rendering (2023-07-12) Bemana, Mojtaba
Recent advancements in real-time ray tracing and deep learning have significantly enhanced the realism of computer-generated images. However, conventional 3D computer graphics (CG) can still be time-consuming and resource-intensive, particularly when creating photo-realistic simulations of complex or animated scenes. Image-based rendering (IBR) has emerged as an alternative approach that utilizes pre-captured images from the real world to generate realistic images in real time, eliminating the need for extensive modeling. Although IBR has its advantages, it faces challenges in providing the same level of control over scene attributes as traditional CG pipelines and in accurately reproducing complex scenes and objects with different materials, such as transparent objects. This thesis endeavors to address these issues by harnessing the power of deep learning and incorporating the fundamental principles of graphics and physically-based rendering. It offers an efficient solution that enables interactive manipulation of real-world dynamic scenes captured from sparse views, lighting positions, and times, as well as a physically-based approach that facilitates accurate reproduction of the view-dependent effects resulting from the interaction between transparent objects and their surrounding environment. Additionally, this thesis develops a visibility metric that can identify artifacts in reconstructed IBR images without observing the reference image, thereby contributing to the design of an effective IBR acquisition pipeline. Lastly, a perception-driven rendering technique is developed to provide high-fidelity visual content on virtual reality displays while retaining computational efficiency.

Item Fully Controllable Data Generation For Realtime Face Capture (ETH Zurich, 2023-01-31) Chandran, Prashanth
Data-driven realtime face capture has gained considerable momentum in the last few years thanks to deep neural networks that leverage specialized datasets to speed up the acquisition of face geometry and appearance. However, generalizing such neural solutions to generic in-the-wild face capture remains a challenge due to the lack of, or a means to generate, a high-quality in-the-wild face database with all forms of ground truth (geometry, appearance, environment maps, etc.). In this thesis, we recognize this data bottleneck and propose a comprehensive framework for controllable, high-quality, in-the-wild data generation that can support present and future applications in face capture. We approach this problem in four stages, starting with the building of a high-quality 3D face database consisting of a few hundred subjects in a studio setting. This database serves as a strong prior for 3D face geometry and appearance for several methods discussed in this thesis. To build this 3D database and to automate the registration of scans to a template mesh, we propose the first deep facial landmark detector capable of operating on 4K-resolution imagery while also achieving state-of-the-art performance on several in-the-wild benchmarks. Our second stage leverages the proposed 3D face database to build powerful nonlinear 3D morphable models for static geometry modelling and synthesis. We propose the first semantic deep face model, which combines the semantic interpretability of traditional 3D morphable models with the nonlinear expressivity of neural networks. We later extend this semantic deep face model with a novel transformer-based architecture and propose the Shape Transformer for representing and manipulating face shapes irrespective of their mesh connectivity. The third stage of our data generation pipeline extends the approaches for static geometry synthesis to support facial deformations across time, so as to synthesize dynamic performances. To synthesize facial performances, we propose two parallel approaches, one involving performance retargeting and another based on a data-driven 4D (3D + time) morphable model. We propose a local, anatomically constrained facial performance retargeting technique that uses only a handful of blendshapes (20 shapes) to achieve production-quality results. This retargeting technique can readily be used to create novel animations for any given actor via animation transfer. Our second contribution for generating facial performances is a transformer-based 4D autoencoder that encodes a sequence of expression blend weights into a learned performance latent space. Novel performances can then be generated at inference time by sampling this learned latent space. The fourth and final stage of our data generation pipeline involves the creation of photorealistic imagery to go along with the facial geometry and animations synthesized thus far. We propose a hybrid rendering approach that leverages state-of-the-art techniques for ray-traced skin rendering and a pretrained 2D generative model for photorealistic and consistent inpainting of the skin renders. Our hybrid rendering technique allows for the creation of an infinite number of training samples in which the user has full control over the facial geometry, appearance, lighting, and viewpoint. The techniques presented in this thesis will serve as the foundation for creating large-scale photorealistic in-the-wild face datasets to support the next generation of realtime face capture.
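
Wherever "expression blend weights" appear above, the underlying construction is the standard linear blendshape rig: a face is the neutral mesh plus a weighted sum of shape deltas. The sketch below shows that generic textbook evaluation with random placeholder data; the thesis's retargeting adds local anatomical constraints on top of a small basis like this.

```python
# Generic linear blendshape evaluation (not the thesis's constrained method).
import numpy as np

num_vertices = 5000
rng = np.random.default_rng(0)
neutral = rng.standard_normal((num_vertices, 3))             # placeholder mesh
deltas = rng.standard_normal((20, num_vertices, 3)) * 0.01   # 20 shape deltas

def evaluate_face(weights):
    """weights: (20,) blend weights, typically in [0, 1]."""
    return neutral + np.tensordot(weights, deltas, axes=1)   # (num_vertices, 3)

smile = np.zeros(20)
smile[3] = 0.8   # hypothetical "smile" shape at index 3
vertices = evaluate_face(smile)
```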

Item Hierarchical Gradient Domain Vector Field Processing (Johns Hopkins University, 2023-07-31) Lee, Sing Chun
Vector fields are a fundamental mathematical construct for describing flow-field-related problems in science and engineering. To solve these types of problems effectively on a discrete surface, various vector field representations have been proposed, using finite-dimensional bases, a discrete connection, and an operator approach. Furthermore, for computational efficiency, a quadratic Dirichlet energy is preferred for measuring the smoothness of the vector field in the gradient domain. However, while a quadratic energy gives a simple linear system, it does not support real-time vector field processing on a high-resolution mesh without extensive GPU parallelization. To this end, this dissertation describes an efficient hierarchical solver for vector field processing. Our method extends the successful multigrid design for interactive signal processing on meshes with an induced vector field prolongation, combining it with novel speedup techniques. We formulate a general way of extending scalar field prolongation to vector fields. Focusing on triangle meshes, our convergence study finds that a standard multigrid solver does not achieve fast convergence due to the poorly conditioned system matrix. We observe similar performance in standard single-level iterative methods such as the Jacobi, Gauss-Seidel, and conjugate gradient methods. Therefore, we compare three speedup techniques (successive over-relaxation, smoothed prolongation, and Krylov subspace update) and incorporate them into our solver. Finally, we demonstrate our solver on useful applications such as logarithmic map computation and discuss its application to other hierarchies such as texture grids, followed by the conclusion and future work.
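
Of the three speedup techniques named above, successive over-relaxation (SOR) is the simplest to illustrate. The sketch below is the textbook iteration on a small dense symmetric positive-definite system; the dissertation applies it to vector field systems on meshes, which this toy does not reproduce.

```python
# Textbook successive over-relaxation (SOR) for Ax = b.
import numpy as np

def sor(A, b, omega=1.5, iters=200):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# Small SPD test system (1D Laplacian-like); Gauss-Seidel is omega = 1,
# while a well-chosen omega > 1 converges faster on such systems.
A = 2 * np.eye(6) - np.eye(6, k=1) - np.eye(6, k=-1)
b = np.ones(6)
print(sor(A, b, omega=1.5))
```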

Item Image Classification of High Variant Objects in Fast Industrial Applications (TUprints, 2023-11-13) Siegmund, Dirk
Recent advances in machine learning and image processing have expanded the applications of computer vision in many industries. In industrial applications, image classification is a crucial task, since high variant objects present difficult problems because of their variety and constantly changing attributes. Computer vision algorithms can function effectively in complex environments, working alongside human operators to enhance efficiency and data accuracy. However, there are still many industries facing automation difficulties that have not yet been properly solved and put into practice; they need more accurate, more convenient, and faster methods. These needs drove my interest in combining multiple learning strategies, as well as sensors and image formats, to enable the use of computer vision for these applications. The motivation for this work is to answer a number of research questions that aim to mitigate current problems that hinder practical application. This work therefore aims to present solutions that contribute to enabling these applications. I demonstrate why standard methods cannot simply be applied to an existing problem: each method must be customized to the specific application scenario in order to obtain a working solution. One example is face recognition, where classification performance is crucial for the system's ability to correctly identify individuals. Additional features would allow higher accuracy, robustness, and safety, and would make presentation attacks more difficult. The detection of attempted attacks is critical for the acceptance of such systems and significantly impacts the applicability of biometrics. Another application is tailgating detection at automated entrance gates. Especially in high-security environments, it is important to prevent authorized persons from taking an unauthorized person into the secured area. There is a plethora of seemingly suitable technology, but several practical factors increase or decrease applicability depending on which method is used. The third application covered in this thesis is the classification of textiles when they are not spread out. Finding certain properties on them is complex, as these properties might be inside a fold, or differ in appearance because of shadows and position. The first part of this work provides an in-depth analysis of the three individual applications, including the background information needed to understand the research topic and the proposed solutions, and covers the state of the art in all researched application areas. In the second part of this work, methods are presented to facilitate or enable the industrial applicability of the presented applications. New image databases are initially presented for all three application areas. In the case of biometrics, three methods that identify and improve specific performance parameters are shown. It is shown how melanin face pigmentation (MFP) features can be extracted and used for classification in face recognition and presentation attack detection (PAD) applications. In the entrance control application, the focus is on the sensor information, with six methods presented in detail. This includes the use of thermal images to detect humans based on their body heat, depth images in the form of RGB-D images and 2D image series, as well as data from a floor-mounted sensor grid. For textile defect detection, several methods and a novel classification procedure in free fall are presented. In summary, this work examines computer vision applications for their practical industrial applicability and presents solutions to mitigate the identified problems. In contrast to previous work, the proposed approaches are (a) effective in improving classification performance, (b) fast in execution, and (c) easily integrated into existing processes and equipment.

Item Inverse Shape Design with Parametric Representations: Kirchhoff Rods and Parametric Surface Models (Institute of Science and Technology Austria, 2023-05) Hafner, Christian
Inverse design problems in fabrication-aware shape optimization are typically solved on discrete representations such as polygonal meshes. This thesis argues that there are benefits to treating these problems in the same domain as human designers, namely, the parametric one. One reason is that discretizing a parametric model usually removes the capability of making further manual changes to the design, because the human intent is captured by the shape parameters. Beyond this, knowledge about a design problem can sometimes reveal a structure that is present in a smooth representation but is fundamentally altered by discretizing. In this case, working in the parametric domain may even simplify the optimization task. We present two lines of research that explore both of these aspects of fabrication-aware shape optimization on parametric representations. The first project studies the design of plane elastic curves and Kirchhoff rods, which are common mathematical models for describing the deformation of thin elastic rods such as beams, ribbons, cables, and hair. Our main contribution is a characterization of all curved shapes that can be attained by bending and twisting elastic rods whose stiffness is allowed to vary along the length. Elements like these can be manufactured using digital fabrication devices such as 3D printers and digital cutters, and have applications in free-form architecture and soft robotics. We show that the family of curved shapes that can be produced this way admits a geometric description that is concise and computationally convenient. In the case of plane curves, the geometric description is intuitive enough to allow a designer to determine whether a curved shape is physically achievable by visual inspection alone. We also present shape optimization algorithms that convert a user-defined curve in the plane or in three dimensions into the geometry of an elastic rod that will naturally deform to follow this curve when its endpoints are attached to a support structure. Implemented in an interactive software design tool, the rod geometry is generated in real time as the user edits a curve, enabling fast prototyping. The second project tackles the problem of general-purpose shape optimization on CAD models using a novel variant of the extended finite element method (XFEM). Our goal is to decouple the simulation mesh from the CAD model, so that no geometry-dependent meshing or remeshing needs to be performed when the CAD parameters change during optimization. This is achieved by discretizing the embedding space of the CAD model and using a new high-accuracy numerical integration method to enable XFEM on free-form elements bounded by the parametric surface patches of the model. Our simulation is differentiable from the CAD parameters to the simulation output, which enables us to use off-the-shelf gradient-based optimization procedures. The result is a method that fits seamlessly into the CAD workflow, because it works on the same representation as the designer, enabling the alternation of manual editing and fabrication-aware optimization at will.
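
To give a feel for the plane-curve characterization, recall that for a rod in equilibrium under end loads alone, moment balance forces the bending moment K(s)·kappa(s) to be an affine function of position along the curve; a curve is then achievable with positive varying stiffness if some affine field has the same sign as the curvature everywhere. The sketch below is a hedged numerical check of that condition under our reading of the result; the constants a and b are hypothetical, and the thesis gives the rigorous statement.

```python
# Hedged realizability check for a plane elastic curve with varying stiffness:
# look for an affine field f(x, y) = a.x + b with K = f(gamma)/kappa > 0.
import numpy as np

t = np.linspace(0.2, 2.5, 400)
curve = np.stack([t, np.sin(t)], axis=1)           # sampled plane curve gamma

d1 = np.gradient(curve, axis=0)
d2 = np.gradient(d1, axis=0)
kappa = (d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / \
        np.linalg.norm(d1, axis=1)**3              # signed curvature

a, b = np.array([0.0, 1.0]), -2.0                  # hypothetical affine field
stiffness = (curve @ a + b) / kappa                # required stiffness profile
print("realizable with this (a, b):", bool(np.all(stiffness > 0)))
```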

Item Latent Disentanglement for the Analysis and Generation of Digital Human Shapes (2023-06-30) Foti, Simone
Analysing and generating digital human shapes is crucial for a wide variety of applications ranging from movie production to healthcare. The most common approaches for the analysis and generation of digital human shapes involve the creation of statistical shape models. At the heart of these techniques is the definition of a mapping between shapes and a low-dimensional representation. However, making these representations interpretable is still an open challenge. This thesis explores latent disentanglement as a powerful technique to make the latent space of statistical shape models based on geometric deep learning more structured and interpretable. In particular, it introduces two novel techniques to disentangle the latent representation of variational autoencoders and generative adversarial networks with respect to the local shape attributes characterising the identity of the generated body and head meshes. This work was inspired by a shape completion framework that was proposed as a viable alternative to intraoperative registration in minimally invasive surgery of the liver. In addition, one of these methods for latent disentanglement was also applied to plastic surgery, where it was shown to improve the diagnosis of craniofacial syndromes and aid surgical planning.

Item Machine Learning Supported Interactive Visualization of Hybrid 3D and 2D Data for the Example of Plant Cell Lineage Specification (2023) Hong, Jiayi
As computer graphics technologies develop, spatial data can be better visualized in 3D environments, so that viewers can observe 3D shapes and positions clearly. Meanwhile, 2D abstract visualizations can present summarized information, visualize additional data, and control the 3D view. Combining these two parts in one interface can assist people in finishing complicated tasks, especially in scientific domains, though there is a lack of design guidelines for this kind of interaction. Generally, experts need to analyze large amounts of scientific data to finish challenging tasks. For example, in the biological field, biologists need to build the hierarchy tree for an embryo with more than 200 cells. In this case, manual work can be time-consuming and tedious, and machine learning algorithms have the potential to alleviate some of the tedious manual processes and serve as a basis for experts. These predictions, however, contain hierarchical and multi-layer information, and it is essential to visualize them sequentially and progressively so that experts can control their viewing pace and validation. Also, 3D and 2D representations, together with machine learning predictions, need to be visually and interactively connected in the system. In this thesis, we worked on the cell lineage problem for plant embryos as an example to investigate a visualization system, and its interaction design, that makes use of combinations of 3D and 2D representations as well as visualizations for machine learning. We first investigated 3D selection interaction techniques for the plant embryo. The cells in a plant embryo are tightly packed together, without any space in between, and traditional techniques can hardly deal with such an occlusion problem. We conducted a study to evaluate three different selection techniques and found that the combination of the Explosion Selection technique and the List Selection technique works well for people to access and observe plant cells in an embryo. These techniques can also be extended to other, similarly densely packed 3D data. Second, we explored the visualization and interaction design needed to combine the 3D visualizations of a plant embryo with its associated 2D hierarchy tree. We designed a system with such combinations for biologists to examine the plant cells and record the development history in the hierarchy tree. We support hierarchy building in two directions: constructing the history top-down using lasso selection in the 3D environment, and bottom-up as in the traditional workflow on the hierarchy tree. We also added a neural network model to give predictions about the assignments for biologists to start with. We conducted an evaluation with biologists, which showed that both 3D and 2D representations help with making decisions, and that the tool can inspire insights for them. One main drawback was that the performance of the machine learning model was not ideal. Thus, to assist the process and enhance the model performance, in an improved version of our system, we trained five different ML models and visualized the predictions and their associated uncertainty. We performed a study, and the results indicated that our designed ML representations are easy to understand and that the new tool can effectively improve the efficiency of assigning the cell lineage.

Item Modeling and simulating virtual terrains (2023-03-21) Paris, Axel
This PhD thesis, entitled "Modeling and simulating virtual terrains", is related to digital content creation and geological simulation in the context of virtual terrains. Real terrains exhibit landforms at different scales (namely microscale, mesoscale, and macroscale), formed by multiple interconnected physical processes operating at various temporal and spatial scales. On a computer, landforms are usually represented by elevation models, but features such as arches and caves require a volumetric representation. The increasing need for realism and larger worlds brings new challenges that existing techniques do not fulfill. This thesis is organized in two parts. First, we observe that several macroscale landforms, such as desert landscapes made of sand dunes, and meandering rivers, simply cannot be modeled by existing techniques. Thus, we develop new simulations, inspired by research in geomorphology, to generate these landforms. We particularly focus on the plausibility of our results and on user control, which is a key requirement in computer graphics. In the second part, we address the modeling and generation of volumetric landforms in virtual terrains. Existing models are often based on voxels and have a high memory impact, which forbids their use at a large scale. Instead, we develop a new model based on signed distance functions for representing volumetric landforms, such as arches, overhangs, and caves, with a low memory footprint. We show that this representation is adapted to generating volumetric landforms across a range of scales (microscale, mesoscale, and macroscale).
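
As background on why signed distance functions suit volumetric landforms, the sketch below builds a toy overhang-with-cave by blending a heightfield with a rock mass and carving a cavity, using generic SDF idioms (smooth union, subtraction). The primitives and constants are illustrative, not the thesis's actual model.

```python
# Minimal signed distance function (SDF) sketch of a volumetric landform.
import numpy as np

def sd_sphere(p, center, r):
    return np.linalg.norm(p - center, axis=-1) - r

def smooth_union(d1, d2, k=0.5):
    # Polynomial smooth-min blend (a common SDF trick).
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1 - h) + d1 * h - k * h * (1 - h)

def terrain_sdf(p):
    ground = p[..., 2] - 0.3 * np.sin(2.0 * p[..., 0])   # wavy heightfield
    bump = sd_sphere(p, np.array([1.0, 0.0, 0.2]), 0.6)  # rock mass
    solid = smooth_union(ground, bump, k=0.3)
    cavity = sd_sphere(p, np.array([1.0, 0.0, -0.1]), 0.4)
    return np.maximum(solid, -cavity)                    # carve a cave

p = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
print(terrain_sdf(p))   # negative = inside rock, positive = air
```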

Item Neural Mesh Reconstruction (Simon Fraser University, 2023-06-16) Chen, Zhiqin
Deep learning has revolutionized the field of 3D shape reconstruction, unlocking new possibilities and achieving superior performance compared to traditional methods. However, despite being the dominant 3D shape representation in real-world applications, polygon meshes have been severely underutilized as a representation for output shapes in neural 3D reconstruction methods. One key reason is that triangle tessellations are irregular, which poses challenges for generating them using neural networks. Therefore, it is imperative to develop algorithms that leverage the power of deep learning while generating output shapes in polygon mesh formats for seamless integration into real-world applications. In this thesis, we propose several data-driven approaches to reconstruct explicit meshes from diverse types of input data, aiming to address this challenge. Drawing inspiration from classical data structures and algorithms in computer graphics, we develop representations that effectively encode meshes within neural networks. First, we introduce BSP-Net. Inspired by the classical Binary Space Partitioning (BSP) data structure, we represent a 3D shape as a union of convex primitives, where each convex primitive is obtained by intersecting half-spaces. This 3-layer BSP-tree representation allows a shape to be stored in a 3-layer multilayer perceptron (MLP) as a neural implicit, while an exact polygon mesh can be extracted from the MLP weights by parsing the underlying BSP-tree. BSP-Net is the first deep neural network able to natively produce compact and watertight polygon meshes, and the generated meshes are capable of representing sharp geometric features. We demonstrate its effectiveness on the task of single-view 3D reconstruction. Next, we introduce a series of works that reconstruct explicit meshes by storing meshes in regular grid structures. We present Neural Marching Cubes (NMC), a data-driven algorithm for reconstructing meshes from discretized implicit fields. NMC is built upon Marching Cubes (MC), but it learns the vertex positions and local mesh topologies from example training meshes, thereby avoiding topological errors and achieving better reconstruction of geometric features, especially sharp features such as edges and corners, compared to MC and its variants. In our subsequent work, Neural Dual Contouring (NDC), we replace the MC meshing algorithm with a slightly modified Dual Contouring (DC), so that our algorithm can reconstruct meshes from both signed inputs, such as signed distance fields or binary voxels, and unsigned inputs, such as unsigned distance fields or point clouds, with high accuracy and fast inference speed in a unified framework. Furthermore, inspired by the volume rendering algorithm in Neural Radiance Fields (NeRF), we introduce differentiable rendering to NDC to arrive at MobileNeRF, a NeRF-based method for reconstructing objects and scenes as triangle meshes with view-dependent textures from multi-view images. Thanks to its explicit mesh representation, MobileNeRF is the first NeRF-based method able to run on mobile phones and AR/VR platforms, demonstrating its efficiency and compatibility on common devices.
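
The BSP-Net evaluation described above has a compact structure worth sketching: one layer evaluates half-spaces, intersection of selected half-spaces is a max over their signed distances, and union of the resulting convexes is a min. The sketch below uses random stand-ins for the planes and selection matrix, not trained network weights.

```python
# Sketch of a BSP-Net-style implicit evaluation: half-spaces -> convexes
# (max) -> shape (min). Random placeholders stand in for learned weights.
import numpy as np

rng = np.random.default_rng(1)
P, C = 16, 4                                 # half-spaces / convex primitives
planes = rng.standard_normal((P, 4))         # (n_x, n_y, n_z, d) per half-space
selection = rng.random((C, P)) < 0.3         # which planes form each convex

def bsp_inside(points):
    """points: (N, 3). Returns a value <= 0 where a point is inside."""
    h = points @ planes[:, :3].T + planes[:, 3]        # (N, P) plane distances
    convexes = np.stack([
        np.max(h[:, sel], axis=1) if sel.any() else np.full(len(points), np.inf)
        for sel in selection
    ], axis=1)                                         # intersection = max
    return np.min(convexes, axis=1)                    # union = min

pts = rng.standard_normal((5, 3))
print(bsp_inside(pts) <= 0.0)                # inside/outside test per point
```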

Item Neural Networks for Digital Materials and Radiance Encoding (2023-07) Rodriguez-Pardo, Carlos
Realistic virtual scenes are becoming increasingly prevalent in our society, with a wide range of applications in areas such as manufacturing, architecture, fashion design, and entertainment, including movies, video games, and augmented and virtual reality. Generating realistic images of such scenes requires highly accurate illumination, geometry, and material models, which can be time-consuming and challenging to obtain. Traditionally, such models have often been created manually by skilled artists, but this process can be prohibitively time-consuming and costly. Alternatively, real-world examples can be captured, but this approach presents additional challenges in terms of accuracy and scalability. Moreover, while realism and accuracy are crucial in such processes, rendering efficiency is also a key requirement, so that lifelike images can be generated at the speed required by many real-world applications. One of the most significant challenges in this regard is the acquisition and representation of materials, which are a critical component of our visual world and, by extension, of virtual representations of it. However, existing approaches to material acquisition and representation are limited in terms of efficiency and accuracy, which limits their real-world impact. To address these challenges, data-driven approaches that leverage machine learning may provide viable solutions. Nevertheless, designing and training machine learning models that meet all these competing requirements remains a challenging task, requiring careful consideration of trade-offs between quality and efficiency. In this thesis, we propose novel learning-based solutions to address several key challenges in physically-based rendering and material digitization. Our approach leverages various forms of neural networks to introduce innovative algorithms for radiance encoding and digital material generation, editing, and estimation. First, we present a visual attribute transfer framework for digital materials that can effectively generalize to new illumination conditions and geometric distortions. We showcase a use case of this method for high-resolution material acquisition using a custom device. Additionally, we propose a generative model capable of synthesizing tileable textures from a single input image, which helps improve the quality of material rendering. Building upon recent work in neural fields, we also introduce a material representation that accurately encodes material reflectance while offering powerful editing and propagation capabilities. In addition to reflectance, we present a novel method for global illumination encoding that leverages carefully designed generative models to achieve significantly faster sampling than previous work. Finally, we propose two innovative methods for low-cost material digitization. With flatbed scanners as our capture device, we present a generative model that can provide high-resolution material reflectance estimations using a single image as input, together with an uncertainty quantification algorithm that increases its reliability and efficiency. Additionally, we present a novel method for digitizing fabric mechanical properties using depth images as input, which we extend with a perceptually-validated drape similarity metric. Overall, the contributions of this thesis represent significant advances in the fields of radiance encoding and digital material acquisition and editing, enhancing the quality, scalability, and efficiency of physically-based rendering pipelines.

Item New tools for modelling sensor data (2023-06-30) López Ruiz, Alfonso
The objective of this thesis is to develop a framework capable of handling multiple data sources by correcting and fusing them to monitor, predict, and optimize real-world processes. The scope is not limited to images but also covers the reconstruction of 3D point clouds integrating visible, multispectral, thermal, and hyperspectral data. However, working with real-world data is also tedious, as it involves multiple steps that must be performed manually, such as collecting data, marking control points, or annotating points. An alternative is to generate synthetic data from realistic scenarios, thus avoiding the acquisition of prohibitively expensive technology and efficiently constructing large datasets. In addition, models in virtual scenarios can be attached to semantic annotations and materials, among other properties. Unlike manual annotations, synthetic datasets do not introduce spurious information that could mislead the algorithms that will use them. Remotely sensed images, albeit showing notable radiometric changes, can be fused by optimizing the correlation among them. This thesis exploits the Enhanced Correlation Coefficient (ECC) image-matching algorithm to overlap visible, multispectral, and thermal data. Then, multispectral and thermal data are projected into a dense RGB point cloud reconstructed with photogrammetry. By projecting rather than directly reconstructing, the aim is to achieve geometrically accurate and dense point clouds from low-resolution imagery. In addition, this methodology is notably more efficient than GPU-based photogrammetry in commercial software. Radiometric data are ensured to be correct by identifying the occlusion of points as well as by minimizing the dissimilarity of aggregated data from the starting samples. Hyperspectral data are, on the other hand, projected over 2.5D point clouds with a pipeline adapted to push-broom scanning. The hyperspectral swaths are geometrically corrected and overlapped to compose an orthomosaic, which is then projected over a voxelized point cloud. Due to the large volume of the resulting hypercube, it is compressed following a stack-based representation in the radiometric dimension. Real-time rendering of the compressed hypercube is enabled by iteratively constructing an image over a few frames, thus reducing per-frame overhead. The generation of synthetic data, in contrast, is focused on LiDAR technology. The baseline of this simulation is the indexing of highly detailed scenarios in state-of-the-art ray-tracing data structures that help to rapidly solve ray-triangle intersections. From here, random and systematic errors are introduced, such as outliers, jittering of rays, and return losses, among others. In addition, the construction of large LiDAR datasets is supported by the procedural generation of scenes that can be enriched with semantic annotations and materials. Airborne and terrestrial scans are parameterized so that they can be fed with datasheets from commercial sensors. The airborne scans integrate several scan geometries, whereas the intensity of returns is estimated with BRDF databases that collect samples from a gonio-photometer. In addition, the simulated LiDAR can operate at different wavelengths, including bathymetric ones, and emulates multiple returns. This thesis is concluded by showing the benefits of fused data and synthetic datasets with three case studies. The LiDAR simulation is employed to optimize scanning plans in buildings, using local searches to determine optimal scan locations while minimizing the number of required scans with the help of genetic algorithms. These metaheuristics are guided by four objective functions that evaluate the accuracy, coverage, detail, and overlap of the LiDAR scans. Then, thermal infrared point clouds and orthorectified maps are used to locate buried remains and reconstruct the structure of a poorly conserved archaeological site, highlighting the potential of remotely sensed data to support the preservation of cultural heritage. Finally, hyperspectral data are corrected and transformed to train a convolutional neural network in pursuit of classifying different grapevine varieties.
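
The ECC image-matching step mentioned above has a widely used implementation in OpenCV. The hedged sketch below aligns a thermal image to an RGB reference with an affine ECC warp; the file names are placeholders, and the motion model, iteration counts, and smoothing size are illustrative choices rather than the thesis's settings (which this sketch does not claim to reproduce).

```python
# Multimodal alignment with the Enhanced Correlation Coefficient (ECC)
# algorithm as implemented in OpenCV (cv2.findTransformECC).
import cv2
import numpy as np

rgb = cv2.imread("rgb.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

warp = np.eye(2, 3, dtype=np.float32)          # initial affine warp
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
cc, warp = cv2.findTransformECC(rgb, thermal, warp, cv2.MOTION_AFFINE,
                                criteria, None, 5)  # mask=None, gaussFiltSize=5
aligned = cv2.warpAffine(thermal, warp, (rgb.shape[1], rgb.shape[0]),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
print("ECC correlation:", cc)
```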

Item Optimization of Photogrammetric 3D Assets (Università degli Studi di Milano, 2023) Maggiordomo, Andrea
The photogrammetric 3D reconstruction pipeline has emerged in recent years as the prominent technology for a variety of use cases, from heritage documentation to the authoring of high-quality photo-realistic assets for interactive graphics applications. However, the reconstruction process is prone to introducing modeling artifacts, and models generated with such tools often require careful post-processing before being ready for use in downstream applications. In this thesis, we face the problem of optimizing 3D models from this particular data source under several quality metrics. We begin by providing an analysis of their distinguishing features, focusing on the defects stemming from the automatic reconstruction process, and then we propose robust methods to optimize high-resolution 3D models under several objectives, addressing some of the identified recurring defects. First, we introduce a method to reduce the fragmentation of photo-reconstructed texture atlases, producing a more compact UV-map encoding with significantly fewer seams while minimizing the resampling artifacts introduced when synthesizing the new texture images. Next, we design a method to address texturing defects. Our technique combines recent advancements in image inpainting achieved by modern CNN-based models with the definition of a local texture-space inpainting domain that is semantically coherent, overcoming the limitations of alternative texture inpainting strategies that operate in screen space or in global texture space. Finally, we propose a method to efficiently represent densely tessellated 3D data by adopting a recently introduced efficient representation of displacement-mapped surfaces that benefits from GPU hardware support and does not require explicit parametrization of the 3D surface. Our approach generates a compact representation of high-resolution textured models, augmented with a texture map that is optimized and tailored to the displaced surface.

Item Path from Photorealism to Perceptual Realism (2022-09) Zhong, Fangcheng
Photorealism in computer graphics, that is, rendering images that appear as realistic as photographs, has matured to the point that it is now widely used in industry. With emerging 3D display technologies, the next big challenge in graphics is to achieve perceptual realism: producing virtual imagery that is perceptually indistinguishable from real-world 3D scenes. Such a significant upgrade in the level of realism offers highly immersive and engaging experiences that have the potential to revolutionise numerous aspects of life and society, including entertainment, social connections, education, business, scientific research, engineering, and design. While perceptual realism puts strict requirements on the quality of reproduction, the virtual scene does not have to be identical in light distribution to its physical counterpart to be perceptually realistic, provided that it is visually indistinguishable to human eyes. Owing to the limitations of human vision, a significant improvement in perceptual realism can, in principle, be achieved by fulfilling the essential visual requirements with sufficient quality, without having to reconstruct the physically accurate distribution of light. In this dissertation, we start by discussing the capabilities and limits of the human visual system, which serve as a basis for the analysis of the essential visual requirements for perceptual realism. Next, we introduce a Perceptually Realistic Graphics (PRG) pipeline consisting of the acquisition, representation, and reproduction of the plenoptic function of a 3D scene. Finally, we demonstrate that taking advantage of the limits and mechanisms of the human visual system can significantly improve this pipeline. Specifically, we present three approaches that push the quality of virtual imagery towards perceptual realism. First, we introduce DiCE, a real-time rendering algorithm that exploits the binocular fusion mechanism of the human visual system to boost the perceived local contrast of stereoscopic displays. The method was inspired by an established model of binocular contrast fusion. To optimise the experience of binocular fusion, we proposed and empirically validated a rivalry-prediction model that better controls rivalry. Next, we introduce Dark Stereo, another real-time rendering algorithm that facilitates depth perception from binocular depth cues on stereoscopic displays, especially those under low luminance. The algorithm was designed based on a proposed model of stereo constancy that predicts the precision of binocular depth cues for a given contrast and luminance. Both DiCE and Dark Stereo have been experimentally demonstrated to be effective in improving realism. Their real-time performance also makes them readily integrable into any existing VR rendering pipeline. Nonetheless, improving rendering alone is not sufficient to meet all the visual requirements for perceptual realism. The overall fidelity of a typical stereoscopic VR display is still confined by its limited dynamic range, low spatial resolution, optical aberrations, and vergence-accommodation conflicts. To push the limits of overall fidelity, we present a High-Dynamic-Range Multi-Focal Stereo display (HDRMFS display) with an end-to-end imaging and rendering system. The system can visually reproduce real-world 3D objects with high resolution, accurate colour, a wide dynamic range and contrast, and most depth cues, including binocular disparity and focal depth cues, and it permits a direct comparison between real and virtual scenes. It is the first work to achieve a close perceptual match between a physical 3D object and its virtual counterpart. The fidelity of reproduction has been confirmed by a Visual Turing Test (VTT), in which naive participants failed to discern any difference between the real and virtual objects in more than half of the trials. The test provides insights for better understanding the conditions necessary to achieve perceptual realism. In the long term, we foresee this system as a crucial step in the development of perceptually realistic graphics, not only for achieving an unprecedented level of quality but also as a fundamental approach that can effectively identify bottlenecks and direct future studies in perceptually realistic graphics.

Item Physically Based Modeling of Micro-Appearance (University of Bonn, 2023-07-11) Huang, Weizhen
This dissertation addresses the challenges of creating photorealistic images by focusing on generating and rendering microscale details and irregularities, because a lack of such imperfections is often the key clue that distinguishes a synthetic, computer-generated image from a photograph. In Chapter 3, we model the fluid flow on soap bubbles, which owe their iridescent beauty to their micrometer-scale thickness. Instead of approximating the variation in film thickness with random noise textures, this work incorporates the underlying mechanics that drive such fluid flow, namely the Navier-Stokes equations, including factors such as surfactant concentration, Marangoni surface tension, and evaporation. We address challenges such as the singularity at the poles in spherical coordinates and the need for extremely small step sizes in a stiff system to simulate a wide range of dynamic effects. As a result, our approach produces soap bubble renderings that match real-world footage. Chapter 4 explores hair rendering. Existing models based on the Marschner model split the scattering function into a longitudinal and an azimuthal component. While this separation benefits importance sampling, it lacks a physical basis and does not match measurements. We propose a novel physically based hair scattering model, representing hair as cylinders with microfacet roughness. We reveal that the focused highlight in the forward-scattering direction observed in measurements is a result of the rough cylindrical geometry itself. Additionally, our model naturally extends to elliptical hair fibers. A closely related topic, feather rendering, is discussed in Chapter 5. Unlike human hairs, feathers possess unique substructures, such as barbs and barbules with irregular cross-sections; the existing pipeline of modeling feathers with hair shaders therefore fails to accurately describe their appearance. We propose a model that directly accounts for these multi-scale geometries by representing feathers as collections of barb primitives and incorporating the contributions of barbule cross-sections using a normal distribution function. We demonstrate the effectiveness of our model on rock dove neck feathers, showing close alignment with measurements and photographs.
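
Why micrometer-scale film thickness produces iridescence can be illustrated with textbook thin-film interference. The sketch below evaluates the Airy summation for a single soap film in air; it is standard optics shown for context, unrelated to the dissertation's fluid simulation itself, and the thickness and wavelengths are example values.

```python
# Thin-film (Airy) reflectance of a soap film in air at normal-ish incidence.
import numpy as np

def thin_film_reflectance(thickness_nm, wavelength_nm, n_film=1.33, cos_t=1.0):
    r12 = (1.0 - n_film) / (1.0 + n_film)    # air -> film amplitude coefficient
    r23 = (n_film - 1.0) / (n_film + 1.0)    # film -> air
    delta = 4.0 * np.pi * n_film * thickness_nm * cos_t / wavelength_nm
    r = (r12 + r23 * np.exp(1j * delta)) / (1.0 + r12 * r23 * np.exp(1j * delta))
    return np.abs(r)**2

for lam in (450.0, 550.0, 650.0):            # blue, green, red
    print(lam, thin_film_reflectance(500.0, lam))  # a 500 nm film tints the color
```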

Item Point Based Representations for Novel View Synthesis (2023-11-03) Kopanas, Georgios
The primary goal of inverse rendering is to recover 3D information from a set of 2D observations, usually a set of images or videos. Observing a 3D scene from different viewpoints can provide rich information about the underlying geometry, materials, and physical properties of the objects. Having access to this information enables many downstream applications. In this dissertation, we focus on free-viewpoint navigation and novel view synthesis, which is the task of re-rendering captured 3D scenes from unobserved viewpoints. The field has gained incredible momentum since the introduction of Neural Radiance Fields (NeRFs). While NeRFs achieve exceptional image quality for novel view synthesis, this is not the only reason they have attracted such high engagement from the community. Another important property is the simplicity of their optimization, since they frame the problem of 3D reconstruction as a continuous optimization over the parameters of a scene representation with a simple photometric objective function. In this thesis, we keep these two advantages of NeRFs and propose points as a new way to represent radiance fields that not only achieves state-of-the-art image quality but also renders in real time at over 100 frames per second. Our solution also offers fast training with a tractable memory footprint, and is easily integrated into graphics engines. In a traditional image-based rendering context, we propose a point-based representation with a differentiable rasterization pipeline that optimizes geometry and appearance to achieve high visual quality for novel view synthesis. Next, we use points to tackle highly reflective curved objects, arguably one of the hardest cases of novel view synthesis, by learning the trajectory of reflections. In our latest work, we show for the first time that points, augmented to become anisotropic 3D Gaussians, can maintain the differentiable properties of NeRFs while also recovering higher-frequency signals and representing empty space more efficiently. At the same time, their Lagrangian nature, and the fact that our most recent method manages to omit neural networks entirely, gives us an explicit and interpretable geometry and appearance representation. Finally, this dissertation also briefly touches on two other topics. The first is how to place cameras to efficiently capture a scene for the purpose of 3D reconstruction; in complicated, non-object-centric environments, we provide theoretical and practical intuition on how to place cameras that allow good reconstruction. The second topic is motivated by the recent success of generative models: we study how to utilize point-based representations with diffusion models. Current methods impose extreme limitations on the number of points. In exploratory work, we suggest an architecture that leverages multi-view information as a tool to decorrelate the number of points from the speed and performance of the model. We conclude this dissertation by reflecting on the work performed throughout the thesis and sketching some exciting directions for future work.
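
A key step behind rendering anisotropic 3D Gaussians is projecting each Gaussian's 3D covariance to a 2D screen-space covariance via the Jacobian of the camera projection (the EWA splatting construction). The hedged sketch below shows only that projection step with toy numbers; the full methods discussed above additionally handle sorting, alpha blending, and optimization, none of which is reproduced here.

```python
# Projecting a 3D Gaussian's covariance to screen space (EWA-style).
import numpy as np

def project_covariance(cov3d, mean_cam, focal):
    """cov3d: (3,3) camera-space covariance; mean_cam: (3,) camera-space mean."""
    x, y, z = mean_cam
    # Jacobian of the perspective projection (f*X/Z, f*Y/Z) at the mean.
    J = np.array([[focal / z, 0.0, -focal * x / z**2],
                  [0.0, focal / z, -focal * y / z**2]])
    return J @ cov3d @ J.T                     # (2,2) screen-space covariance

# An elongated (anisotropic) Gaussian, one unit off-axis and three units deep.
R = np.eye(3)                                  # rotation (identity for the toy)
S = np.diag([0.30, 0.05, 0.05])                # per-axis scales: long along x
cov3d = R @ S @ S.T @ R.T                      # covariance = R S S^T R^T
print(project_covariance(cov3d, mean_cam=np.array([1.0, 0.0, 3.0]), focal=500.0))
```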