40-Issue 4

Saarbrücken, Germany & Virtual | 29 June – 2 July 2021
(The Rendering - DL-only track is available in a separate collection.)
Denoising
Deep Compositional Denoising for High-quality Monte Carlo Rendering
Xianyao Zhang, Marco Manzi, Thijs Vogels, Henrik Dahlberg, Markus Gross, and Marios Papas
Real-time Monte Carlo Denoising with Weight Sharing Kernel Prediction Network
Hangming Fan, Rui Wang, Yuchi Huo, and Hujun Bao
Neural Rendering
Point-Based Neural Rendering with Per-View Optimization
Georgios Kopanas, Julien Philip, Thomas Leimkühler, and George Drettakis
DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks
Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H. Mueller, Chakravarty R. Alla Chaitanya, Anton S. Kaplanyan, and Markus Steinberger
Integration
Q-NET: A Network for Low-dimensional Integrals of Neural Proxies
Kartic Subr
Image and Video Editing
Video-Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs
Theo Thonat, Yagiz Aksoy, Miika Aittala, Sylvain Paris, Fredo Durand, and George Drettakis
PosterChild: Blend-Aware Artistic Posterization
Cheng-Kang Chao, Karan Singh, and Yotam Gingold
Differentiable Rendering
Unified Shape and SVBRDF Recovery using Differentiable Monte Carlo Rendering
Fujun Luan, Shuang Zhao, Kavita Bala, and Zhao Dong
High Performance Rendering
Rendering Point Clouds with Compute Shaders and Vertex Order Optimization
Markus Schütz, Bernhard Kerbl, and Michael Wimmer
Moving Basis Decomposition for Precomputed Light Transport
Ari Silvennoinen and Peter-Pike Sloan
Path Tracing, Monte Carlo Rendering
Optimised Path Space Regularisation
Philippe Weier, Marc Droske, Johannes Hanika, Andrea Weidlich, and Jiří Vorba
Material Models
An Analytic BRDF for Materials with Spherical Lambertian Scatterers
Eugene d'Eon
A Combined Scattering and Diffraction Model for Elliptical Hair Rendering
Alexis Benamira and Sumanta Pattanaik
Faces and Body
Deep Portrait Lighting Enhancement with 3D Guidance
Fangzhou Han, Can Wang, Hao Du, and Jing Liao

BibTeX (40-Issue 4)

@article{10.1111:cgf.14337,
  journal   = {Computer Graphics Forum},
  title     = {{Deep Compositional Denoising for High-quality Monte Carlo Rendering}},
  author    = {Zhang, Xianyao and Manzi, Marco and Vogels, Thijs and Dahlberg, Henrik and Gross, Markus and Papas, Marios},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14337}
}

@article{10.1111:cgf.14338,
  journal   = {Computer Graphics Forum},
  title     = {{Real-time Monte Carlo Denoising with Weight Sharing Kernel Prediction Network}},
  author    = {Fan, Hangming and Wang, Rui and Huo, Yuchi and Bao, Hujun},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14338}
}

@article{10.1111:cgf.14339,
  journal   = {Computer Graphics Forum},
  title     = {{Point-Based Neural Rendering with Per-View Optimization}},
  author    = {Kopanas, Georgios and Philip, Julien and Leimkühler, Thomas and Drettakis, George},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14339}
}

@article{10.1111:cgf.14340,
  journal   = {Computer Graphics Forum},
  title     = {{DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks}},
  author    = {Neff, Thomas and Stadlbauer, Pascal and Parger, Mathias and Kurz, Andreas and Mueller, Joerg H. and Chaitanya, Chakravarty R. Alla and Kaplanyan, Anton S. and Steinberger, Markus},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14340}
}

@article{10.1111:cgf.14341,
  journal   = {Computer Graphics Forum},
  title     = {{Q-NET: A Network for Low-dimensional Integrals of Neural Proxies}},
  author    = {Subr, Kartic},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14341}
}

@article{10.1111:cgf.14342,
  journal   = {Computer Graphics Forum},
  title     = {{Video-Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs}},
  author    = {Thonat, Theo and Aksoy, Yagiz and Aittala, Miika and Paris, Sylvain and Durand, Fredo and Drettakis, George},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14342}
}

@article{10.1111:cgf.14343,
  journal   = {Computer Graphics Forum},
  title     = {{PosterChild: Blend-Aware Artistic Posterization}},
  author    = {Chao, Cheng-Kang and Singh, Karan and Gingold, Yotam},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14343}
}

@article{10.1111:cgf.14344,
  journal   = {Computer Graphics Forum},
  title     = {{Unified Shape and SVBRDF Recovery using Differentiable Monte Carlo Rendering}},
  author    = {Luan, Fujun and Zhao, Shuang and Bala, Kavita and Dong, Zhao},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14344}
}

@article{10.1111:cgf.14345,
  journal   = {Computer Graphics Forum},
  title     = {{Rendering Point Clouds with Compute Shaders and Vertex Order Optimization}},
  author    = {Schütz, Markus and Kerbl, Bernhard and Wimmer, Michael},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14345}
}

@article{10.1111:cgf.14346,
  journal   = {Computer Graphics Forum},
  title     = {{Moving Basis Decomposition for Precomputed Light Transport}},
  author    = {Silvennoinen, Ari and Sloan, Peter-Pike},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14346}
}

@article{10.1111:cgf.14348,
  journal   = {Computer Graphics Forum},
  title     = {{An Analytic BRDF for Materials with Spherical Lambertian Scatterers}},
  author    = {d'Eon, Eugene},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14348}
}

@article{10.1111:cgf.14347,
  journal   = {Computer Graphics Forum},
  title     = {{Optimised Path Space Regularisation}},
  author    = {Weier, Philippe and Droske, Marc and Hanika, Johannes and Weidlich, Andrea and Vorba, Jiří},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14347}
}

@article{10.1111:cgf.14349,
  journal   = {Computer Graphics Forum},
  title     = {{A Combined Scattering and Diffraction Model for Elliptical Hair Rendering}},
  author    = {Benamira, Alexis and Pattanaik, Sumanta},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14349}
}

@article{10.1111:cgf.14350,
  journal   = {Computer Graphics Forum},
  title     = {{Deep Portrait Lighting Enhancement with 3D Guidance}},
  author    = {Han, Fangzhou and Wang, Can and Du, Hao and Liao, Jing},
  year      = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14350}
}


Recent Submissions

  • Item
    Rendering 2021 CGF 40-4: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Bousseau, Adrien; McGuire, Morgan
  • Item
    Deep Compositional Denoising for High-quality Monte Carlo Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zhang, Xianyao; Manzi, Marco; Vogels, Thijs; Dahlberg, Henrik; Gross, Markus; Papas, Marios; Bousseau, Adrien and McGuire, Morgan
    We propose a deep-learning method for automatically decomposing noisy Monte Carlo renderings into components that kernel-predicting denoisers can denoise more effectively. In our model, a neural decomposition module learns to predict noisy components and corresponding feature maps, which are consecutively reconstructed by a denoising module. The components are predicted based on statistics aggregated at the pixel level by the renderer. Denoising these components individually allows the use of per-component kernels that adapt to each component's noisy signal characteristics. Experimentally, we show that the proposed decomposition module consistently improves the denoising quality of current state-of-the-art kernel-predicting denoisers on large-scale academic and production datasets.
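    To make the decompose-then-denoise structure concrete, here is a minimal toy sketch (ours, not the authors' code): the noisy image is split into components that sum back to the input, each component is filtered with its own stand-in denoiser, and the results are summed. Both predict_components and denoise below are invented placeholders for the paper's learned modules.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def predict_components(noisy, k=3):
            # Placeholder decomposition: partition pixels into k intensity
            # bands so the components sum back exactly to the input image.
            edges = np.quantile(noisy, np.linspace(0, 1, k + 1))[1:-1]
            labels = np.digitize(noisy, edges)
            return [np.where(labels == i, noisy, 0.0) for i in range(k)]

        def denoise(component, sigma):
            # Stand-in for a kernel-predicting denoiser: one filter per
            # component, with strength adapted to it (here just a Gaussian).
            return gaussian_filter(component, sigma)

        noisy = np.random.rand(64, 64)  # placeholder 1-channel Monte Carlo render
        denoised = sum(denoise(c, 1.0 + i)
                       for i, c in enumerate(predict_components(noisy)))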
  • Item
    Real-time Monte Carlo Denoising with Weight Sharing Kernel Prediction Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Fan, Hangming; Wang, Rui; Huo, Yuchi; Bao, Hujun; Bousseau, Adrien and McGuire, Morgan
    Real-time Monte Carlo denoising aims at removing severe noise under low samples per pixel (spp) in a strict time budget. Recently, kernel-prediction methods use a neural network to predict each pixel's filtering kernel and have shown great potential to remove Monte Carlo noise. However, the heavy computation overhead blocks these methods from real-time applications. This paper expands the kernel-prediction method and proposes a novel approach to denoise very low spp (e.g., 1-spp) Monte Carlo path traced images at real-time frame rates. Instead of using the neural network to directly predict the kernel map, i.e., the complete weights of each per-pixel filtering kernel, we predict an encoding of the kernel map, followed by a high-efficiency decoder with unfolding operations for a high-quality reconstruction of the filtering kernels. The kernel map encoding yields a compact single-channel representation of the kernel map, which can significantly reduce the kernel-prediction network's throughput. In addition, we adopt a scalable kernel fusion module to improve denoising quality. The proposed approach preserves the denoising quality of kernel-prediction methods while roughly halving their denoising time for 1-spp noisy inputs. In addition, compared with the recent neural bilateral grid-based real-time denoiser, our approach benefits from the high parallelism of kernel-based reconstruction and produces better denoising results in equal time.
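    The reconstruction step this pipeline feeds, applying a per-pixel filtering kernel via an unfolding operation, can be sketched as follows. The encoder, decoder and fusion module are omitted; kmap is a random stand-in for the decoded kernel map, so only the unfold-based weighted sum reflects the abstract.

        import torch
        import torch.nn.functional as F

        def apply_kernel_map(image, kernels, k=5):
            # image: (B, C, H, W); kernels: (B, k*k, H, W), normalized per pixel.
            # Gathers every k x k neighbourhood and takes its weighted sum.
            b, c, h, w = image.shape
            patches = F.unfold(image, k, padding=k // 2).view(b, c, k * k, h * w)
            weights = kernels.view(b, 1, k * k, h * w)
            return (patches * weights).sum(dim=2).view(b, c, h, w)

        img = torch.rand(1, 3, 32, 32)                          # stand-in 1-spp render
        kmap = torch.softmax(torch.rand(1, 25, 32, 32), dim=1)  # stand-in kernels
        out = apply_kernel_map(img, kmap)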
  • Item
    Point-Based Neural Rendering with Per-View Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Kopanas, Georgios; Philip, Julien; Leimkühler, Thomas; Drettakis, George; Bousseau, Adrien and McGuire, Morgan
    There has recently been great interest in neural rendering methods. Some approaches use 3D geometry reconstructed with Multi-View Stereo (MVS) but cannot recover from the errors of this process, while others directly learn a volumetric neural representation, but suffer from expensive training and inference. We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel-view synthesis. A key element of our approach is our new differentiable point-based pipeline, based on bi-directional Elliptical Weighted Average splatting, a probabilistic depth test and effective camera selection. We use these elements together in our neural renderer, which outperforms all previous methods both in quality and speed in almost all scenes we tested. Our pipeline can be applied to multi-view harmonization and stylization in addition to novel-view synthesis.
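    As a loose illustration of one ingredient, a probabilistic depth test replaces the hard z-buffer decision with smooth blending weights so visibility stays differentiable. The sketch below is only our guess at the flavor of such a test (the exponential falloff and temperature are invented), not the paper's formulation.

        import numpy as np

        def soft_visibility(depths, temperature=0.05):
            # Candidate point depths for one pixel -> smooth blending weights
            # that favour the nearest points instead of a hard z-test.
            z = np.asarray(depths, dtype=float)
            w = np.exp(-(z - z.min()) / temperature)
            return w / w.sum()

        print(soft_visibility([1.00, 1.02, 1.50]))  # near points dominate smoothly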
  • Item
    DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Neff, Thomas; Stadlbauer, Pascal; Parger, Mathias; Kurz, Andreas; Mueller, Joerg H.; Chaitanya, Chakravarty R. Alla; Kaplanyan, Anton S.; Steinberger, Markus; Bousseau, Adrien and McGuire, Morgan
    The recent research explosion around implicit neural representations, such as NeRF, shows that there is immense potential for implicitly storing high-quality scene and lighting information in compact neural networks. However, one major limitation preventing the use of NeRF in real-time rendering applications is the prohibitive computational cost of excessive network evaluations along each view ray, requiring dozens of petaFLOPS. In this work, we bring compact neural representations closer to practical rendering of synthetic content in real-time applications, such as games and virtual reality. We show that the number of samples required for each view ray can be significantly reduced when samples are placed around surfaces in the scene without compromising image quality. To this end, we propose a depth oracle network that predicts ray sample locations for each view ray with a single network evaluation. We show that using a classification network around logarithmically discretized and spherically warped depth values is essential to encode surface locations rather than directly estimating depth. The combination of these techniques leads to DONeRF, our compact dual network design with a depth oracle network as its first step and a locally sampled shading network for ray accumulation. With DONeRF, we reduce the inference costs by up to 48x compared to NeRF when conditioning on available ground truth depth information. Compared to concurrent acceleration methods for raymarching-based neural representations, DONeRF does not require additional memory for explicit caching or acceleration structures, and can render interactively (20 frames per second) on a single GPU.
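    A rough sketch of the sampling side of this idea, under stated assumptions: depth is discretized into logarithmic bins, a (here: faked) oracle scores which bins contain surfaces, and shading samples are placed only in the highest-scoring bins. The bin count, top-k rule and oracle output below are illustrative choices, not the paper's.

        import numpy as np

        def log_depth_bins(near, far, n_bins):
            return np.geomspace(near, far, n_bins + 1)  # logarithmically spaced edges

        def place_samples(bin_scores, edges, samples_per_bin=4, top_k=2):
            # Place shading samples only inside the top-k bins the oracle flags.
            occupied = np.argsort(bin_scores)[-top_k:]
            pts = [np.linspace(edges[i], edges[i + 1], samples_per_bin)
                   for i in occupied]
            return np.sort(np.concatenate(pts))

        edges = log_depth_bins(near=0.5, far=100.0, n_bins=8)
        oracle_scores = np.array([0.0, 0.1, 0.8, 0.05, 0.0, 0.9, 0.0, 0.0])  # faked
        print(place_samples(oracle_scores, edges))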
  • Item
    Q-NET: A Network for Low-dimensional Integrals of Neural Proxies
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Subr, Kartic; Bousseau, Adrien and McGuire, Morgan
    Integrals of multidimensional functions are often estimated by averaging function values at multiple locations. The use of an approximate surrogate or proxy for the true function is useful if repeated evaluations are necessary. A proxy is even more useful if its own integral is known analytically and can be calculated practically. We design a family of fixed networks, which we call Q-NETs, that can calculate integrals of functions represented by sigmoidal universal approximators. Q-NETs operate on the parameters of the trained proxy and can calculate exact integrals over any subset of dimensions of the input domain. Q-NETs also facilitate convenient recalculation of integrals without resampling the integrand or retraining the proxy, under certain transformations to the input space. We highlight the benefits of this scheme for diverse rendering applications including inverse rendering, sampled procedural noise and visualization. Q-NETs are appealing in the following contexts: the dimensionality is low (< 10D); integrals of a sampled function need to be recalculated over different sub-domains; the estimation of integrals needs to be decoupled from the sampling strategy such as when sparse, adaptive sampling is used; marginal functions need to be known in functional form; or when powerful Single Instruction Multiple Data/Thread (SIMD/SIMT) pipelines are available.
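    For intuition on why sigmoidal proxies admit analytic integrals (a standard calculus fact; Q-NET's actual operator handles general multidimensional networks), consider a one-hidden-layer proxy f(x) = sum_i c_i sigma(w_i x + b_i) with logistic sigma in one dimension. Each unit has a softplus antiderivative, so for w != 0:

        \int_a^b \sigma(w x + \beta)\,dx
            = \frac{1}{w}\Bigl[\ln\bigl(1 + e^{w x + \beta}\bigr)\Bigr]_a^b,
        \qquad
        \int_a^b f(x)\,dx = \sum_i c_i \int_a^b \sigma(w_i x + \beta_i)\,dx.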
  • Item
    Video-Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Thonat, Theo; Aksoy, Yagiz; Aittala, Miika; Paris, Sylvain; Durand, Fredo; Drettakis, George; Bousseau, Adrien and McGuire, Morgan
    Image-Based Rendering allows users to easily capture a scene using a single camera and then navigate freely with realistic results. However, the resulting renderings are completely static, and dynamic effects - such as fire, waterfalls or small waves - cannot be reproduced. We tackle the challenging problem of enabling free-viewpoint navigation including such stationary dynamic effects, but still maintaining the simplicity of casual capture. Using a single camera - instead of previous complex synchronized multi-camera setups - means that we have unsynchronized videos of the dynamic effect from multiple views, making it hard to blend them when synthesizing novel views. We present a solution that allows smooth free-viewpoint video-based rendering (VBR) of such scenes using a temporal Laplacian pyramid decomposition of the input videos, enabling spatio-temporal blending. For effects such as fire and waterfalls, that are semi-transparent and occupy 3D space, we first estimate their spatial volume. This allows us to create per-video geometries and alpha-matte videos that we can blend using our frequency-dependent method. We also extend Laplacian blending to the temporal dimension to remove additional temporal seams. We show results on scenes containing fire, waterfalls or rippling waves at the seaside, bringing these scenes to life.
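    The core blending tool named here, a Laplacian pyramid that lets different frequency bands blend with different weights, can be sketched in a few lines. This toy blends one pair of frames spatially; the paper's temporal extension and alpha mattes are not attempted, and the blur scale and per-level alphas are arbitrary.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def laplacian_pyramid(img, levels=4):
            pyr, cur = [], img.astype(float)
            for _ in range(levels - 1):
                low = gaussian_filter(cur, 2.0)
                pyr.append(cur - low)   # band-pass detail at this level
                cur = low
            pyr.append(cur)             # low-frequency residual
            return pyr                  # levels sum back to the input exactly

        def blend(a, b, alphas):
            # Blend two frames per frequency band: alphas[i] weights frame `a`
            # at level i, so seams can be hidden more at coarse scales.
            pa = laplacian_pyramid(a, len(alphas))
            pb = laplacian_pyramid(b, len(alphas))
            return sum(al * la + (1 - al) * lb
                       for al, la, lb in zip(alphas, pa, pb))

        a, b = np.random.rand(64, 64), np.random.rand(64, 64)
        out = blend(a, b, alphas=[0.5, 0.5, 0.3, 0.0])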
  • Item
    PosterChild: Blend-Aware Artistic Posterization
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Chao, Cheng-Kang; Singh, Karan; Gingold, Yotam; Bousseau, Adrien and McGuire, Morgan
    Posterization is an artistic effect which converts continuous images into regions of constant color with smooth boundaries, often with an artistically recolored palette. Artistic posterization is extremely time-consuming and tedious. We introduce a blend-aware algorithm for generating posterized images with palette-based control for artistic recoloring. Our algorithm automatically extracts a palette and then uses multi-label optimization to find blended-color regions in terms of that palette. We smooth boundaries away from image details with frequency-guided median filtering. We evaluate our algorithm with a comparative user study and showcase its ability to produce compelling posterizations of a variety of inputs. Our parameters provide artistic control and enable cohesive, real-time recoloring after posterization pre-processing.
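    A bare-bones posterization in this spirit (our sketch, far short of the paper's blend-aware multi-label optimization and frequency-guided filtering): extract a palette with k-means, snap each pixel to its nearest palette color, and smooth the label map. The filter size and palette size are arbitrary.

        import numpy as np
        from scipy.cluster.vq import kmeans2
        from scipy.ndimage import median_filter

        def posterize(img, k=5, seed=0):
            # img: (H, W, 3) floats in [0, 1]. Returns a k-color posterization.
            h, w, _ = img.shape
            palette, labels = kmeans2(img.reshape(-1, 3), k, minit='++', seed=seed)
            # Crude boundary smoothing on the label map (a nominal-label hack,
            # unlike the paper's principled median filtering).
            labels = median_filter(labels.reshape(h, w), size=5)
            return palette[labels]

        posterized = posterize(np.random.rand(48, 48, 3))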
  • Item
    Unified Shape and SVBRDF Recovery using Differentiable Monte Carlo Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Luan, Fujun; Zhao, Shuang; Bala, Kavita; Dong, Zhao; Bousseau, Adrien and McGuire, Morgan
    Reconstructing the shape and appearance of real-world objects using measured 2D images has been a long-standing inverse rendering problem. In this paper, we introduce a new analysis-by-synthesis technique capable of producing high-quality reconstructions through robust coarse-to-fine optimization and physics-based differentiable rendering. Unlike most previous methods that handle geometry and reflectance largely separately, our method unifies the optimization of both by leveraging image gradients with respect to both object reflectance and geometry. To obtain physically accurate gradient estimates, we develop a new GPU-based Monte Carlo differentiable renderer leveraging recent advances in differentiable rendering theory to offer unbiased gradients while enjoying better performance than existing tools like PyTorch3D [RRN*20] and redner [LADL18]. To further improve robustness, we utilize several shape and material priors as well as a coarse-to-fine optimization strategy to reconstruct geometry. Using both synthetic and real input images, we demonstrate that our technique can produce reconstructions with higher quality than previous methods.
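    The analysis-by-synthesis loop itself is compact. Below is a toy version with a one-line Lambertian "renderer" over known normals (our stand-in) recovering a constant albedo; the paper instead differentiates a full Monte Carlo renderer with respect to shape and SVBRDF.

        import torch
        import torch.nn.functional as F

        normals = F.normalize(torch.rand(64, 64, 3), dim=-1)   # known toy geometry
        light = torch.tensor([0.0, 0.0, 1.0])

        def render(albedo):
            # Differentiable toy renderer: Lambertian shading of the normals.
            return albedo * (normals @ light).clamp(min=0.0).unsqueeze(-1)

        target = render(torch.tensor([0.2, 0.5, 0.8]))       # "measured" image
        albedo = torch.full((3,), 0.5, requires_grad=True)   # unknown material
        opt = torch.optim.Adam([albedo], lr=0.05)
        for _ in range(200):                                 # analysis by synthesis
            opt.zero_grad()
            loss = (render(albedo) - target).pow(2).mean()
            loss.backward()
            opt.step()
        print(albedo.detach())  # converges toward (0.2, 0.5, 0.8)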
  • Item
    Rendering Point Clouds with Compute Shaders and Vertex Order Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Schütz, Markus; Kerbl, Bernhard; Wimmer, Michael; Bousseau, Adrien and McGuire, Morgan
    In this paper, we present several compute-based point cloud rendering approaches that outperform the hardware pipeline by up to an order of magnitude and achieve significantly better frame times than previous compute-based methods. Beyond basic closest-point rendering, we also introduce a fast, high-quality variant to reduce aliasing. We present and evaluate several variants of our proposed methods with different flavors of optimization, in order to ensure their applicability and achieve optimal performance on a range of platforms and architectures with varying support for novel GPU hardware features. During our experiments, the observed peak performance was reached rendering 796 million points (12.7GB) at rates of 62 to 64 frames per second (50 billion points per second, 802GB/s) on an RTX 3090 without the use of level-of-detail structures. We further introduce an optimized vertex order for point clouds to boost the efficiency of GL_POINTS by a factor of 5x in cases where hardware rendering is compulsory. We compare different orderings and show that Morton sorted buffers are faster for some viewpoints, while shuffled vertex buffers are faster in others. In contrast, combining both approaches by first sorting according to Morton-code and shuffling the resulting sequence in batches of 128 points leads to a vertex buffer layout with high rendering performance and low sensitivity to viewpoint changes.
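    The vertex-order recipe in the last sentence is easy to prototype offline: compute a 30-bit Morton code (10 bits per axis), sort, then shuffle the sorted order in batches of 128 points. The bit-interleaving below is the standard trick, not taken from the paper's code, and the quantization is a simplifying assumption.

        import numpy as np

        def part1by2(x):
            # Spread the low 10 bits of x so consecutive bits land 3 apart.
            x = x & 0x000003FF
            x = (x ^ (x << 16)) & 0xFF0000FF
            x = (x ^ (x << 8)) & 0x0300F00F
            x = (x ^ (x << 4)) & 0x030C30C3
            x = (x ^ (x << 2)) & 0x09249249
            return x

        def morton3(p, lo, hi):
            q = ((p - lo) / (hi - lo) * 1023).astype(np.uint32)  # 10 bits/axis
            return (part1by2(q[:, 0]) | (part1by2(q[:, 1]) << 1)
                    | (part1by2(q[:, 2]) << 2))

        def morton_shuffled_order(points, batch=128, seed=0):
            order = np.argsort(morton3(points, points.min(0), points.max(0)))
            batches = [order[i:i + batch] for i in range(0, len(order), batch)]
            np.random.default_rng(seed).shuffle(batches)   # shuffle whole batches
            return np.concatenate(batches)

        pts = np.random.rand(10_000, 3)
        vertex_order = morton_shuffled_order(pts)   # reorder the vertex buffer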
  • Item
    Moving Basis Decomposition for Precomputed Light Transport
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Silvennoinen, Ari; Sloan, Peter-Pike; Bousseau, Adrien and McGuire, Morgan
    We study the problem of efficient representation of potentially high-dimensional, spatially coherent signals in the context of precomputed light transport. We present a basis decomposition framework, Moving Basis Decomposition (MBD), that generalizes many existing basis expansion methods and enables high-performance, seamless reconstruction of compressed data. We develop an algorithm for solving large-scale MBD problems. We evaluate MBD against the state of the art in a series of controlled experiments and describe a real-world application, where MBD serves as the backbone of a scalable global illumination system powering multiple current and upcoming 60 Hz AAA titles running on a wide range of hardware platforms.
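    As a rough picture of the family MBD generalizes (our sketch only; the "moving", spatially blended bases and the large-scale solver are the paper's contributions), high-dimensional transport vectors can be clustered by position and compressed with a separate low-rank basis per cluster:

        import numpy as np

        def clustered_pca(signals, positions, n_clusters=8, rank=4, seed=0):
            rng = np.random.default_rng(seed)
            centers = positions[rng.choice(len(positions), n_clusters, replace=False)]
            assign = np.argmin(((positions[:, None] - centers[None]) ** 2).sum(-1),
                               axis=1)
            recon = np.empty_like(signals)
            for c in range(n_clusters):
                idx = np.where(assign == c)[0]
                block = signals[idx]
                mean = block.mean(0)
                _, _, vt = np.linalg.svd(block - mean, full_matrices=False)
                basis = vt[:rank]                    # per-cluster low-rank basis
                recon[idx] = (block - mean) @ basis.T @ basis + mean
            return recon

        signals = np.random.rand(1000, 64)   # e.g. per-probe transport coefficients
        positions = np.random.rand(1000, 3)
        approx = clustered_pca(signals, positions)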
  • Item
    An Analytic BRDF for Materials with Spherical Lambertian Scatterers
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) d'Eon, Eugene; Bousseau, Adrien and McGuire, Morgan
    We present a new analytic BRDF for porous materials composed of spherical Lambertian scatterers. The BRDF has a single parameter: the albedo of the Lambertian particles. The resulting appearance exhibits strong backscattering and saturation effects that height-field-based models such as Oren-Nayar cannot reproduce.
  • Item
    Optimised Path Space Regularisation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Weier, Philippe; Droske, Marc; Hanika, Johannes; Weidlich, Andrea; Vorba, Jiří; Bousseau, Adrien and McGuire, Morgan
    We present Optimised Path Space Regularisation (OPSR), a novel regularisation technique for forward path tracing algorithms. Our regularisation controls the amount of roughness added to materials depending on the type of sampled paths and trades a small error in the estimator for a drastic reduction of variance in difficult paths, including indirectly visible caustics. We formulate the problem as a joint bias-variance minimisation problem and use differentiable rendering to optimise our model. The learnt parameters generalise to a large variety of scenes irrespective of their geometric complexity. The regularisation added to the underlying light transport algorithm naturally allows us to handle the problem of near-specular and glossy path chains robustly. Our method consistently improves the convergence of path tracing estimators, including state-of-the-art path guiding techniques where it enables finding otherwise hard-to-sample paths and thus, in turn, can significantly speed up the learning of guiding distributions.
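    The mechanism can be caricatured in a few lines: widen near-specular BSDFs only on path prefixes that are hard to sample, by an amount that depends on path depth. The schedule, constants and the diffuse_seen trigger below are invented placeholders; OPSR instead learns its parameters by differentiable bias-variance optimization.

        def regularised_roughness(base_roughness, path_depth, diffuse_seen,
                                  tau=0.05, decay=0.5):
            # Widen the BSDF only on paths where specular chains are hard to
            # sample, e.g. after a diffuse vertex (indirectly visible caustics).
            if not diffuse_seen:
                return base_roughness        # keep directly visible speculars sharp
            added = tau * (decay ** path_depth)  # arbitrary illustrative schedule
            return max(base_roughness, added)

        print(regularised_roughness(0.0, path_depth=2, diffuse_seen=True))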
  • Item
    A Combined Scattering and Diffraction Model for Elliptical Hair Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Benamira, Alexis; Pattanaik, Sumanta; Bousseau, Adrien and McGuire, Morgan
    Realistic hair rendering relies on fiber scattering models. These models are based either on ray tracing or on full wave propagation through the hair fiber. Ray tracing can model most of the observed scattering phenomena but misses the important effect of diffraction. Indeed, the specific dimensions and geometry of natural human hair demand that the wave nature of light be taken into consideration for accurate rendering. However, current full-wave models require an impractical precomputation of several days, which must be repeated for every change in hair geometry or color. In this paper we present a dual hair scattering model which considers the dual aspect of light: as a wave and as a ray. Our model accurately simulates both diffraction and scattering phenomena without requiring any precomputation. Furthermore, it can simulate light transport in hair fibers of arbitrary elliptical cross-section. This new dual approach enables our model to significantly improve the appearance of rendered hair and to qualitatively match the scattering and diffraction effects seen in photographs of real hair, while adding little computational overhead.
  • Item
    Deep Portrait Lighting Enhancement with 3D Guidance
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Han, Fangzhou; Wang, Can; Du, Hao; Liao, Jing; Bousseau, Adrien and McGuire, Morgan
    Despite recent breakthroughs in deep learning methods for image lighting enhancement, they are inferior when applied to portraits because 3D facial information is ignored in their models. To address this, we present a novel deep learning framework for portrait lighting enhancement based on 3D facial guidance. Our framework consists of two stages. In the first stage, corrected lighting parameters are predicted by a network from the poorly lit input image, with the assistance of a 3D morphable model and a differentiable renderer. Given the predicted lighting parameters, the differentiable renderer renders a face image with corrected shading and texture, which serves as the 3D guidance for learning image lighting enhancement in the second stage. To better exploit the long-range correlations between the input and the guidance, in the second stage we design an image-to-image translation network with a novel transformer architecture, which automatically produces a lighting-enhanced result. Experimental results on the FFHQ dataset and on in-the-wild images show that the proposed method outperforms state-of-the-art methods in terms of both quantitative metrics and visual quality.