Authors: Einabadi, Farshad; Guillemaut, Jean-Yves; Hilton, Adrian
Editors: Benes, Bedrich and Hauser, Helwig
Date issued: 2021-10-08
Year: 2021
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14283
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14283

Title: Deep Neural Models for Illumination Estimation and Relighting: A Survey

Abstract: Scene relighting and estimating the illumination of a real scene for insertion of virtual objects in a mixed-reality scenario are well-studied challenges in the computer vision and graphics fields. Classical inverse rendering approaches aim to decompose a scene into its orthogonal constituent elements, namely scene geometry, illumination and surface materials, which can later be used for augmented reality or to render new images under novel lighting or viewpoints. Recently, the application of deep neural computing to illumination estimation, relighting and inverse rendering has shown promising results. This contribution aims to bring together current advances at this intersection in a coherent manner. We examine in detail the attributes of the proposed approaches, presented in three categories: scene illumination estimation, relighting with reflectance-aware scene-specific representations, and relighting as image-to-image transformation. Each category concludes with a discussion of the main characteristics of current methods and possible future trends. We also provide an overview of currently available public datasets for neural lighting applications.

Keywords: neural relighting; neural illumination estimation
Pages: 315-331