Deep High Dynamic Range Imaging: Reconstruction, Generation and Display

Chao Wang

https://diglib.eg.org/handle/10.2312/3607242

High Dynamic Range (HDR) images offer significant advantages over Low Dynamic Range (LDR) images, including greater bit depth, a wider color gamut, and a higher dynamic range. These properties not only provide users with an enhanced visual experience but also facilitate post-production in photography and filmmaking. Despite considerable advancements in HDR technology over the years, significant challenges persist in the acquisition and display of HDR content. This thesis systematically explores how deep learning techniques combined with physical prior knowledge can address these challenges. First, it investigates how implicit neural representations can be used to reconstruct all-in-focus HDR images from sparse, defocused LDR inputs, enabling flexible refocusing and re-exposure. It then extends this work to the 3D domain, employing 3D Gaussian Splatting to reconstruct HDR all-in-focus fields from multi-view defocused LDR images, supporting novel view synthesis with refocusing and re-exposure capabilities. Expanding further, the thesis investigates strategies for generating HDR content from in-the-wild LDR data or limited HDR datasets, and then uses the resulting HDR generative models as priors for transforming LDR images into HDR. Finally, it proposes a feature contrast masking loss inspired by visual masking theory, enabling a self-supervised tone mapper for displaying HDR content on LDR devices.