Title: OaIF: Occlusion‐Aware Implicit Function for Clothed Human Re‐construction
Authors: Tan, Yudi; Guan, Boliang; Zhou, Fan; Su, Zhuo
Editors: Hauser, Helwig and Alliez, Pierre
Date: 2023-10-06 (2023)
ISSN: 1467-8659
DOI: 10.1111/cgf.14798 (https://doi.org/10.1111/cgf.14798)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14798
Keywords: modelling; image‐based modelling; implicit surfaces

Abstract: Clothed human reconstruction from a monocular image is challenging due to occlusion, depth ambiguity and variations in body pose. Recently, shape representations based on implicit functions have proven better suited to the complex topology of clothed humans than explicit representations such as meshes and voxels. This is mainly achieved by using pixel-aligned features, which allow the implicit function to capture local details. However, such methods use an identical feature map for all sampled points when extracting local features, making the models occlusion-agnostic in the encoding stage. The decoder, as the implicit function, only maps features and does not take occlusion into account explicitly. Thus, these methods fail to generalize well to poses with severe self-occlusion. To address this, we present OaIF, which encodes local features conditioned on the visibility of SMPL vertices. OaIF projects SMPL vertices onto the image plane to obtain image features masked by visibility. Vertex features, integrated with the geometric information of the mesh, are then fed into a GAT network for joint encoding. We query hybrid features and occlusion factors for sample points through cross attention and learn occupancy fields for the clothed human. Experiments demonstrate that OaIF achieves more robust and accurate reconstruction than the state of the art on both public datasets and in-the-wild images.
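
To make the visibility-masked, pixel-aligned feature lookup described in the abstract concrete, the sketch below (not the authors' code) shows one plausible way to sample per-vertex image features for projected SMPL vertices and zero out the features of self-occluded vertices. The function name, the orthographic-projection assumption, and the use of a precomputed binary visibility mask (e.g. from a z-buffer rasterization) are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pixel_aligned_vertex_features(feat_map, verts_xy, visibility):
    """Hypothetical sketch of occlusion-masked pixel-aligned feature sampling.

    feat_map:   (1, C, H, W) image feature map from an image encoder.
    verts_xy:   (1, N, 2) SMPL vertex positions already projected onto the
                image plane, in normalized coordinates in [-1, 1].
    visibility: (1, N) binary mask, 1 if a vertex is visible to the camera
                (e.g. from a z-buffer test), 0 if it is self-occluded.
    Returns:    (1, N, C) per-vertex features, zeroed for occluded vertices.
    """
    # grid_sample expects a sampling grid of shape (B, H_out, W_out, 2);
    # treat the N vertices as an N x 1 grid of sample locations.
    grid = verts_xy.unsqueeze(2)                      # (1, N, 1, 2)
    sampled = F.grid_sample(feat_map, grid,
                            align_corners=True)       # (1, C, N, 1)
    sampled = sampled.squeeze(-1).permute(0, 2, 1)    # (1, N, C)
    # Mask out features belonging to self-occluded vertices.
    return sampled * visibility.unsqueeze(-1)
```

In the paper's pipeline, per-vertex features of this kind are combined with the mesh geometry and encoded jointly by a GAT before query points attend to them; the sketch covers only the masking step that makes the encoding occlusion-aware.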