Feature Representation for High-resolution Clothed Human Reconstruction

Authors: Juncheng Pu, Li Liu, Xiaodong Fu, Zhuo Su, Lijun Liu, Wei Peng
Editors: Helwig Hauser, Pierre Alliez
Published: 2023-10-06, Computer Graphics Forum (ISSN 1467-8659), 2023
DOI: https://doi.org/10.1111/cgf.14792
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14792

Abstract: Detailed and accurate feature representation is essential for high-resolution reconstruction of clothed humans. Herein we introduce a unified feature representation for clothed human reconstruction that can adapt to changing posture and varied clothing details. The method consists of two parts: the human shape feature representation and the details feature representation. Specifically, we first combine the voxel feature learned from a semantic voxel with the pixel feature from the input image as an implicit representation of human shape. Then, a details feature that mixes the clothed-layer feature with the normal feature is used to guide a multi-layer perceptron to capture geometric surface details. The key difference from existing methods is that we use clothing semantics to infer clothed-layer information, and further restore the layer details with geometric height. Qualitative and quantitative experimental results demonstrate that the proposed method outperforms existing methods in handling limb swing and clothing details. Our method provides a new solution for clothed human reconstruction with high-resolution details (style, wrinkles and clothed layers), and has good potential in three-dimensional virtual try-on and digital characters.

Keywords: modelling: geometric modelling; modelling: implicit surfaces; modelling: surface reconstruction
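The abstract describes fusing a per-point voxel feature (learned from a semantic voxel grid) with a pixel feature (sampled from the input image) and feeding the result to a multi-layer perceptron that predicts the implicit shape. Below is a minimal sketch of that two-branch fusion, assuming PyTorch; the class name, feature dimensions, and occupancy output are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ImplicitShapeMLP(nn.Module):
    """Hypothetical sketch of the fusion the abstract outlines: concatenate a
    voxel feature and a pixel feature per query point, then let an MLP predict
    an inside/outside occupancy value for the implicit human shape."""

    def __init__(self, voxel_dim: int = 32, pixel_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(voxel_dim + pixel_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # occupancy in [0, 1]
        )

    def forward(self, voxel_feat: torch.Tensor, pixel_feat: torch.Tensor) -> torch.Tensor:
        # voxel_feat: (N, voxel_dim), pixel_feat: (N, pixel_dim) for N query points
        fused = torch.cat([voxel_feat, pixel_feat], dim=-1)
        return self.mlp(fused)

# Example usage with random features for 1024 query points:
# occ = ImplicitShapeMLP()(torch.randn(1024, 32), torch.randn(1024, 256))
```

The details branch described in the abstract (clothed-layer feature plus normal feature guiding a second MLP) would follow the same concatenate-and-regress pattern, with the output interpreted as surface detail rather than occupancy.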