Real-time Level-of-detail Strand-based Rendering

Authors: Huang, Tao; Zhou, Yang; Lin, Daqi; Zhu, Junqiu; Yan, Ling-Qi; Wu, Kui; Wang, Beibei
Editor: Wilkie, Alexander
Date issued: 2025-06-20
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.70181
URI: https://diglib.eg.org/handle/10.1111/cgf70181
Pages: 13
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Computing methodologies → Rendering

Abstract: We present a real-time strand-based rendering framework that ensures seamless transitions between different levels of detail (LoD) while maintaining a consistent appearance. We first introduce an aggregated BCSDF model that accurately captures both single and multiple scattering within a cluster of hairs or fibers. Building on this, we introduce an LoD framework for hair rendering that dynamically, adaptively, and independently replaces clusters of individual hairs with thick strands based on their projected screen widths. Through tests on diverse hairstyles with various hair colors and animations, as well as on knit patches, our framework closely replicates the appearance of multiple-scattered full geometry at various viewing distances, achieving up to a 13× speedup.
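The abstract states that each cluster independently switches between individual strands and a thick aggregate strand based on its projected screen width. The sketch below illustrates one plausible form of that per-cluster test for a perspective camera; it is not the paper's implementation, and every name in it (Cluster, projectedWidthPx, selectLoD, the pixel threshold) is a hypothetical stand-in.

// Minimal sketch of a screen-width LoD test, assuming a perspective camera
// with vertical field of view fovY. A cluster is drawn as individual strands
// until its projected width falls below a pixel threshold, at which point it
// would be replaced by a thick strand shaded with the aggregated BCSDF.
#include <cmath>

struct Cluster {
    float worldRadius;   // bounding radius of the hair cluster (world units)
    float viewDepth;     // distance from the camera along the view direction
};

enum class LoD { IndividualStrands, ThickStrand };

// Projected width in pixels: the screen covers 2 * d * tan(fovY / 2) world
// units vertically at depth d, so a diameter of 2 * r maps to
// r * screenHeightPx / (d * tan(fovY / 2)) pixels.
inline float projectedWidthPx(const Cluster& c, float fovY, float screenHeightPx) {
    return c.worldRadius * screenHeightPx / (c.viewDepth * std::tan(0.5f * fovY));
}

// Each cluster decides independently, so nearby clusters can stay at full
// detail while distant ones collapse to a single thick strand.
inline LoD selectLoD(const Cluster& c, float fovY, float screenHeightPx,
                     float thresholdPx /* hypothetical, e.g. 1-2 px */) {
    return projectedWidthPx(c, fovY, screenHeightPx) < thresholdPx
               ? LoD::ThickStrand
               : LoD::IndividualStrands;
}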