State-of-the-Art in the Architecture, Methods and Applications of StyleGAN

dc.contributor.author: Bermano, Amit Haim
dc.contributor.author: Gal, Rinon
dc.contributor.author: Alaluf, Yuval
dc.contributor.author: Mokady, Ron
dc.contributor.author: Nitzan, Yotam
dc.contributor.author: Tov, Omer
dc.contributor.author: Patashnik, Or
dc.contributor.author: Cohen-Or, Daniel
dc.contributor.editor: Meneveaux, Daniel
dc.contributor.editor: Patanè, Giuseppe
dc.date.accessioned: 2022-04-22T07:00:32Z
dc.date.available: 2022-04-22T07:00:32Z
dc.date.issued: 2022
dc.description.abstract: Generative Adversarial Networks (GANs) have established themselves as a prevalent approach to image synthesis. Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and an ability to support a large array of downstream tasks. This state-of-the-art report covers the StyleGAN architecture, and the ways it has been employed since its conception, while also analyzing its severe limitations. It aims to be of use both for newcomers, who wish to get a grasp of the field, and for more experienced readers who might benefit from seeing current research trends and existing tools laid out. Among StyleGAN's most interesting aspects is its learned latent space. Despite being learned with no supervision, it is surprisingly well-behaved and remarkably disentangled. Combined with StyleGAN's visual quality, these properties gave rise to unparalleled editing capabilities. However, the control offered by StyleGAN is inherently limited to the generator's learned distribution, and can only be applied to images generated by StyleGAN itself. Seeking to bring StyleGAN's latent control to real-world scenarios, the study of GAN inversion and latent space embedding has quickly gained in popularity. Meanwhile, this same study has helped shed light on the inner workings and limitations of StyleGAN. We map out StyleGAN's impressive story through these investigations, and discuss the details that have made StyleGAN the go-to generator. We further elaborate on the visual priors StyleGAN constructs, and discuss their use in downstream discriminative tasks. Looking forward, we point out StyleGAN's limitations and speculate on current trends and promising directions for future research, such as task and target specific fine-tuning.
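The abstract refers to two recurring technical ingredients of the report: linear edits in StyleGAN's disentangled W latent space, and optimization-based GAN inversion, which embeds a real image into that space so the same edits can be applied to it. The sketch below (PyTorch) illustrates both in their simplest form; the generator handle, the pre-computed edit direction, and the pixel-only loss are illustrative assumptions, not the API of any particular StyleGAN codebase.

# A minimal, illustrative sketch in PyTorch. `generate` stands for any callable
# mapping a W-space latent code to an image (e.g. a pretrained StyleGAN
# synthesis network); its name and calling convention are assumptions made
# here for illustration only.
import torch
import torch.nn.functional as F

def edit_in_w_space(w, direction, strength=3.0):
    # Linear edit: walk the (disentangled) W code along a pre-computed semantic
    # direction such as age, pose or smile. Disentanglement is what lets a
    # single direction change one attribute while leaving the rest intact.
    return w + strength * direction

def invert_image(generate, target, w_init, num_steps=500, lr=0.01):
    # Optimization-based GAN inversion: fit a W code so the generator
    # reproduces a given real image. Only a pixel-wise loss is used here for
    # brevity; practical methods add perceptual (e.g. LPIPS) and latent
    # regularization terms.
    w = w_init.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(num_steps):
        loss = F.mse_loss(generate(w), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

# Hypothetical usage: invert a real photo, then apply a semantic edit to it.
# w_real = invert_image(G.synthesis, real_image, w_avg)
# edited_image = G.synthesis(edit_in_w_space(w_real, smile_direction, 2.0))

Encoder-based inversion methods replace this per-image optimization loop with a learned network, trading some reconstruction fidelity for speed.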
dc.description.documenttype: star
dc.description.number: 2
dc.description.sectionheaders: State of the Art Reports
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 41
dc.identifier.doi: 10.1111/cgf.14503
dc.identifier.issn: 1467-8659
dc.identifier.pages: 591-611
dc.identifier.pages: 21 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.14503
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14503
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: CCS Concepts: Computing methodologies --> Learning latent representations; Image manipulation; Computer graphics; Neural networks
dc.title: State-of-the-Art in the Architecture, Methods and Applications of StyleGAN
Files:
v41i2pp591-611.pdf (23.19 MB, Adobe Portable Document Format)