Show simple item record

dc.contributor.author: Baek, Seung Youp [en_US]
dc.contributor.author: Lee, Sungkil [en_US]
dc.contributor.editor: Lee, Sung-hee and Zollmann, Stefanie and Okabe, Makoto and Wuensche, Burkhard [en_US]
dc.date.accessioned: 2020-10-29T18:39:40Z
dc.date.available: 2020-10-29T18:39:40Z
dc.date.issued: 2020
dc.identifier.isbn: 978-3-03868-120-5
dc.identifier.uri: https://doi.org/10.2312/pg.20201231
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/pg20201231
dc.description.abstract: We present a semi-automated framework that translates day-time road scene images into the night-time domain. Unlike recent studies based on Generative Adversarial Networks (GANs), we avoid learning-based translation and its random failures. Our framework uses semantic annotation to extract scene elements, perceives the scene structure/depth, and applies a per-element translation. Experimental results demonstrate that our framework synthesizes higher-resolution results without translation artifacts. [en_US]
dc.publisher: The Eurographics Association [en_US]
dc.subject: Computing methodologies
dc.subject: Computational photography
dc.subject: Image processing
dc.title: Day-to-Night Road Scene Image Translation Using Semantic Segmentation [en_US]
dc.description.seriesinformation: Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers
dc.description.sectionheaders: Posters
dc.identifier.doi: 10.2312/pg.20201231
dc.identifier.pages: 47-48
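
The abstract above describes a per-element day-to-night translation driven by semantic segmentation. The following is a minimal sketch of that general idea only, not the authors' implementation: it assumes a day image (day.png) and an aligned per-pixel class-index map (labels.png), and the class indices and tone parameters are hypothetical.

# Minimal sketch: per-element day-to-night tone mapping driven by a semantic
# label map, instead of a learned (GAN-based) translation.
import numpy as np
from PIL import Image

# Hypothetical class indices with per-class (gain, blue tint) parameters,
# so each scene element is darkened and tinted differently.
CLASS_PARAMS = {
    0: (0.10, 1.30),  # sky: very dark, bluish
    1: (0.25, 1.10),  # road
    2: (0.35, 1.05),  # building
    3: (0.30, 1.05),  # vegetation
}
DEFAULT = (0.30, 1.05)

def day_to_night(day_rgb: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Apply an exposure drop and cool tint to each scene element in turn."""
    out = day_rgb.astype(np.float32)
    for cls in np.unique(labels):
        gain, blue = CLASS_PARAMS.get(int(cls), DEFAULT)
        mask = labels == cls
        out[mask] *= gain      # darken this element
        out[mask, 2] *= blue   # shift this element toward blue
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    day = np.asarray(Image.open("day.png").convert("RGB"))
    labels = np.asarray(Image.open("labels.png"))  # same H x W as `day`
    Image.fromarray(day_to_night(day, labels)).save("night.png")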

