
dc.contributor.author: Xie, Zhenyu
dc.contributor.author: Huang, Zaiyu
dc.contributor.author: Zhao, Fuwei
dc.contributor.author: Dong, Haoye
dc.contributor.author: Kampffmeyer, Michael
dc.contributor.author: Liang, Xiaodan
dc.date.accessioned: 2022-03-29T08:52:44Z
dc.date.available: 2022-03-29T08:52:44Z
dc.date.issued: 2021
dc.description.abstract: Image-based virtual try-on is one of the most promising applications of human-centric image generation due to its tremendous real-world potential. Yet, as most try-on approaches fit in-shop garments onto a target person, they require the laborious and restrictive construction of a paired training dataset, severely limiting their scalability. While a few recent works attempt to transfer garments directly from one person to another, alleviating the need to collect paired datasets, their performance is impacted by the lack of paired (supervised) information. In particular, disentangling style and spatial information of the garment becomes a challenge, which existing methods either address by requiring auxiliary data or extensive online optimization procedures, thereby still inhibiting their scalability. To achieve a scalable virtual try-on system that can transfer arbitrary garments between a source and a target person in an unsupervised manner, we thus propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on. Specifically, to disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module for successfully retaining garment texture and shape characteristics. Guided by the source person's keypoints, the patch-routed disentanglement module first decouples garments into normalized patches, thus eliminating the inherent spatial information of the garment, and then reconstructs the normalized patches into the warped garment complying with the target person's pose. Given the warped garment, PASTA-GAN further introduces novel spatially-adaptive residual blocks that guide the generator to synthesize more realistic garment details. Extensive comparisons with paired and unpaired approaches demonstrate the superiority of PASTA-GAN, highlighting its ability to generate high-quality try-on images when faced with a large variety of garments (e.g., vests, shirts, pants), taking a crucial step towards real-world scalable try-on. [en_US]
dc.description: Source at https://proceedings.neurips.cc/paper/2021/hash/151de84cca69258b17375e2f44239191-Abstract.html [en_US]
dc.identifier.citation: Xie Z, Huang Z, Zhao F, Dong H, Kampffmeyer M, Liang X. Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN. Advances in Neural Information Processing Systems. 2021 [en_US]
dc.identifier.cristinID: FRIDAID 1941710
dc.identifier.issn: 1049-5258
dc.identifier.uri: https://hdl.handle.net/10037/24616
dc.language.iso: eng [en_US]
dc.publisher: Neural Information Processing Systems Foundation [en_US]
dc.relation.journal: Advances in Neural Information Processing Systems
dc.relation.projectID: Norges forskningsråd: 315029 [en_US]
dc.relation.projectID: Norges forskningsråd: 303514 [en_US]
dc.relation.projectID: Norges forskningsråd: 309439 [en_US]
dc.rights.accessRights: openAccess [en_US]
dc.rights.holder: Copyright 2021 The Author(s) [en_US]
dc.title: Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [en_US]
dc.type.version: publishedVersion [en_US]
dc.type: Journal article [en_US]
dc.type: Tidsskriftartikkel [en_US]
dc.type: Peer reviewed [en_US]

