dc.contributor.author | Zhao, Fuwei | |
dc.contributor.author | Xie, Zhenyu | |
dc.contributor.author | Kampffmeyer, Michael | |
dc.contributor.author | Dong, Haoye | |
dc.contributor.author | Han, Songfang | |
dc.contributor.author | Zheng, Tianxiang | |
dc.contributor.author | Zhang, Tao | |
dc.contributor.author | Liang, Xiaodan | |
dc.date.accessioned | 2022-03-28T12:32:11Z | |
dc.date.available | 2022-03-28T12:32:11Z | |
dc.date.issued | 2022-02-28 | |
dc.description.abstract | Virtual 3D try-on can provide an intuitive and realistic view for online shopping and has huge potential commercial value. However, existing 3D virtual try-on methods mainly rely on annotated 3D human shapes and garment templates, which hinders their application in practical scenarios. 2D virtual try-on approaches provide a faster alternative for manipulating clothed humans, but lack a rich and realistic 3D representation. In this paper, we propose a novel Monocular-to-3D Virtual Try-On Network (M3D-VTON) that builds on the merits of both 2D and 3D approaches. By integrating 2D information efficiently and learning a mapping that lifts the 2D representation to 3D, we make the first attempt to reconstruct a 3D try-on mesh taking only the target clothing and a person image as inputs. The proposed M3D-VTON includes three modules: 1) the Monocular Prediction Module (MPM), which estimates an initial full-body depth map and accomplishes 2D clothes-person alignment through a novel two-stage warping procedure; 2) the Depth Refinement Module (DRM), which refines the initial body depth to produce more detailed pleat and face characteristics; 3) the Texture Fusion Module (TFM), which fuses the warped clothing with the non-target body parts to refine the results. We also construct a high-quality synthesized Monocular-to-3D virtual try-on dataset, in which each person image is associated with a front and a back depth map. Extensive experiments demonstrate that the proposed M3D-VTON can manipulate and reconstruct the 3D human body wearing the given clothing with compelling details, and is more efficient than other 3D approaches. | en_US |
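The abstract's three-module pipeline (MPM → DRM → TFM) can be sketched as the following minimal data-flow outline. This is a hypothetical illustration of how the stages described above compose, not the authors' implementation: all function bodies are placeholders, and the real modules are learned networks with a two-stage warping procedure.

```python
import numpy as np

# Hypothetical sketch of the M3D-VTON data flow described in the abstract.
# Only the module ordering (MPM -> DRM -> TFM) and their inputs/outputs
# come from the text; every function body below is a stand-in placeholder.

def monocular_prediction(person_img, clothing_img):
    """MPM: estimate an initial full-body depth map and align the target
    clothing to the person (stand-ins: flat depth, identity warp)."""
    initial_depth = np.zeros(person_img.shape[:2], dtype=float)
    warped_clothing = clothing_img
    return initial_depth, warped_clothing

def depth_refinement(initial_depth, person_img):
    """DRM: refine the initial body depth to recover pleat and face
    detail (stand-in: no-op refinement)."""
    return initial_depth.copy()

def texture_fusion(warped_clothing, person_img):
    """TFM: fuse the warped clothing with the non-target body parts
    (stand-in: naive pixel average)."""
    return (warped_clothing.astype(float) + person_img.astype(float)) / 2.0

def m3d_vton(person_img, clothing_img):
    """Compose the three stages; depth + fused texture together would be
    lifted to a textured 3D try-on mesh in the actual method."""
    depth, warped = monocular_prediction(person_img, clothing_img)
    refined_depth = depth_refinement(depth, person_img)
    texture = texture_fusion(warped, person_img)
    return refined_depth, texture

person = np.full((256, 192, 3), 128, dtype=np.uint8)
cloth = np.full((256, 192, 3), 200, dtype=np.uint8)
depth, tex = m3d_vton(person, cloth)
print(depth.shape, tex.shape)  # (256, 192) (256, 192, 3)
```

Note that the paper additionally predicts a back depth map (each dataset image carries front and back depth), which this front-only sketch omits.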
dc.description | © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
dc.identifier.citation | Zhao F, Xie Z, Kampffmeyer M, Dong H, Han S, Zheng T, Zhang T, Liang X. M3D-VTON: A Monocular-to-3D Virtual Try-On Network. IEEE International Conference on Computer Vision (ICCV). 2021 | en_US |
dc.identifier.cristinID | FRIDAID 1941703 | |
dc.identifier.doi | 10.1109/ICCV48922.2021.01299 | |
dc.identifier.issn | 1550-5499 | |
dc.identifier.issn | 2380-7504 | |
dc.identifier.uri | https://hdl.handle.net/10037/24603 | |
dc.language.iso | eng | en_US |
dc.publisher | IEEE | en_US |
dc.relation.journal | IEEE International Conference on Computer Vision (ICCV) | |
dc.relation.projectID | Norges forskningsråd: 315029 | en_US |
dc.relation.projectID | Norges forskningsråd: 303514 | en_US |
dc.relation.projectID | Norges forskningsråd: 309439 | en_US |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2021 The Author(s) | en_US |
dc.title | M3D-VTON: A Monocular-to-3D Virtual Try-On Network | en_US |
dc.type.version | acceptedVersion | en_US |
dc.type | Journal article | en_US |
dc.type | Tidsskriftartikkel | en_US |
dc.type | Peer reviewed | en_US |