WP2. Simulation algorithms for avatar-garment interactions

WP2 focuses on the selection of the 3D garments and the means required to “dress” the avatars with these garments. In addition, a machine learning model will be built to approximate the physical simulation of the garments across a range of poses.

Description of work

T2.1 – Generate the 3D garment collection (M5-M18)

In this task, the appropriate 3D garments will be selected from ODLO’s collection. Overall, the selection will be based on the user requirements defined in T6.1 while simultaneously aiming at a wide variety of fabric properties. Moreover, where necessary, existing garments will be modified to meet the user requirements and to fit the virtual models correctly.

T2.2 – Simulate the avatar-garment interactions in professional software (M5-M18)

The work in this task is twofold. First, ODLO will manually simulate the garments on a small number of poses. Pinching and draping a garment in the Browzwear software [BROWZ] requires manual skill, and the result simulates how the garment would look in a real photoshoot. This output will be used in use cases 1 (fashion designer app) and 2 (social app). Second, Metail will create a way to feed different body shapes and poses into Browzwear for automatic simulation, producing ground-truth data for T2.4.

T2.3 – Automatically simulate garment-avatar interactions for videos (M5-M24)

In this task, Metail will research video segmentation approaches that separate a short video into individual frames. These frames will be used as input to generate an avatar with a series of poses, one matching each frame. The avatar will then be imported into Browzwear for garment simulation. Once the frames have been dressed and styled in Browzwear, they will be recombined to produce the same video with the person wearing digital garments.
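The frame-by-frame pipeline described above can be sketched as follows. This is a minimal illustration only: the function names `estimate_pose` and `simulate_garment_on_frame` are hypothetical placeholders for the pose-matching and Browzwear simulation steps, which are not part of this sketch.

```python
import numpy as np

def estimate_pose(frame):
    # Placeholder for per-frame pose estimation (hypothetical);
    # in the actual pipeline this would drive avatar posing.
    return None

def simulate_garment_on_frame(frame, pose):
    # Placeholder for the per-frame dressing step (avatar generation
    # plus Browzwear garment simulation in the actual pipeline);
    # here it simply returns the frame unchanged.
    return frame

def dress_video(video):
    """Split a video into frames, dress each frame, and recombine.

    video: (T, H, W, 3) uint8 array of T frames.
    Returns a (T, H, W, 3) array with the dressed frames in order,
    i.e. the same video with digital garments applied per frame.
    """
    dressed = [simulate_garment_on_frame(f, estimate_pose(f)) for f in video]
    return np.stack(dressed)
```

The key design point is that each frame is processed independently, so the per-frame simulation step can be parallelised before the frames are reassembled in their original order.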

T2.4 – Create a machine learning model to predict a game engine simulation approximation of the physics model (M5-M24)

Research in this area is well developed, and several possible approaches will have to be investigated. In principle, Metail will start from frames of video of digital garments exported as OBJ files and will produce a model that, for a given pose, can generate how the garment will drape. The approach will either be based on a 4D ML approach similar to the Skinned Multi-Person Linear model (SMPL) (Loper et al., 2015) but built on Metail proprietary technology, or on rigging and posing the garments and combining this input with the ground-truth data to model the output garment shape.
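The rigging-and-posing route mentioned above rests on linear blend skinning (LBS), the core deformation step in SMPL-style models: each garment vertex is attached to the skeleton's joints with a set of weights, and its posed position is the weighted blend of the per-joint rigid transforms. A minimal NumPy sketch of that step (the variable names and the pure-rigid formulation are illustrative assumptions, not Metail's actual model):

```python
import numpy as np

def linear_blend_skinning(vertices, weights, rotations, translations):
    """Pose garment vertices with linear blend skinning (LBS).

    vertices:     (V, 3) rest-pose garment vertex positions
    weights:      (V, J) per-vertex skinning weights (each row sums to 1)
    rotations:    (J, 3, 3) per-joint rotation matrices
    translations: (J, 3) per-joint translations
    Returns the (V, 3) posed vertex positions.
    """
    # Apply every joint's rigid transform to every vertex: (J, V, 3).
    per_joint = np.einsum('jab,vb->jva', rotations, vertices) \
        + translations[:, None, :]
    # Blend the per-joint results with the skinning weights: (V, 3).
    return np.einsum('vj,jva->va', weights, per_joint)
```

In a learned model along the lines described in the text, the rest-pose garment and the skinning weights would be fitted (or a pose-dependent corrective offset added) so that the LBS output matches the Browzwear ground-truth drapes from T2.2.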