A body and garment creation method for an Internet based virtual fitting room.
Dimitris Protopsaltou, Christiane Luible,
Marlene Arevalo, Nadia Magnenat-Thalmann
MIRALab CUI, University of Geneva, CH-1211, Switzerland
{protopsaltou, luible, arevalo, thalmann}@miralab.unige.ch
Abstract
In this paper we present a new methodology for producing 3D clothes, with realistic
behavior, providing users with an apposite feel for the garment’s details. The 3D
garments that are produced, using our methodology, originate from 2D CAD
patterns of real garments and correspond to the regular sizes that can be found on a
shop floor.
Our aim is to build a compelling, interactive and highly realistic virtual shop, where
visitors can choose between many different types of garment designs and proceed to
simulate these garments on virtually animated bodies. By merging the approach
often used by the fashion industry in designing clothes with our own methodology
for creating dressed virtual humans, we present a new technique providing trouble-free and straightforward garment visualization.
The entire process starts from the creation of virtual bodies (either male or female),
using standard measurements, which form the basis for garment modeling. Using
splines, the 2D garment patterns are created and then seamed together around a
virtual human body, providing the initial shape. A simulation is made using the
seamed garment by applying physical parameters based on real fabric properties.
Once the garment has been created, a real time platform, which has been embedded
inside a web browser, is used as an interface to the Internet. We extend the web
interface to add interactivity and the ability to dynamically change textures, clothes,
body measurements, animation sequences and certain features of the virtual
environment.
The methodology does not aim only to build a virtual dressing room, where
customers can view garments fitted onto their own virtual bodies, but also to
visualize made-to-measure clothes, animate them, visualize the cloth behavior and
add interactivity. The entire Virtual Try-On experience is a process of targeting the
clientele, designing the clothing collection, dressing the virtual models and then
using the web as a virtual shop floor.
Keywords: Virtual Try On, 3D clothes, physical behavior, generic bodies
1. Introduction
The Internet has emerged as a compelling channel for the sale of apparel. Online apparel
sales exceeded $1 billion in 1999 and are expected to skyrocket to over $22 billion
by 2004 (Forrester Research) [1]. While a significant amount of apparel is sold over
the Internet, this still represents less than 1% of all apparel sold in the
United States and significantly lags Internet penetration in other consumer goods
markets (e.g. books and music). A number of recent studies identified the causes of
consumer hesitancy. Online shoppers were reluctant to purchase apparel online in
1999 because they could not try on the items. Of particular note are the
consumer’s overwhelming concern with fit and correct sizing, concerns about having
to return garments, and the inability to fully evaluate the garment (quality, details, etc.)
[2].
Consumers that purchase apparel online today base their purchase and size-selection
decisions mostly on 2D photos of garments and sizing charts. Recognizing the
insufficiency of this customer experience, e-tailers have begun to implement
improved functionalities on their sites. Recently introduced capabilities allow the
customer to view items together, such as a blouse and a skirt, enabling the mix and
match of color/texture combinations, and zoom technology, to give the customer a
feel for garment details. LandsEnd.com [3] uses My Virtual Model, which provides
a virtual mannequin, adjusted to the shopper's proportions. In the same manner,
Nordstrom [4] is using 3D technology from California based 3Dshopping.com,
which offers 360 degree viewing, enabling complete rotation of the apparel item.
Even with these improvements in product presentation, a number of things can go
wrong when the consumer pulls the apparel item out of the box. Although there are a
number of solutions available, the problem of a realistic “Virtual mirror” still
remains one of the main impediments. The most common problems include poor fit,
bad drape, or unpleasant feel while wearing the item, or surprise as to the color of
the garment. Customer dissatisfaction in any of these areas drives returns, a costly
occurrence for e-tailers, and creates lost customer loyalty, an even more costly
proposition.
Our work proposes an implementation of a simple and fast cloth simulation system
that adds to the Virtual Fitting Room the feature of realistic cloth behaviour,
something that is missing today from the existing Virtual Try On (VTO) solutions on
the market. Macy's Passport 99 Fashion Show [5] is one of the highest-quality
VTO show rooms created so far. The primary benefit of Macy’s VTO
was to introduce dressed bodies animated in real time on the web. Although user
interactivity was the primary focus, the actual content lacks realism: it
is highly optimized and, while optimal for a web application, real cloth
behavior is not visible.
Besides realism, modern applications require cloth simulation to accommodate
modern design and visualization processes for which interactivity and real-time
rendering are the key features. We propose an adaptation of Macy’s approach to web
visualization that provides a decent cross-platform and real-time rendering solution.
However this defines major constraints for the underlying simulation methods that
should provide high-quality results in very time-constrained situations, and therefore
with the minimal required computation. We additionally propose a scheme that
brings to the cloth exact fit with the virtual body and physical behavior to fully
evaluate the garment. Our work uses a practical implementation of a simple and fast
cloth simulation system based on implicit integration [6].
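As an illustration of why implicit integration permits the large time steps that real-time cloth needs, the following toy sketch (our own, not the MIRACloth solver of [6]) applies a backward Euler step to a 1D chain of masses and springs; the solve stays stable even for stiff springs and a time step far beyond the explicit limit:

```python
import numpy as np

def implicit_euler_step(x, v, h, m, k):
    """One backward Euler step for a 1D chain of n equal masses m
    joined by springs of stiffness k (rest length 0), top mass pinned.
    Solves (M - h^2 K) v_new = M v + h f(x), then x_new = x + h v_new,
    where K = df/dx is the (tridiagonal) stiffness matrix."""
    n = len(x)
    K = np.zeros((n, n))
    for i in range(n - 1):
        K[i, i] -= k
        K[i + 1, i + 1] -= k
        K[i, i + 1] += k
        K[i + 1, i] += k
    f = K @ x + m * 9.81           # spring forces plus gravity (along +x)
    A = m * np.eye(n) - h * h * K  # implicit system matrix
    v_new = np.linalg.solve(A, m * v + h * f)
    v_new[0] = 0.0                 # crude pin of the top mass
    x_new = x + h * v_new
    return x_new, v_new
```

Even with a large step the chain settles into a stable hanging shape, which is precisely the property that makes implicit schemes attractive for interactive cloth.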
Throughout the evolution of cloth simulation techniques, the focus was primarily on
addressing realism through the accurate reproduction of the mechanical features of
fabric materials. The early models, developed a decade ago, had to accommodate
very limited computational power and display devices, and therefore were
geometrical models only meant to reproduce the geometrical features of
deforming cloth [7]. Then real mechanical simulation took over, with accurate cloth
models simulating the main mechanical properties of fabric. While some models,
mostly intended for computer graphics, aimed to simulate complex garments used
for dressing virtual characters [8, 9, 10], other studies focused on the accurate
reproduction of mechanical behavior, using particle systems [11, 12] or finite
elements [13]. Despite all these developments, these techniques remain
computationally expensive, which limits their use in the highly interactive and
real-time applications brought about by rapidly spreading multimedia technologies.
While highly accurate methods such as finite elements are not suitable for such
applications, development is now focusing on approximate models that can render,
using minimal computation, approximate but realistic results in a very robust way.
This paper is composed of two main sections. The first section is a description of the
research methodology we used to develop the complete chain of processes for the
making of bodies and clothes for a Virtual Fitting Room. The second section is a
case study presenting the Virtual Try On developed by MIRALab.
2. Research Methodology
2.1 Generic Bodies Approach
For virtual bodies we mainly consider two issues. The first is that bodies
should correspond to real human body shapes, so that a user can relate his own
body to the virtual body on the screen. The second is that virtual bodies
should have an appropriate 3D representation for the purpose of cloth simulation and
animation.
The first phase of our methodology is based entirely on the generation of human
body models that are immediately animatable, by modification of an existing
reference generic model. This approach tends to be popular because of the expense
of recovering 3D geometry. Based on adding details or features to an existing
generic model, it mainly concerns the individualized shape and visual realism using
high-quality textures. We propose the creation of five generic bodies for each sex,
every generic body corresponding to a different standard size: Extra Small, Small,
Medium, Large, Extra Large (plus a top-model shape).
However, the definition of body shape depends on many factors, not simply
the standard sizes; it is a complex, application-dependent task. General
anthropometric classifications (somatotyping) are based on specific sets of
measurements taken with specialised instruments (the ordinary tape being just one of
them). Many of these measurements relate to the physical identification of
anatomical landmarks. Existing descriptions of body shapes in the specific
application domain (garment design and fitting) are predominantly qualitative (based
on the perception of a human body by experts, e.g. fashion designers or tailors),
or quantitative, based on empirical relationships between body shapes, patterns
(basic blocks) and garment drape. The primary data used for the character modeling
of the generic bodies were collected from sizing survey results prepared by the
Hohenstein Institute [14].
Figure 1 – Reference generic body
The reference generic model (Figure 1) is sliced into a set of contours with each
contour corresponding to the set of the measurements as described in the table below
(Table 1). To generate a body shape according to the sizes specified, we warp each
contour to fit the given measurement and then warp the mesh to interpolate the
warped contours. In our approach we take into account only the primary
measurements, however manipulation of the secondary body measurements can be
facilitated following the same methodology described above.
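The contour-warping idea can be sketched as follows (an illustrative toy, not the actual MIRALab implementation): each body slice is treated as a closed polygon that is scaled about its centroid until its perimeter matches the target girth; the mesh vertices between slices would then be interpolated.

```python
import numpy as np

def warp_contour(contour, target_girth):
    """Scale a closed 2D contour (n x 2 array: points on one body slice)
    about its centroid so its perimeter equals target_girth.
    Real warping would also preserve local shape features."""
    closed = np.vstack([contour, contour[:1]])  # close the loop
    perimeter = np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))
    centroid = contour.mean(axis=0)
    return centroid + (contour - centroid) * (target_girth / perimeter)

# Example: warp a circular waist slice from girth 65 (XS/34) to 72 (M/38), in cm.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
waist = (65 / (2 * np.pi)) * np.column_stack([np.cos(theta), np.sin(theta)])
warped = warp_contour(waist, 72.0)
```

The waist girths used in the example are taken from Table 1; a uniform scale is the simplest warp satisfying the measurement constraint.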
Female Standard Body Sizes (measurements in cm)

                               XS/34   S/36    M/38    L/40    XL/42
Primary Measurements
1   Height                     168     168     168     168     168
2   Bust girth                 80      84      88      92      96
3   Under-bust girth           71      74      77      80      84
4   Waist girth                65      68      72      76      80
5   Hip girth                  90      94      97      100     103
6   Inside leg length          78.3    78.3    78.1    77.9    77.7
7   Arm length                 59.6    59.8    60      60.2    60.4
8   Neck-base girth            34.8    35.4    36      36.6    37.2
Secondary Measurements
9   Outside leg length         106     106     106     106     106
10  Back waist length          41.4    41.4    41.6    41.8    42
11  Back width                 33.5    34.5    35.5    36.5    37.5
12  Shoulder slope             72      74      76      78      80
13  Shoulder length            12      12.1    12.2    12.3    12.4
14  Front waist length         41.9    42.8    43.7    44.6    45.5
15  Girth 8cm below waist      81      84      88      92      96
16  Waist to Hips              21      21      21      21      21
17  Thigh girth                52      53.8    55.6    57.4    59.2
18  Head girth                 55.4    55.6    55.8    56      56.2
19  Upper arm length           34.8    35      35.2    35.4    35.6
20  Upper arm girth            26.2    26.8    28      29.2    34
21  Wrist girth                15      15.4    15.8    16.2    16.6
22  Neck shoulder point        25.5    26.5    27.5    28.5    29.5
23  Knee height                45      45      45      45      45
Table 1 – Body Measurements of standard female bodies [14]
Figure 3 – Overview of the standard female (left) / male (right) bodies modeled according to sizes
The standard male bodies are created using the same methodology as in the case of
the female bodies (Figure 3).
2.2 Animation of Generic Bodies
Human Motion Capture techniques [15] require strong competence in Virtual
Human modelling and animation, as well as thorough knowledge of the hardware and
software pipelines that transform the raw data measured by the hardware into
parameters suited to virtual models (e.g. anatomical angles). We used marker-based
Optical Motion Capture [16, 17].
The typical pipeline production for optical motion capture begins with camera
volume calibration, where the relation between all the cameras is computed using
recording of dedicated calibration props (assuming the volume set-up [where the
cameras are properly aimed at the captured volume] has already been achieved).
Then the subject, wearing optical markers (e.g. spheres of 25mm diameter for body
capture), is recorded in a static posture so that the operator can manually label all the
markers. The operator gives/selects names for each marker in this static pose,
usually using anatomically relevant labels (e.g. LKNE for the left knee marker). The
information gathered at this stage will be used by the system to label the trajectories
automatically during post-processing of dynamic movements.
Figure 4 – Technical skeleton
The post-processing of motion sequences primarily involves the following two
stages: trajectory reconstruction and labelling of markers/trajectories. Once those
two steps have been completed, it is possible to visualize the technical skeleton
(Figure 4) obtained by drawing segments between the marker positions. From this
information the subject skeleton and its animation are derived [18].
It is relatively easy to construct the subject skeleton (or its approximation) from the
markers. However, the problem becomes much more complex when considering the
body mesh and its deformation parameters. Skin deformation (or skinning) is
based on three interdependent ingredients: first, the skeleton topology and
geometry; second, the mesh or surface of the body; and third, the deformation’s own
parameters (very often consisting of vertex-level ratios describing the scope of
deformation in relation to joints). Good skinning results are achieved by careful
design of these three components, where the positions of joints inside the body surface
envelope and the attachment ratios are very important issues.
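The per-vertex weighting scheme described above is, in essence, linear blend skinning; a minimal sketch (our own illustration, not the production code):

```python
import numpy as np

def skin_vertex(rest_pos, bone_mats, weights):
    """Linear blend skinning for a single vertex: blend the bones' 4x4
    transforms (assumed already composed with the inverse bind pose)
    by the vertex weights, which must sum to 1."""
    p = np.append(rest_pos, 1.0)  # homogeneous point
    blended = sum(w * M for w, M in zip(weights, bone_mats))
    return (blended @ p)[:3]

def rot_z(a):
    """4x4 rotation about the z axis."""
    c, s = np.cos(a), np.sin(a)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

v = np.array([1.0, 0.0, 0.0])
# 100% weight: the vertex follows the bone rigidly, as noted in the text.
rigid = skin_vertex(v, [rot_z(np.pi / 2)], [1.0])
# 50/50 weights between a fixed bone and the rotated one blend the result.
blend = skin_vertex(v, [np.eye(4), rot_z(np.pi / 2)], [0.5, 0.5])
```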
2.3 Cloth Simulation and Animation
For the simulation of clothes we use MIRACloth, the 3D garment simulator
developed at the University of Geneva. This engine includes a mechanical model,
a collision engine, and rendering and animation modules. The algorithms have
been integrated in a 3D design framework allowing the management of complex
garment objects in interaction with animated virtual characters [19]. This integration
has been carried out in the form of a 3D Studio Max plugin (Figure 5). The cloth
simulation process has two stages:
1.
The garment assembly stage, where the patterns are pulled together and
seamed around the body. This is a draping problem that involves obtaining a
rest position of the garment as quickly as possible.
Once the 2D patterns have been placed around the body, a mechanical simulation is
invoked, forcing the patterns to approach each other along the seaming lines. As a
result the patterns are attached and seamed at the borders as specified and attain a
shape influenced by the form of the body. Thus the garment is constructed around the
body (Figure 5). The seaming process relies on a simplified mechanical simulation,
where the seam elastics pull the matching pattern borders together. Rather than
trying to simulate the exact behavior of fabric, the simplified model optimizes
seaming speed using parameters adapted for that purpose, that is to say no gravity,
no friction, etc. [20]
2.
The garment animation stage, where the motion of the garment is computed
as the body is animated. The dynamical motion of the cloth is important
here.
Figure 5 – 3D simulation of cloth
The mechanical parameters of the actual fabric are set, as well as gravity, in order to
put the cloth into real mechanical conditions. Animation of a garment here
pertains to its movement along with the virtual body, accomplished through
collision response and friction with the body surface. At this stage the mechanical
parameters are set and tuned with visual feedback; their settings may differ from
those used during seaming and garment construction. The mechanical simulation
then gives the animation of the garment on the virtual body.
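The seaming stage described above can be sketched as follows (a deliberately unphysical toy in the spirit of the simplified model of [20], not the actual MIRACloth code): seam elastics pull matching border vertices of two patterns together, with no gravity or friction, since the only goal is a fast rest shape.

```python
import numpy as np

def seam_patterns(border_a, border_b, stiffness=0.5, iters=100):
    """Pull matching border vertices of two flat patterns together with
    'seam elastics'.  No gravity, no friction: the only goal is to
    reach a rest shape quickly."""
    a = border_a.astype(float).copy()
    b = border_b.astype(float).copy()
    for _ in range(iters):
        pull = stiffness * (b - a)  # elastic pull along each seam pair
        a += pull
        b -= pull
    return a, b

# Two pattern borders 10 cm apart meet along the seam line x = 5.
a = np.column_stack([np.zeros(5), np.arange(5.0)])
b = np.column_stack([np.full(5, 10.0), np.arange(5.0)])
a2, b2 = seam_patterns(a, b)
```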
The animation parameters, and particularly the mechanical simulation parameters,
are adjusted through the parameters panel, which features two categories:
environment (global parameters) and object (local parameters). The global
simulation parameters include gravity, collision distance and collision detection
modes. The local parameters, the mechanical properties of objects, include
elasticity, surface density, bending rigidity, friction values, Poisson coefficient,
as well as viscosity and non-linear elasticity values.
At this stage we aim to quantify precisely the mechanical parameters that are relevant
for the perceptual quality of simulated cloth animation. Starting from the
measured parameters, additional parameters (viscosity, plasticity, non-linearity) are
also investigated in order to match the simulated deformations with the real values
(Tables 2, 3). The experiments involve a 40x40 cm fabric square held by the two
corners opposite the attachment line, at a distance of 5 cm from the line (the edge
length), giving the fabric an initial curved loop shape. The experiments were
validated by performing the constrained fall tests and comparing them to reality
using two criteria: the time taken for the fabric to reach the vertical position in its
initial motion, and the damping of the residual motion [20].
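The two validation criteria can be illustrated on synthetic data (the angle curve below is hypothetical, not a measured fabric trajectory):

```python
import numpy as np

def fall_test_metrics(t, angle):
    """From a deviation-from-vertical angle recorded over time, extract
    the two comparison criteria: time of the first vertical crossing,
    and the damping ratio of successive residual-motion peaks."""
    cross = np.where(np.diff(np.sign(angle)) != 0)[0]
    if cross.size == 0:
        return None, None
    t_vertical = t[cross[0] + 1]
    a = np.abs(angle[cross[0]:])
    peaks = [a[i] for i in range(1, len(a) - 1)
             if a[i] >= a[i - 1] and a[i] > a[i + 1]]
    damping = peaks[1] / peaks[0] if len(peaks) > 1 else None
    return t_vertical, damping

# Hypothetical damped swing: angle(t) = exp(-t) * cos(4 t).
t = np.linspace(0, 5, 2001)
angle = np.exp(-t) * np.cos(4 * t)
t_vert, damp = fall_test_metrics(t, angle)
```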
PARAMETERS              Linen               Cotton                Tencel
ELASTICITY
 Weft elasticity        50 N.m^-1           16.67 N.m^-1          25 N.m^-1
 Warp elasticity        50 N.m^-1           16.67 N.m^-1          25 N.m^-1
 Shear G                55 N.m^-1           60 N.m^-1             86 N.m^-1
 Weft bending           208.1 10^-6 N.m     10.5 10^-6 N.m        25 10^-6 N.m
 Warp bending           153.9 10^-6 N.m     6.7 10^-6 N.m         14.8 10^-6 N.m
VISCOSITY
 Weft elasticity        50 10^-2 N.m^-1.s   16.67 10^-2 N.m^-1.s  25 10^-2 N.m^-1.s
 Warp elasticity        50 10^-2 N.m^-1.s   16.67 10^-2 N.m^-1.s  25 10^-2 N.m^-1.s
 Shear G                55 10^-2 N.m^-1.s   60 10^-2 N.m^-1.s     86 N.m^-1
 Weft bending           208.1 10^-6 N.m.s   10.5 10^-6 N.m.s      25 10^-6 N.m.s
 Warp bending           153.9 10^-6 N.m.s   6.7 10^-6 N.m.s       14.8 10^-6 N.m.s
DENSITY                 310 10^-3 Kg.m^-2   310 10^-3 Kg.m^-2     327 10^-3 Kg.m^-2

Table 2 – Example of fabric parameters (see note 1)
PARAMETERS
GRAVITY                          10 m.s^-2
AERODYNAMIC VISCOSITY
 ISOTROPIC (Damping)             0.1 N.m^-3.s
 NORMAL (Flowing)                1 N.m^-3.s
PLASTICITY (f/h)                 1 10^2 m^-1 (f = 10 m^-1 and h = 0.1)

Table 3 – External parameters (see note 2)
Figure 6 – Example of fabric parameters: Linen/Cotton/Tencel
The effects of additional parameters (viscosity, plasticity, non-linearity) are further
investigated through simulation and perceptual assessment. Finally, complete
garments are simulated with the cloth simulation system, integrating the accurate
model found relevant in the previous study.
Notes:
1. Metric elasticity: measurement of the fabric elongation elasticity (N.m^-1)
   – Weft and warp elasticity: elasticity along the weft and warp directions
   – Shear elasticity: elasticity for a shearing deformation between the weft and warp directions
   Bending elasticity: measurement of the fabric bending elasticity (N.m)
   – Weft and warp bending: bending along the weft and warp directions
   Viscosity parameters: defined for each elastic parameter
   Density: mass per surface unit of the fabric (Kg.m^-2)
2. Gravity: nominal acceleration of objects left at rest (9.81 m.s^-2, approximated as 10 m.s^-2)
   Aerodynamic viscosity: aerodynamic force exerted on the fabric per surface unit and per velocity unit of the difference between the fabric speed and the air speed:
   – Normal (Flowing: N.m^-3.s) and Isotropic (Damping: N.m^-3.s) components relative to the orientation of the fabric surface
2.4 Web Interface and User Interactivity
There is a considerable number of 3D technologies that can be used for the Virtual
Fitting Room implementation. Of these, VRML (Virtual Reality Modeling
Language) [21] is the most established and widely used (it is an ISO standard). Its
latest version, VRML97, supports animation and advanced interaction. We used
Shout3D [22], a 3D player/viewer implemented in Java, taking as input VRML
exports from the cloth simulator.
The integrated solution is a cross-platform application/viewer accessible from nearly
all web browsers. Such viewers provide extensibility via APIs, which means that
additional desired features can be added programmatically. Shout3D is, at bottom, a
Java class library whose methods constitute the Shout3D Application Programming
Interface (API). The primary means of utilizing the API is through direct Java
programming: we extended the basic Shout3D Applet and Panel classes to create a
new class that implements interactivity using the methods in the API. The methods
that extend the basic class library mainly control the user's interaction with the 3D
viewer: input when the user requests a change of the clothes on the animated body,
the body size, the background images or the scene rotation. This approach is
suitable for commercial-quality work that will run reliably on all platforms. It is
also possible to use JavaScript to call the methods in the API; the latter approach,
while convenient for testing ideas, will not generally be satisfactory for commercial
work and will not be functional on all platforms.
The Virtual Try On is composed mainly of two web pages: one contains the
3D viewer and the other the cloth catalogue. After initialization, the 3D viewer
(MIRALab Generic Viewer) loads the default 3D scene where the dressed models will
appear. The generic viewer is an extended version of the basic Shout3D applet. Most
interactivity programming is implemented in an extended Panel class, which can
implement the DeviceObserver interface to handle mouse and keyboard
events from the user. These events are handled in a method named onDeviceInput().
The extended Shout3DPanel class can also make use of the render loop through the
RenderObserver interface. This interface consists of the onPreRender() and
onPostRender() methods, which are automatically called immediately before or
after each render cycle. Initialization duties, such as obtaining references to objects
in the scene, are handled by overriding the customInitialize() method of the
Shout3DPanel class.
The default scene loaded initially is composed of an empty VRML Transform node,
named “emptymodel”, that acts as the container into which the dressed bodies will
load. To implement such interactivity we control the values of class objects that
represent nodes in the 3D scene graph. For example, a visitor can manipulate a
geometric model only if the panel class has access to the Transform node controlling
that geometry. The panel class must therefore create a Transform class object
reference variable, and must obtain a reference to place in that variable. Once the
reference is obtained, the fields of the Transform node can be changed by means of
the reference. The primary means of obtaining such a reference is the
change_model() extended method of the miralab_generic class. This method takes
the DEF name “model” of the desired node in the VRML scene file as an argument,
and returns a reference to the Java class object. With most (but not all) browsers,
JavaScript in an HTML page can be used to call methods in the Shout3D API and
therefore implement interactivity and procedural animation without writing and
compiling extended Shout3DApplet and Shout3DPanel classes. In order to obtain
references to objects in the scene, we used the standard change_model() method of
the miralab_generic class. This method necessarily requires a reference to the
miralab_generic object.
Apart from the 3D viewer, an important part of the Virtual Try On is the catalogue of
the different clothes. We use one of the features introduced in Microsoft®
Internet Explorer 5.5: Dynamic HTML (DHTML) behaviors. DHTML
behaviors are components that encapsulate specific functionality or behavior on a
page. When applied to a standard HTML element on a page, a behavior enhances
that element's default behavior. As encapsulated components, behaviors provide
easy separation of script from content, which not only makes it easy to reuse code
across multiple pages but also improves the manageability of the page. The MPC
behavior is used to add both the multipage container and the individual page tabs;
we use it to create two clothes catalogues (male & female), as shown in Figure 7.
Individual page tabs implemented with DHTML help significantly in keeping the web page lightweight.
Figure 7 –The online cloth catalogue implemented with DHTML MPC behavior
3. Case Study – MIRALab’s Virtual Try On
3.1 Preparation of generic bodies
The generic model is composed of an H-Anim LoA 2 [23] skeleton hierarchy and a
skin mesh. The skin surface is a combination of a Poser character and body models
created at the University of Geneva. Proper skin attachment was essential for the
skeletal deformation. Attachment means assigning to each vertex of the mesh its
affecting bones and corresponding weights. To say that a vertex is "weighted" with
respect to a bone means that the vertex will move as the bone is rotated, in order to
stay aligned with it. At 100 percent weighting, for instance, the vertex follows the
bone rigidly. This method combines, for each vertex, the transformation matrices of
the bones according to their weights. Once the skin is properly attached to the
skeleton, transformation of a bone automatically derives the transformation of the
skin mesh [24].
To animate the virtual bodies we used Optical Motion Capture (the VICON system at
the University of Geneva). To obtain a realistic animation sequence, we hired two
professional fashion models (Guido Frauenrath & Sarah Stinson) to simulate a
catwalk (Figure 6). The captured animation from the models was applied to the
H-Anim skeleton to obtain the animated generic body.
Figure 8 – Motion Tracking with VICON(left) and H-ANIM skeleton (right)
3.2 Cloth creation and simulation
Garments are designed using the traditional 2D pattern approach. We use the
Modaris software [25] package to create the patterns. The functionality of the
software allows us to simulate the approach of a traditional cloth designer (Figure 9).
Figure 9 – 2D patterns creation import in CAD software
The patterns are then discretized into a triangular mesh. Following that, we
automatically place the planar patterns around the body with the use of reference
lines (Figure 10). These 2D patterns are constructed and imported into the
MIRACloth garment simulator.
Figure 10 – 2D patterns creation and positioning around generic body
We simulated three different types of clothing for the female bodies (skirt, pullover
and dress), each with different fabric properties:
• The skirt and the pullover were simulated with the cotton properties (Section 2.3), and
• The dress with the Tencel properties (Section 2.3).
(For the male bodies we applied the same methodology.)
Since the final models must be optimized for usage on the Internet, each garment
was composed of only 1000 polygons. A second step was performed to decrease the
weight of the textures, keeping their size to 512x512 pixels, in order not to overload
the video memory during real-time visualization on the web. The final output was
exported in VRML. 3DS Max, like all 3D animation packages, offers spline
interpolation. To approximate the smooth curvature and subtle speed changes of
spline-interpolated function curves, the 3DS Max VRML exporter provides a
sampling mechanism for linear keyframe animation: the greater the number of
frame samples, the closer the linear interpolated result approximates the spline-interpolated original.
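This sampling trade-off can be sketched as follows (our own illustration, independent of the actual exporter): baking a smooth ease curve into linear keyframes, the approximation error shrinks as the number of samples grows.

```python
import numpy as np

def sample_to_linear(curve, t0, t1, n_samples):
    """Bake a smooth function curve(t) into n_samples linearly
    interpolated keyframes, as a keyframe-sampling exporter would."""
    times = np.linspace(t0, t1, n_samples)
    return times, curve(times)

# Smoothstep ease curve: the linear bake converges to it as samples grow.
ease = lambda t: 3 * t**2 - 2 * t**3
dense = np.linspace(0.0, 1.0, 1000)
err = {}
for n in (5, 50):
    kt, kv = sample_to_linear(ease, 0.0, 1.0, n)
    err[n] = np.max(np.abs(np.interp(dense, kt, kv) - ease(dense)))
```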
3.3 Web Interface
The outcome of the overall methodology is an approach to online visualization and
immersion that lets any standard web browser display interactive 3D dressed bodies
without extra plugins or downloads. For the purposes of wide public Internet
access, the VRML97 output directly from the garment simulator (MIRACloth –
3DS Max plugin) is used, so it is possible to view it locally or across the network
without any special dedicated viewer. This allows large-scale diffusion of the
content over the WWW without increasing the cost of distribution. The models
performed satisfactorily in Internet Explorer, with a rendering performance of 20 to
35 frames per second. A snapshot is shown in Figure 11.
Figure 11 – The Virtual Try On: The online cloth catalogue (left)
and the 3D viewer/extended java class (right)
The models exported from the 3D Garment Simulator described before were
integrated in the Virtual Try On suite to offer a Virtual Fitting Room service. The
major considerations that determined the quality of the Virtual Fitting Room
application were:
• The high rendering realism: the geometry of the object was correctly perceived
to include all its desired features. Additionally, antialiasing techniques, smooth
shading and texture mapping were the most important factors that improved the
rendering quality.
• The high quality of the animation, which depends largely on the number of
frames that can be redrawn on the display per unit of time. (Correct motion
perception starts at 5 frames per second, while good animation contains at least
15 frames per second.)
• The interactive possibilities: how the user can move around the scene and
visualize the objects through modification of viewing parameters (spins, body
sizes, transform changes, etc.).
• The response time, which should be minimal. (The user is not willing to spend
more than a few minutes in the Virtual Fitting Room.) The whole process of
dressing the customer and displaying the animations was fast enough.
4. Conclusions
The high-end interactivity introduced by Macy’s fashion show was unparalleled
until now: they included virtual humans and several cloth catalogues, increasing the
viewer’s sense of participation and interaction.
We have introduced a new methodology to create, step by step, a Virtual Try On:
starting from the creation of standard bodies, animating them, dressing them, and
then creating an interface to make them available on the Internet.
Because it is based on realistic fabric properties, the behavior of our cloth simulation
is highly realistic, which is visibly evident when viewing our VTO and comparing
our results with previous work. We have succeeded in introducing a complete chain
of processes for making bodies and clothes optimized for Internet usage in a Virtual
Fitting Room. The bodies created with our approach correspond to real human body
shapes, as the body measurements were derived from anthropometric sizing surveys,
and they were modeled appropriately so as to be entirely animatable. Our approach
to designing the clothes was merged with the fashion house approach to address the
requirements of a cloth designer. Furthermore, we simulated the physical
parameters of the garments, aiming to give the user a feel for the cloth behavior that
is as close as possible to reality.
5. Future work
The next goal, for several researchers at MIRALab, is to dynamically change the
body measurements of a virtual human, in order to move from the existing dressed
bodies according to standard sizes to dressed bodies according to individual
measurements. We have decided to use standard open specifications and have opted
for the H-Anim format, which is part of the MPEG-4 and X3D specifications. The
main issues are the implementation of realistic deformations, as H-Anim bodies are
segmented, and the animation file format. We are currently working on an in-house
solution in the form of an API that allows for virtual human real-time deformation,
animation and integration with the 3D environment.
6. Acknowledgements
This work is supported by the IST European project ‘E-TAILOR’. We would like to
thank Pascal Volino and Frederic Cordier for the development of the MIRACloth
software, Tom Molet for the development of the motion tracking software, and our
partners in E-TAILOR for the business analysis of the market. We would also like to
thank Sabine Schechinger for her contribution to the making of the male body and
cloth collection. Special thanks are due to Chris Joslin, Pritweesh De and George
Papagiannakis for proofreading this document.
References
[1] Forrester Research, Apparel's On-line Makeover, Report, May 1999,
http://www.forrester.com/ER/Research/Report/0,1338,5993,00.html
[2] Brian Beck, Key Strategic Issues in Online Apparel Retailing, yourfit.com, 2000,
Version 1.0
[3] Welcome to My Virtual Model(TM), http://www.landsend.com/
[4] NORDSTROM, http://www.nordstrom.com
[5] Macy's Passport 99 Fashion Show
http://www.shoutinteractive.com/Fashion/index.html
[6] Volino P., Magnenat-Thalmann N., Implementing Fast Cloth Simulation with
Collision Response, Computer Graphics International 2000, June 2000.
[7] Weil J., "The Synthesis of Cloth Objects", Computer Graphics (SIGGRAPH’86
proceedings), Addison-Wesley, 24, pp 243-252, 1986.
[8] Yang Y., Magnenat-Thalmann N., "Techniques for Cloth Animation", New
trends in Animation and Visualisation, John Wiley & Sons Ltd, pp 243-256, 1991.
[9] Carignan M., Yang Y., Magnenat- Thalmann N., Thalmann D.,"Dressing
Animated Synthetic Actors with Complex Deformable Clothes", Computer Graphics
(SIGGRAPH’92 proceedings), Addison-Wesley, 26(2), pp 99-104, 1992.
[10] Volino P., Courchesne M., Magnenat-Thalmann N., "Versatile and Efficient
Techniques for Simulating Cloth and Other Deformable Objects", Computer
Graphics (SIGGRAPH’95 proceedings), Addison-Wesley, pp 137-144, 1995.
[11] Breen D.E., House D.H., Wozny M.J., "Predicting the Drape of Woven Cloth
Using Interacting Particles", Computer Graphics (SIGGRAPH’94 proceedings),
Addison-Wesley, pp 365-372, July 1994.
[12] Eberhardt B., Weber A., Strasser W., "A Fast, Flexible, Particle-System Model
for Cloth Draping", Computer Graphics in Textiles and Apparel (IEEE Computer
Graphics and Applications), pp 52-59, Sept. 1996.
[13] Eischen J.W., Deng S., Clapp T.G., "Finite-Element Modeling and Control of
Flexible Fabric Parts", Computer Graphics in Textiles and Apparel (IEEE
Computer Graphics and Applications), pp 71-80, Sept. 1996.
[14] HOHENSTEIN (DE), Workpackage 7 Input, Project No: IST-1999-10549,
Internal Project Consortium document, July 2001
[15] Molet, T., Aubel, A., Çapin, T. et al (1999), ANYONE FOR TENNIS?,
Presence, Vol. 8, No. 2, MIT press, April 1999, pp.140-156.
[16] Oxford Metrics (2000) Real-Time Vicon8 Rt. Retrieved 10-04-01 from:
http://www.metrics.co.uk/animation/realtime/realtimeframe.htm
[17] MotionAnalysis, (2000) Real-Time HiRES 3D Motion Capture System.
Retrieved 17-04-01 from:
http://www.motionanalysis.com/applications/movement/research/real3d.html
[18] Bodenheimer B., Rose, C., Rosenthal, S., & Pella, J. (1997). The Process of
Motion Capture: Dealing with the Data. Eurographics workshop on Computer
Animation and Simulation’97, Springer-Verlag Wien, 3-18.
[19] Volino P., Magnenat-Thalmann N., Comparing Efficiency of Integration
Methods for Cloth Simulation, Proceedings of CGI'01, Hong-Kong, July 2001.
[20] Volino P., Magnenat-Thalmann N., Virtual Clothing: Theory and Practice, ISBN
3-540-67600-7, Springer-Verlag, Berlin Heidelberg, New York, 2000.
[21] VRML97, The VRML97 The Virtual Reality Modeling Language International
Standard ISO/IEC 14772-1:1997, http://www.vrml.org/Specifications/VRML97/
[22] Polevoi R., Interactive Web Graphics with Shout3D, ISBN 0-7821-2860-2,
Copyright 2001 SYBEX Inc. Alameda
[23] Babski C., Thalmann D., A seamless Shape For HANIM Compliant Bodies
[24] Seo H., Cordier F., Philippon L., Magnenat-Thalmann N., Interactive Modelling
of MPEG-4 Deformable Human Body Models, Postproceedings Deform 2000,
Kluwer Academic Publishers. pp. 120~131.
[25] LECTRA SYSTEMS – MODARIS V4
http://www.lectra.com/dyna/produits/modarisv4gb3ac477f005907.pdf