KR101803064B1 - Apparatus and method for 3d model reconstruction - Google Patents

Apparatus and method for 3d model reconstruction

Info

Publication number
KR101803064B1
Authority
KR
South Korea
Prior art keywords
information
dimensional
restoration
depth
geometry information
Prior art date
Application number
KR1020150049732A
Other languages
Korean (ko)
Other versions
KR20160120536A (en)
Inventor
김태준
김호원
손성열
김기남
박혜선
조규성
박창준
최진성
Original Assignee
한국전자통신연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원
Priority to KR1020150049732A
Publication of KR20160120536A
Application granted
Publication of KR101803064B1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Computer Vision & Pattern Recognition (AREA)

Abstract

The present invention relates to an automated restoration apparatus and method for restoring a three-dimensional object placed on a cradle formed of a transparent material into a three-dimensional model.
A three-dimensional model restoration method according to an embodiment of the present invention includes the steps of acquiring a depth image and a color image of a restoration target placed on a transparent cradle; restoring three-dimensional geometry information of the restoration target using the acquired depth image; removing distorted depth information from the three-dimensional geometry information of the restoration target using three-dimensional geometry information of the transparent cradle; and generating a final three-dimensional model using the three-dimensional geometry information from which the distorted depth information has been removed and a texture generated from the color image.

Description

APPARATUS AND METHOD FOR 3D MODEL RECONSTRUCTION

The present invention relates to an automated restoration apparatus and method for restoring a three-dimensional object placed on a cradle formed of a transparent material into a three-dimensional model.

As sensors capable of measuring depth information have been developed and popularized, various types of objects can be restored and digitized.

These technologies are used in various fields such as visualization and simulation.

The virtualization and online fitting methods for clothing according to the related art typically use a chroma-key method. However, when the color of the clothing is similar to the color of the mannequin on which it is placed, the method cannot be used.

In addition, according to the image-information-based method of the related art, thin or sheer garments suffer color changes during the virtualization process.

The present invention has been proposed to solve the above-mentioned problems, and it is an object of the present invention to provide a three-dimensional model restoration apparatus and method capable of restoring the three-dimensional outer shape of a garment without changing its color, and of restoring a three-dimensional clothing model quickly, easily, and at low cost by removing the geometry and color information of the transparent mannequin extracted by the sensors.

A three-dimensional model restoration method according to an embodiment of the present invention includes the steps of acquiring a depth image and a color image of a restoration target placed on a transparent cradle; restoring three-dimensional geometry information of the restoration target using the acquired depth image; removing distorted depth information from the three-dimensional geometry information of the restoration target using three-dimensional geometry information of the transparent cradle; and generating a final three-dimensional model using the three-dimensional geometry information from which the distorted depth information has been removed and a texture generated from the color image.

According to another aspect of the present invention, there is provided a three-dimensional model restoration apparatus comprising: an image acquisition unit for acquiring a depth image and a color image of a restoration target placed on a transparent cradle; a restoration unit for restoring three-dimensional geometry information of the restoration target using the depth image and removing distorted depth information using the three-dimensional geometry information of the transparent cradle; and a model generation unit for generating, from the color image and the restored geometry information, a texture from which the transparent cradle has been removed, and for generating a final three-dimensional clothing model using that texture.

Compared with the chroma-key-based restoration method of the related art, the three-dimensional model restoration apparatus and method according to the present invention have the advantage that the color of the restoration target is not affected by the chroma-key color, and that they are free from the restriction that the color of the target must not be similar to that of the mannequin or background.

In addition, according to the present invention, a three-dimensional model can be restored quickly and easily by an automated method that minimizes user intervention in the process of restoring the three-dimensional garment model.

The effects of the present invention are not limited to those mentioned above, and other effects not mentioned can be clearly understood by those skilled in the art from the following description.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart illustrating a three-dimensional model restoration method according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating depth and color information acquisition using a transparent mannequin according to an embodiment of the present invention.
FIG. 3 is a conceptual diagram showing depth information distortion caused by refraction through a transparent mannequin.
FIG. 4 is a block diagram illustrating a three-dimensional model restoration apparatus according to an embodiment of the present invention.

The above and other objects, advantages, and features of the present invention, and the manner of achieving them, will become apparent from the following detailed description of embodiments taken in conjunction with the accompanying drawings.

The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art; the scope of the present invention is defined by the claims.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. In this specification, singular forms include plural forms unless the context indicates otherwise. The terms "comprises" and/or "comprising" specify the presence of the recited components, steps, and operations, but do not preclude the presence or addition of one or more other components, steps, or operations.

Prior to describing the preferred embodiments of the present invention, the background of the present invention will be described below in order to facilitate the understanding of those skilled in the art.

Various types of objects have been restored and digitized thanks to the development of depth measurement sensors. In particular, with the spread of the Kinect, depth information has become available at low cost.

Using Microsoft's KinectFusion according to the prior art, a space about the size of a single room can be restored to a three-dimensional model relatively easily.

However, according to the related art, the space must be scanned manually, which is inconvenient, and when an unnecessary portion is restored, it must be removed by hand.

With the emergence of clothing virtualization and online fitting techniques, users can easily preview how clothes will look on them through experience services, without trying the clothes on directly.

However, the biggest problem in this service field is that it is difficult to supply virtual clothing continuously.

There are three methods for creating virtual clothing: authoring with computer graphics tools (e.g., Autodesk Maya); constructing a three-dimensional model from actual garments; and restoring a three-dimensional model from image information captured at multiple angles.

The method of using a computer graphics authoring tool, or of constructing a three-dimensional model from actually sold garments, requires a highly trained designer to spend a great deal of time, so mass production is difficult.

The method of restoring a three-dimensional model from images captured at various angles can extract the natural appearance of the clothes quickly and at low cost. However, since the mannequin supporting the clothes is extracted at the same time, additional work is needed to remove it, making the method unsuitable for a large-scale restoration process.

As a method for removing the unnecessary mannequin, the chroma-key method, a typical technique in the field of image processing and compositing according to the prior art, has been used.

Chroma keying is a screen compositing technique that places a subject onto another scene using color differences. It is widely used in programs such as weather forecasts, election broadcasts, and historical documentaries.

Chroma keying works on the principle of separating the subject to be extracted from the background by using the chrominance difference in the RGB signal obtained from a color television camera as a key, and then compositing the subject onto another scene.

For example, if the subject to be extracted is a person, the person is placed in front of a blue background, blue being roughly complementary to skin tones, and photographed with a camera. When the blue component is removed from the output, the background becomes black and only the person remains.

In the production of virtual clothing, a method similar to chroma keying can be used: objects or regions that are not part of the target are given a specific color, and that designated color is recognized and removed during image processing. A minimal sketch of this principle follows.
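The following Python/NumPy sketch illustrates the chroma-key masking idea. The key color, threshold, and all names are illustrative assumptions rather than values from the patent; production systems typically key on chrominance in a YUV or HSV space rather than on raw RGB distance.

```python
import numpy as np

def chroma_key_mask(image_rgb, key_rgb=(0, 0, 255), threshold=80.0):
    """Boolean mask, True where a pixel is close to the key (background) color.

    image_rgb: H x W x 3 uint8 array. key_rgb and threshold are illustrative
    choices, not values from the patent.
    """
    diff = image_rgb.astype(np.float32) - np.asarray(key_rgb, np.float32)
    return np.linalg.norm(diff, axis=-1) < threshold

# Usage: suppress the keyed background, keeping only the subject.
# mask = chroma_key_mask(frame)
# subject = frame.copy()
# subject[mask] = 0
```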

However, this technique cannot be used when the color of the garment is similar to that of the mannequin, and thin or sheer garments suffer problems such as color changes in image-information-based methods.

The present invention has been proposed to solve the above-mentioned problems of the prior art. Preferred embodiments of the present invention are described below with reference to FIGS. 1 to 3.

FIG. 1 is a flowchart illustrating a three-dimensional model restoration method according to an embodiment of the present invention.

The three-dimensional model restoration method according to an embodiment of the present invention includes the steps of acquiring a depth image and a color image of a restoration target placed on a transparent cradle; restoring three-dimensional geometry information of the restoration target using the acquired depth image; removing distorted depth information from the three-dimensional geometry information of the restoration target using three-dimensional geometry information of the transparent cradle; and generating a final three-dimensional model using the geometry information from which the distorted depth information has been removed and a texture generated from the color image.

In the preferred embodiment described below, the process of restoring a three-dimensional clothing model by placing clothing, as the restoration target, on a mannequin 300 formed of a transparent material is described with reference to FIG. 2.

In a preferred embodiment of the present invention, the garment to be restored may be an upper-body garment such as a jumper, jacket, coat, knitwear, shirt, or T-shirt; a lower-body garment such as pants or a skirt; a one-piece dress or a two-piece outfit; or an accessory such as a hat, tie, muffler, bag, or shoes.

In addition, the mannequin may be an upper-body, lower-body, full-body, head-only, hand-only, or foot-only type. It may be a fixed mannequin of a commonly used size, or a variable mannequin whose body dimensions, such as head circumference, chest circumference, arm circumference, wrist circumference, thigh circumference, calf circumference, and foot circumference, can be adjusted programmatically.

While the preferred embodiments presented here for those of ordinary skill in the art are directed at restoring garments, the present invention is also applicable to the restoration of other kinds of objects placed on cradles: the cradle made of transparent material can easily be removed from the restoration target without additional post-processing.

According to the embodiment of the present invention, the clothing is placed on the transparent mannequin (S100), and a depth image and a color image are acquired using the depth image camera 100 and the color image camera 200 of the image acquisition unit 10 (S200, S300).

According to an embodiment of the present invention, a mannequin 300 formed of a transparent material, as shown in FIG. 2, is used so that the mannequin supporting the clothes can be removed automatically during clothing-model generation, and the depth image camera 100 and the color image camera 200 are used to acquire the depth and color information needed to generate the three-dimensional clothing model.

An active scanner for acquiring depth information projects a pattern or laser onto the surface of the target object, reads the pattern or laser reflected from the object with a camera, and finds the three-dimensional position of the object by triangulation.
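The triangulation step can be sketched as finding the point of closest approach between the camera's viewing ray and the projector's illumination ray. This is a generic midpoint-method sketch under assumed inputs (calibrated, unit-length ray directions), not the patent's specific formulation.

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint of closest approach between two rays o + t * d (d unit-length).

    o1/d1: camera ray; o2/d2: projector (or second camera) ray. With perfect
    calibration the rays intersect exactly at the measured surface point.
    """
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # approaches 0 as the rays become parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```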

When, as in the embodiment of the present invention, the projected pattern or laser meets the transparent mannequin, the resulting cases are classified as follows.

In the first case, the projected pattern or laser passes through the transparent mannequin 300 and the depth information of the clothing lying beyond the mannequin is acquired, as with the depth image camera 100a shown in FIG. 2.

In this case, given the characteristics of the depth image camera, the depth information of the garment is acquired with the mannequin automatically removed.

In the second case, the pattern or laser passes through the transparent mannequin 300 and the depth information of the clothes is lost due to reduced intensity or deformation of the pattern.

In this case, the lost data can be replaced with depth information obtained from another sensor that is not occluded by the transparent mannequin 300, such as the depth image camera 100b shown in FIG. 2.

In the third case, the projected pattern or laser passes through the transparent mannequin 300a and distorted depth information of the garment is obtained because of refraction, as shown in FIG. 3.

In this case, the acquired depth information is excluded from the texture generation for the three-dimensional clothing model through the distorted-depth-information removal process of the restoration unit 20 in step S600, described later.

In the fourth case, the projected pattern or laser is reflected by the transparent mannequin 300, so that depth information of the transparent mannequin 300 itself is acquired.

This case is caused by a high reflectance and a low transmittance of the transparent mannequin, and can be solved by replacing the transparent mannequin with a material having a low reflectance and a high transmittance.

Unlike the depth image camera 100, the color image camera 200 can acquire color information of the transparent mannequin 300.

Color information distorted by the refraction of the transparent mannequin differs in nature from the distorted depth information described above, so a method different from the distorted-depth-information removal method must be used.

In the texture generation step S700 according to the embodiment of the present invention, the acquired mannequin color is handled using the three-dimensional geometry information of the transparent mannequin prepared in step S200.

In step S500, the three-dimensional geometric information of the garment is restored using the depth image obtained in step S300.

The captured depth information is converted into three-dimensional point coordinates using extrinsic parameters, such as the camera position and orientation, and intrinsic parameters, such as the camera's lens information.

Here, the three-dimensional point coordinates are obtained by photographing with one or more cameras from one or more positions and directions.
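A minimal sketch of this conversion under a pinhole camera model: fx, fy, cx, cy stand in for the intrinsic (lens) parameters and cam_to_world for the extrinsic pose. The names are assumptions for illustration, not identifiers from the patent.

```python
import numpy as np

def depth_to_world_points(depth, fx, fy, cx, cy, cam_to_world):
    """Back-project a depth image into world-space three-dimensional points.

    depth: H x W metric depth map (0 = no measurement); fx, fy, cx, cy:
    pinhole intrinsics; cam_to_world: 4 x 4 extrinsic pose matrix.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx                      # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts = pts[pts[:, 2] > 0]                   # drop pixels without depth
    return (pts @ cam_to_world.T)[:, :3]       # camera frame -> world frame
```

Point clouds from several cameras or shooting positions can then be merged in the common world frame before reconstruction.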

A three-dimensional model is generated from the three-dimensional point coordinates by applying a general mesh reconstruction technique or the like.

In addition, a voxel-based reconstruction method can be used to increase reconstruction speed and to add three-dimensional point coordinates during reconstruction; in this case, a marching cubes method or a signed distance field may be used.
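As a concrete example of the voxel-based route, the sketch below fuses one depth frame into a truncated signed distance field (TSDF), whose zero level set can afterwards be meshed with a marching cubes implementation (e.g., skimage.measure.marching_cubes). The truncation band and all names are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def tsdf_integrate(tsdf, weight, centers_cam, depth, fx, fy, cx, cy, trunc=0.05):
    """Fuse one depth frame into a TSDF volume stored as flat per-voxel arrays.

    centers_cam: N x 3 voxel centers already transformed into the camera
    frame; trunc: truncation band in meters (illustrative value).
    """
    h, w = depth.shape
    z = np.maximum(centers_cam[:, 2], 1e-9)
    u = np.round(centers_cam[:, 0] * fx / z + cx).astype(int)
    v = np.round(centers_cam[:, 1] * fy / z + cy).astype(int)
    ok = (centers_cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[ok] = depth[v[ok], u[ok]]
    ok &= d > 0                            # the pixel actually measured depth
    sdf = d - z                            # positive in front of the surface
    ok &= sdf > -trunc                     # ignore voxels far behind it
    s = np.clip(sdf / trunc, -1.0, 1.0)
    # Running weighted average: the standard TSDF fusion update.
    tsdf[ok] = (tsdf[ok] * weight[ok] + s[ok]) / (weight[ok] + 1.0)
    weight[ok] += 1.0
```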

In restoring the three-dimensional geometry information of the garment, refracted measurements are acquired because of the transparent material of the mannequin 300a, as shown in FIG. 3, so depth information may be generated in unintended space.

The distorted depth information is removed using the three-dimensional geometry information of the transparent mannequin obtained in step S200, or using the three-dimensional geometric connectivity information.

As shown in FIG. 3, the depth measurement of the clothes 401 is refracted while passing through the transparent mannequin before reaching the depth image camera 100.

As shown by the distorted garment 402, the depth image camera 100 interprets the refracted measurements as if they were direct, so the three-dimensional geometry information restored in step S500 includes distorted geometry information.

If the distorted geometry information is not geometrically connected to the normal geometry information, it is possible to remove the distorted geometry information using the 3D geometry connection information.

In step S600, the connectivity between two pieces of geometry information is determined as follows.

If the three-dimensional model is a mesh composed of vertices carrying position information and polygons connecting those vertices, a set is first constructed from the vertices of one polygon; if at least one vertex of the next polygon has the same position as a vertex in the set, or lies within a very small distance of one, that polygon is added to the current set.

This process is performed for all polygons, and if different vertices of one polygon are included in different sets, those sets are merged into one. A union-find sketch of this procedure is given below.
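The set-merging procedure just described is a connected-components computation over the mesh. Below is a minimal union-find sketch over hypothetical vertices and polygons arrays; the snapping tolerance tol is an assumed stand-in for the "very close" distance mentioned above.

```python
import numpy as np
from collections import defaultdict

def polygon_components(vertices, polygons, tol=1e-6):
    """Label each polygon with a connected-component id via shared vertices.

    vertices: N x 3 float array; polygons: list of vertex-index tuples.
    Vertices falling into the same quantized cell are treated as coincident
    (an approximation of the exact distance test in the text).
    """
    parent = list(range(len(vertices)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Merge vertices that occupy (almost) the same position.
    cells = defaultdict(list)
    for i, p in enumerate(vertices):
        cells[tuple(np.round(p / tol).astype(np.int64))].append(i)
    for bucket in cells.values():
        for i in bucket[1:]:
            union(bucket[0], i)

    # Every vertex of a polygon belongs to the same component.
    for poly in polygons:
        for idx in poly[1:]:
            union(poly[0], idx)

    return [find(poly[0]) for poly in polygons]  # component id per polygon
```

Distorted geometry that forms its own small component, disconnected from the garment, can then be discarded by component id.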

As described above, if the distorted geometry information is geometrically connected to the normal geometry information, the two are distinguished using the three-dimensional geometry information of the transparent mannequin prepared in step S200.

That is, a virtual ray is cast from the depth image camera and refracted using the three-dimensional geometry information of the transparent mannequin, and the depth information corresponding to the ray is removed when the degree of refraction exceeds a set level relative to the ray's travel distance.
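A sketch of this test under stated assumptions: the ray is bent once with Snell's law at the mannequin wall (the exit surface is ignored), and a sample is flagged when the lateral offset accumulated behind the wall exceeds a set fraction of the total travel distance. The refractive-index ratio and threshold are assumptions, not values from the patent.

```python
import numpy as np

def refract(d, n, eta):
    """Snell's law for unit direction d and unit normal n facing the ray.

    eta = n1 / n2; returns None on total internal reflection.
    """
    cos_i = -float(np.dot(n, d))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def depth_sample_distorted(ray_dir, wall_normal, dist_to_wall, dist_total,
                           eta=1.0 / 1.5, max_ratio=0.05):
    """True when refraction displaces the ray too far for its travel distance."""
    bent = refract(ray_dir, wall_normal, eta)
    if bent is None:
        return True                            # totally reflected: unusable
    behind = dist_total - dist_to_wall         # path length past the wall
    lateral = np.linalg.norm(bent - ray_dir) * behind
    return lateral > max_ratio * dist_total
```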

In step S700, the model generation unit 30 generates, from the color image, a texture from which the mannequin information has been removed, using the restored three-dimensional geometry information and the three-dimensional geometry information of the mannequin.

In operation S700, the reconstructed 3D model is two-dimensionally parameterized to obtain texture coordinates of each vertex.

The three-dimensional model is then rendered into two-dimensional space following the parameterization, to find the three-dimensional position mapped to each texel of the texture.

To determine the color of each texel, a visibility test is performed from the mapped three-dimensional position toward the position of each color image camera.

As one embodiment of the visibility test, a virtual ray is created from the three-dimensional coordinates mapped to the texel toward the position of each sensor, and the test is passed when the ray does not collide with the three-dimensional model.

In this case, for a model made of polygons, the collision test finds the intersection point between the ray and the plane passing through each polygon.
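The per-polygon collision test can be made concrete with the standard Moller-Trumbore ray/triangle intersection; this is a generic sketch, not code from the patent. For the visibility test, the direction is taken as camera_pos - texel_pos without normalization, so t = 1 corresponds to the camera and only hits strictly between the two endpoints count as occlusions.

```python
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, t_max=1.0, eps=1e-9):
    """Moller-Trumbore: does origin + t * direction, eps < t < t_max, cross the triangle?"""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = float(e1 @ h)
    if abs(a) < eps:
        return False                      # ray parallel to the triangle plane
    f = 1.0 / a
    s = origin - v0
    u = f * float(s @ h)
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = f * float(direction @ q)
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * float(e2 @ q)
    return eps < t < t_max - eps          # hit strictly between the endpoints

def texel_visible(texel_pos, camera_pos, triangles):
    """Visibility test: no triangle may block the texel-to-camera segment."""
    direction = camera_pos - texel_pos    # unnormalized: t = 1 is the camera
    return not any(ray_hits_triangle(texel_pos, direction, *tri)
                   for tri in triangles)
```

The brute-force loop over all triangles is what the acceleration structures described next are meant to avoid.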

To speed up the visibility test, an acceleration data structure such as a kd-tree or a BVH (bounding volume hierarchy) can be built over the three-dimensional model.

A texel that passes the visibility test uses the corresponding color information; the value of a texel that does not pass is determined based on color information obtained from another sensor that did pass the visibility test.

However, since the transparent mannequin 300 is not included in the three-dimensional model, a texel can pass the visibility test with the color image camera 200a shown in FIG. 2 even though the mannequin lies between them, and would then acquire the mannequin's color.

To prevent this, the three-dimensional geometry information of the transparent mannequin obtained in step S200 is included among the visibility test targets, so that the color of the transparent mannequin 300 is not acquired.

In step S800, the final three-dimensional garment model is generated using the texture generated in step S700 and the restored three-dimensional geometry information of the garment (the geometry information from which the distorted depth information was removed in step S600).

The embodiments of the present invention have been described above. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

10: image acquisition unit 20: restoration unit
30: model generation unit 100: depth image camera
200: color image camera 300: transparent mannequin
401: clothes 402: distorted clothes

Claims (11)

1. A three-dimensional model restoration method comprising:
(a) acquiring a depth image and a color image of a restoration target placed on a transparent cradle;
(b) restoring three-dimensional geometry information of the restoration target using the acquired depth image;
(c) removing distorted depth information from the three-dimensional geometry information of the restoration target using three-dimensional geometry information of the transparent cradle; and
(d) generating a final three-dimensional model using the three-dimensional geometry information from which the distorted depth information has been removed and a texture generated from the color image,
wherein in step (a) the depth image and the color image are acquired using a plurality of cameras whose photographing positions differ with respect to the transparent cradle, and when depth information of the restoration target acquired from a first camera is lost through pattern deformation or intensity reduction as the pattern passes through the transparent cradle, it is replaced with depth information acquired from a second camera disposed at a position not occluded by the transparent cradle.
2. (Deleted)
3. The method according to claim 1, wherein in step (b) the depth information in the depth image is converted into three-dimensional point coordinates using the camera position and orientation information and is restored as the three-dimensional geometry information of the restoration target through mesh restoration or voxel-based restoration.
4. The method according to claim 1, wherein in step (c) a ray cast from the camera acquiring the depth image is refracted using the three-dimensional geometry information of the transparent cradle, and when the degree of refraction exceeds a set ratio relative to the ray's travel distance, the depth information obtained from that ray is removed.
5. The method according to claim 1, wherein in step (d) texture coordinates of each vertex are extracted by two-dimensionally parameterizing the three-dimensional geometry information of the restoration target, and the color of each texel is determined by performing a visibility test with respect to the position of the camera acquiring the color image.
6. The method of claim 5, wherein in step (d) a virtual ray connecting the three-dimensional coordinates mapped to a texel to the camera acquiring the color image is generated, and the visibility test is performed by determining whether the ray collides with the three-dimensional geometry information of the restoration target.
7. The method of claim 5, wherein step (d) includes the three-dimensional geometry information of the transparent cradle among the visibility test targets so as to exclude the color information of the transparent cradle from the color determination of the texels.
8. A three-dimensional model restoration apparatus comprising:
an image acquisition unit acquiring a depth image and a color image of a restoration target placed on a transparent cradle;
a restoration unit restoring three-dimensional geometry information of the restoration target using the depth image; and
a model generation unit generating, from the color image, a texture from which the information of the transparent cradle has been removed, using the three-dimensional geometry information of the restoration target and the three-dimensional geometry information of the transparent cradle, and generating a final three-dimensional model using the texture,
wherein the model generation unit extracts texture coordinates of each vertex using the three-dimensional geometry information of the restoration target and determines texel colors in consideration of the result of a visibility test.
9. The apparatus of claim 8, wherein the restoration unit removes distorted depth information from the three-dimensional geometry information of the restoration target using the three-dimensional geometry information of the transparent cradle.
10. (Deleted)
11. A three-dimensional model restoration method comprising:
(a) acquiring a depth image and a color image of a restoration target placed on a transparent cradle;
(b) restoring three-dimensional geometry information of the restoration target using the acquired depth image;
(c) removing distorted depth information from the three-dimensional geometry information of the restoration target using three-dimensional geometry information of the transparent cradle; and
(d) generating a final three-dimensional model using the three-dimensional geometry information from which the distorted depth information has been removed and a texture generated from the color image,
wherein in step (c) a ray cast from the camera acquiring the depth image is refracted using the three-dimensional geometry information of the transparent cradle, and when the degree of refraction exceeds a set ratio relative to the ray's travel distance, the depth information obtained from that ray is removed.
KR1020150049732A 2015-04-08 2015-04-08 Apparatus and method for 3d model reconstruction KR101803064B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150049732A KR101803064B1 (en) 2015-04-08 2015-04-08 Apparatus and method for 3d model reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150049732A KR101803064B1 (en) 2015-04-08 2015-04-08 Apparatus and method for 3d model reconstruction

Publications (2)

Publication Number Publication Date
KR20160120536A KR20160120536A (en) 2016-10-18
KR101803064B1 true KR101803064B1 (en) 2017-11-29

Family

ID=57244287

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150049732A KR101803064B1 (en) 2015-04-08 2015-04-08 Apparatus and method for 3d model reconstruction

Country Status (1)

Country Link
KR (1) KR101803064B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070611B (en) * 2019-04-22 2020-12-01 清华大学 Face three-dimensional reconstruction method and device based on depth image fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001084408A (en) * 1999-09-13 2001-03-30 Sanyo Electric Co Ltd Method and device for processing three-dimensional data and recording medium
JP2008015863A (en) 2006-07-07 2008-01-24 Nippon Hoso Kyokai <Nhk> Distance information output device and three-dimensional shape restoring device
US20080262944A1 (en) 2007-04-18 2008-10-23 Wu Chih-Chen Online clothing display system and method therefor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001084408A (en) * 1999-09-13 2001-03-30 Sanyo Electric Co Ltd Method and device for processing three-dimensional data and recording medium
JP2008015863A (en) 2006-07-07 2008-01-24 Nippon Hoso Kyokai <Nhk> Distance information output device and three-dimensional shape restoring device
US20080262944A1 (en) 2007-04-18 2008-10-23 Wu Chih-Chen Online clothing display system and method therefor

Also Published As

Publication number Publication date
KR20160120536A (en) 2016-10-18

Similar Documents

Publication Publication Date Title
KR101778833B1 (en) Apparatus and method of 3d clothes model reconstruction
EP2686834B1 (en) Improved virtual try on simulation service
US11961200B2 (en) Method and computer program product for producing 3 dimensional model data of a garment
CN106373178B (en) Apparatus and method for generating artificial image
KR101707707B1 (en) Method for fiiting virtual items using human body model and system for providing fitting service of virtual items
US20100328308A1 (en) Three Dimensional Mesh Modeling
CN110168608B (en) System for acquiring 3-dimensional digital representations of physical objects
KR20180069786A (en) Method and system for generating an image file of a 3D garment model for a 3D body model
US20150317813A1 (en) User interface and methods to adapt images for approximating torso dimensions to simulate the appearance of various states of dress
CN110148217A (en) A kind of real-time three-dimensional method for reconstructing, device and equipment
EP2647305A1 (en) Method for virtually trying on footwear
Magnenat-Thalmann Modeling and simulating bodies and garments
JP2006249618A (en) Virtual try-on device
CN107194985A (en) A kind of three-dimensional visualization method and device towards large scene
CN114375463A (en) Method for estimating nude body shape from hidden scan of body
JP2016071645A (en) Object three-dimensional model restoration method, device, and program
JP5476471B2 (en) Representation of complex and / or deformable objects and virtual fitting of wearable objects
CN108846892A (en) The determination method and device of manikin
US20170193677A1 (en) Apparatus and method for reconstructing experience items
JP2004280776A (en) Method for determining shape of object in image
KR101803064B1 (en) Apparatus and method for 3d model reconstruction
JP2015212891A (en) Image processor and method for controlling image processor
Siegmund et al. Virtual Fitting Pipeline: Body Dimension Recognition, Cloth Modeling, and On-Body Simulation.
JPWO2019107150A1 (en) Detection device, processing device, attachment, detection method, and detection program
JP2000076456A (en) Three-dimensional shape data processor

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant