WO2017141344A1 - Three-dimensional model generation system, three-dimensional model generation method, and program - Google Patents
- Publication number
- WO2017141344A1 (PCT/JP2016/054404)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dimensional model
- subject
- skeleton
- parameter
- shape
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
- G06T2207/20044—Skeletonization; Medial axis transform
Definitions
- the present invention relates to a three-dimensional model generation system, a three-dimensional model generation method, and a program.
- for example, Non-Patent Document 1 describes a technique for generating a three-dimensional model in which a subject is photographed from different directions with a plurality of RGBD cameras, and the body line of the subject is estimated based on the contour information and depth information acquired by each RGBD camera. Further, for example, Non-Patent Document 2 describes a technique for deforming a three-dimensional model so as to match the length of a limb indicated by depth information detected by a single RGBD camera.
- in the technique of Non-Patent Document 1, it is necessary to prepare a large number of RGBD cameras, so a dedicated shooting studio or the like is required. Furthermore, since the RGBD images obtained by photographing the subject from each direction must all be analyzed, the processing becomes complicated and generating a three-dimensional model takes time.
- in the technique of Non-Patent Document 2, the 3D model is simply deformed according to the length of a limb or the like, so the size of the subject cannot be estimated accurately. For example, a 3D model larger than the actual subject cannot be generated, so a highly accurate three-dimensional model cannot be obtained.
- the present invention has been made in view of the above problems, and an object thereof is to quickly generate a highly accurate three-dimensional model.
- a 3D model generation system according to the present invention includes: skeleton acquisition means for acquiring a skeleton parameter based on depth information of a subject detected by a depth detection device; posture change means for changing, based on the skeleton parameter, the posture of a three-dimensional model that is deformable by a body shape parameter; surface shape acquisition means for acquiring the surface shape of the subject based on the depth information; body shape determination means for determining the body shape parameter based on the difference between the surface shape of the three-dimensional model whose posture has been changed by the posture change means and the surface shape of the subject; and body shape deformation means for deforming the three-dimensional model based on the body shape parameter determined by the body shape determination means.
- the three-dimensional model generation method according to the present invention includes: a skeleton acquisition step of acquiring a skeleton parameter based on depth information of a subject detected by a depth detection device; a posture changing step of changing, based on the skeleton parameter, the posture of a three-dimensional model that is deformable by a body shape parameter; a surface shape acquisition step of acquiring the surface shape of the subject based on the depth information; a body shape determination step of determining the body shape parameter based on the difference between the surface shape of the three-dimensional model whose posture has been changed in the posture changing step and the surface shape of the subject; and a body shape deformation step of deforming the three-dimensional model based on the body shape parameter determined in the body shape determination step.
- the program according to the present invention causes a computer to function as: skeleton acquisition means for acquiring a skeleton parameter based on depth information of a subject detected by a depth detection device; posture change means for changing, based on the skeleton parameter, the posture of a three-dimensional model that is deformable by a body shape parameter; surface shape acquisition means for acquiring the surface shape of the subject based on the depth information; body shape determination means for determining the body shape parameter based on the difference between the surface shape of the three-dimensional model whose posture has been changed by the posture change means and the surface shape of the subject; and body shape deformation means for deforming the three-dimensional model based on the body shape parameter determined by the body shape determination means.
- the information storage medium according to the present invention is a computer-readable information storage medium storing the above program.
- the 3D model generation system may further include a skeleton determination unit that determines the skeleton parameter based on a difference between a surface shape of the 3D model and a surface shape of the subject.
- the posture changing means updates the posture of the three-dimensional model based on the skeleton parameter determined by the skeleton determining means.
- the three-dimensional model generation system may further include partial deformation means for deforming a part of the three-dimensional model deformed by the body shape deformation means, based on a difference between the surface shape of that part and the surface shape of the subject corresponding to that part.
- the three-dimensional model generation system may further include detection range specifying means for specifying, based on the depth information, a range corresponding to the detection range of the depth detection device from the three-dimensional model. In this case, the partial deformation means deforms the part of the three-dimensional model specified by the detection range specifying means.
- the partial deformation means deforms each part of the three-dimensional model based on a surrounding deformation state.
- the partial deformation means deforms each part of the three-dimensional model based on a past shape of the part.
- the three-dimensional model generation system may further include body part specifying means for specifying, from the three-dimensional model, the body part corresponding to the body of the subject, based on the three-dimensional model or the surface shape of the subject.
- the three-dimensional model generation system may further include detection range specifying means for specifying, based on the depth information, the vertices corresponding to the detection range of the depth detection device from the three-dimensional model. In this case, the body shape determination means determines the body shape parameter based on the difference between the surface shape of the three-dimensional model and the surface shape of the subject within the detection range.
- the three-dimensional model generation system may further include history recording means for recording a history of the body shape parameter in storage means, and the body shape determination means determines the body shape parameter based on the history recorded by the history recording means.
- according to the present invention, a highly accurate three-dimensional model can be generated quickly.
- FIG. 1 is a diagram illustrating how a subject is photographed in the three-dimensional model generation system 1.
- FIG. 2 is a diagram illustrating a hardware configuration of the three-dimensional model generation system.
- the three-dimensional model generation system 1 includes a depth detection device 10 and a three-dimensional model generation device 20.
- the depth detection device 10 and the three-dimensional model generation device 20 are connected so as to be able to transmit and receive data by wired communication or wireless communication.
- the depth detection device 10 is a measuring device that detects the depth of a subject.
- the depth detection device 10 may be a Kinect (registered trademark) sensor.
- the depth is a distance between the depth detection device 10 and the subject.
- the subject is a moving object to be detected, for example, a person, an animal, or a clothing thereof.
- the depth detection device 10 includes a control unit 11, a storage unit 12, a communication unit 13, and a depth detection unit 14.
- the control unit 11 includes, for example, one or a plurality of microprocessors.
- the control unit 11 executes processing according to programs and data stored in the storage unit 12.
- the storage unit 12 includes a main storage unit and an auxiliary storage unit.
- the main storage unit is a volatile memory such as a RAM
- the auxiliary storage unit is a non-volatile memory such as a hard disk or a flash memory.
- the communication unit 13 includes a network card for wired communication or wireless communication. The communication unit 13 performs data communication via a network.
- the depth detection unit 14 is a depth sensor (distance camera) that detects a depth using electromagnetic waves, sound waves, or the like, and is, for example, an RGBD camera or an ultrasonic sensor.
- the depth detection unit 14 includes an RGBD camera and detects the depth of a subject using infrared rays will be described.
- the depth detection unit 14 itself can be realized by various known depth sensors.
- the RGBD camera of the depth detection unit 14 includes a CMOS image sensor or a CCD image sensor, and an infrared sensor.
- the depth detection unit 14 generates an RGB image of the subject based on a detection signal from the CMOS image sensor or the CCD image sensor.
- the RGB image is planar information of the subject and has no depth.
- the depth detection unit 14 emits infrared rays from the light emitting element of the infrared sensor, and detects the infrared rays reflected by the subject with the light receiving element.
- the depth detection unit 14 estimates the distance to the subject based on the time of flight from when the infrared rays are emitted until they return, and generates the depth information.
- various known methods can be applied to the depth information generation method itself, and a light coding method may be used in addition to the above-described time-of-flight method.
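The time-of-flight principle described above can be illustrated as follows; the function name and the sample round-trip time are illustrative, not values from the document:

```python
# Time-of-flight depth: the emitted infrared pulse travels to the subject
# and back, so the one-way distance is half the round-trip time times c.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(round_trip_seconds: float) -> float:
    """Estimate the distance (m) to the subject from the infrared round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of about 13.34 ns corresponds to roughly 2 m.
d = tof_depth(13.34e-9)
```

The light-coding alternative mentioned in the text instead infers depth from the deformation of a projected infrared pattern, so no timing step is needed there.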
- FIG. 3 is a diagram showing an example of depth information.
- the depth information I is shown as a depth image.
- each pixel of the depth information I is associated with a pixel value indicating the depth.
- the depth information I may be handled as an image different from the RGB image, or may be handled as an RGBD image combined with the RGB image.
- the pixel value of each pixel includes the value of the depth channel in addition to the value of each color channel.
- the depth information I is treated as different from the RGB image.
- the depth detection apparatus 10 acquires a skeleton by estimating the position of each joint of the subject based on the depth information I detected by the depth detection unit 14.
- the skeleton can be acquired by various known extraction methods.
- a method using template matching will be described as an example.
- the depth detection apparatus 10 performs template matching between the depth information I detected by the depth detection unit 14 and a template image indicating the basic shape of the contour of the subject, and estimates the position (pixel) of each joint of the subject in the depth information I.
- the depth detection apparatus 10 estimates the three-dimensional position of each joint by applying a predetermined matrix transformation to the depth of each joint.
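The document does not spell out the "predetermined matrix transformation"; a common concrete choice is pinhole-camera back-projection using the sensor intrinsics, sketched here under that assumption (the intrinsic values fx, fy, cx, cy are illustrative):

```python
import numpy as np

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a joint pixel (u, v) with depth (m) into camera space,
    assuming a pinhole camera with focal lengths fx, fy and principal
    point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# The principal-point pixel always maps onto the optical axis.
p = pixel_to_3d(320, 240, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```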
- the estimated positions of the joints represent the posture of the skeleton, and are hereinafter referred to as detected posture information.
- FIG. 4 is a diagram illustrating an example of detected posture information.
- the detected posture information J includes the three-dimensional coordinates of each of the joints of the subject.
- the detected posture information J includes the head J 1, the shoulder center J 2, the right shoulder J 3, the right elbow J 4, the right hand J 5, the left shoulder J 6, the left elbow J 7, the left hand J 8, the spine J 9, the hip center J 10, the right hip J 11, the right knee J 12, the right foot J 13, the left hip J 14, the left knee J 15, and the left foot J 16.
- the number of joints indicated by the detected posture information J may be more or fewer than the 16 described above.
- the detected posture information J only needs to indicate the position of the joint that can be detected from the depth information I, and may include, for example, the position of the wrist or ankle.
- the depth detection device 10 generates the depth information I and the detected posture information J at a predetermined frame rate, i.e., periodically. For example, if the frame rate is 30 fps, the depth detection apparatus 10 generates the depth information I and the detected posture information J every 1/30 seconds. In the following, the depth information I and the detected posture information J at a certain time t are written as I t and J t, respectively.
- the three-dimensional model generation apparatus 20 is a computer that generates a three-dimensional model, and is, for example, a personal computer, a portable information terminal (including a tablet computer), a mobile phone (including a smartphone), or the like.
- the three-dimensional model generation apparatus 20 includes a control unit 21, a storage unit 22, a communication unit 23, an operation unit 24, and a display unit 25.
- the hardware configurations of the control unit 21, the storage unit 22, and the communication unit 23 are the same as those of the control unit 11, the storage unit 12, and the communication unit 13, respectively.
- the operation unit 24 is an input device for the player to operate, and is, for example, a pointing device such as a touch panel or a mouse, a keyboard, or the like.
- the operation unit 24 transmits the operation content by the player to the control unit 21.
- the display unit 25 is, for example, a liquid crystal display unit or an organic EL display unit.
- the display unit 25 displays a screen according to instructions from the control unit 21.
- each of the depth detection device 10 and the three-dimensional model generation device 20 may include a reading unit (for example, an optical disc drive or a memory card slot) that reads a computer-readable information storage medium.
- the program and data stored in the information storage medium may be supplied to the depth detection device 10 and the three-dimensional model generation device 20 via the reading unit.
- FIG. 5 is a diagram showing an outline of processing executed by the three-dimensional model generation system 1.
- T in FIG. 5 is a time axis.
- the depth detection device 10 periodically transmits depth information I and detected posture information J to the three-dimensional model generation device 20.
- the three-dimensional model generation apparatus 20 executes a shape estimation process that estimates the body shape (body line) of the subject, and a geometric reconstruction process that partially reconstructs detailed shapes (such as wrinkles of clothes) on the estimated body line. The shape estimation process and the geometric reconstruction process are executed every frame.
- the template indicating the basic shape of the body is deformed to resemble the posture and body shape of the subject.
- the template is a three-dimensional model showing the body itself, and may be a naked three-dimensional model, for example.
- the template is composed of a plurality of parts indicating the body part, and may be prepared in advance using shooting data of a subject or 3D CAD.
- hereinafter, when referring to the three-dimensional model of the template, the symbol "M T" is used, and when referring to the three-dimensional model deformed by the shape estimation process, the symbol "M S" is used.
- the three-dimensional model M S can be said to be a naked three-dimensional model that reproduces the posture and body shape of the subject.
- the geometric reconstruction process uses the depth information I to partially reconstruct the three-dimensional model M S so that detailed shapes such as wrinkles of clothes (the shapes of portions other than the body) appear.
- hereinafter, when referring to the three-dimensional model reconstructed by the geometric reconstruction process, the symbol "M D" is used.
- the three-dimensional model M D is a three-dimensional model that reproduces not only the body line of the subject but also what the subject is wearing. Although details will be described later, since the body part of the subject and the other parts can be distinguished in the three-dimensional model M D, the accuracy of the three-dimensional model M S can also be improved by feeding back the execution result of the geometric reconstruction process to the shape estimation process.
- Bayesian estimation is used to evaluate the accuracy of the three-dimensional models M S and M D so as to increase their accuracy.
- Bayesian estimation is a technique for estimating an unknown quantity in a probabilistic sense based on measurement results in the real world. The higher the probability under Bayesian estimation, the higher the probability that the estimated quantity actually occurs. That is, if the probability is high, the estimated three-dimensional models M S and M D are similar to the actual subject.
- the three-dimensional model generation system 1 determines the overall degree of deformation and the partial reconstruction of the three-dimensional models using three evaluation values, including the evaluation value E 0, so as to avoid abrupt changes that could not occur in reality, and thereby generates an accurate three-dimensional model quickly.
- FIG. 6 is a functional block diagram illustrating an example of functions realized by the three-dimensional model generation system 1.
- the data storage unit 30, the depth acquisition unit 31, the skeleton acquisition unit 32, the body shape acquisition unit 33, the posture changing unit 34, the detection range specifying unit 35, the surface shape acquisition unit 36, the body shape determining unit 37, the history recording unit 38, the body shape deforming unit 39, the skeleton determining unit 40, the partial deforming unit 41, and the body part specifying unit 42 are realized by the three-dimensional model generation device 20.
- the main process of the shape estimation process is executed by the body shape determination unit 37 and the skeleton determination unit 40
- the main process of the geometric reconstruction process is executed by the partial deformation unit 41.
- the data storage unit 30 is realized mainly by the storage unit 22.
- the data storage unit 30 stores the data for generating the three-dimensional models M S and M D.
- the following data will be described as an example of data stored in the data storage unit 30.
- depth history data storing the history of the depth information I
- skeleton history data storing the history of the skeleton parameters (details will be described later)
- body shape history data storing the history of the body shape parameters (details will be described later)
- movement amount history data storing the history of the vertex movement amounts (details will be described later)
- template data defining the three-dimensional model M T
- reconstruction history data storing the history of the three-dimensional model M D
- the data stored in the data storage unit 30 is not limited to the above example.
- for example, the data storage unit 30 may store basic posture data indicating the skeleton J T, and data storing the history of the detected posture information J.
- the depth acquisition unit 31 is realized mainly by the control unit 21.
- the depth acquisition unit 31 acquires depth information I (FIG. 3) on the subject detected by the depth detection device 10.
- in this embodiment, since the three-dimensional model generation device 20 is directly connected to the depth detection device 10, the depth acquisition unit 31 receives the depth information I directly from the depth detection device 10.
- however, the depth information I may be received indirectly via another device.
- the depth acquisition unit 31 stores the acquired depth information in the depth history data.
- the skeleton acquisition unit 32 is realized mainly by the control unit 21.
- the skeleton acquisition unit 32 acquires the skeleton parameter of the subject based on the depth information I detected by the depth detection device 10.
- the skeleton parameter is information relating to the skeleton of the subject and is information indicating the positional relationship between the joints.
- although the detected posture information J in FIG. 4 may be used as it is as the skeleton parameter, here the case where the skeleton acquisition unit 32 uses the angle of each joint indicated by the detected posture information J as the skeleton parameter will be described.
- the skeleton acquisition unit 32 stores the acquired skeleton parameter in the skeleton history data.
- FIG. 7 is a diagram showing skeleton parameters.
- the skeleton acquisition unit 32 acquires, as the skeleton parameters, the angles {θ j} of the joints calculated based on the three-dimensional coordinates of each joint indicated by the detected posture information J.
- the skeleton parameter {θ j} is composed of 12 angles.
- when referring to the skeleton parameter {θ j} at a certain time t, it is written as {θ t j}.
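The angles {θ j} can be computed from the joint coordinates in the detected posture information J; the vector-based formula below is a common choice and an assumption, since the document does not give the exact definition:

```python
import math
import numpy as np

def joint_angle(parent, joint, child):
    """Angle (radians) at `joint` formed by the segments toward its
    neighboring joints, e.g. the elbow angle from the shoulder, elbow,
    and hand positions in the detected posture information."""
    a = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
    b = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return math.acos(np.clip(cos_t, -1.0, 1.0))  # clip guards rounding

# A fully extended limb gives pi (180 degrees); a right angle gives pi/2.
theta = joint_angle([0, 0, 0], [1, 0, 0], [2, 0, 0])
theta90 = joint_angle([0, 1, 0], [0, 0, 0], [1, 0, 0])
```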
- the body shape acquisition unit 33 is realized mainly by the control unit 21.
- the body shape acquisition unit 33 acquires the body shape parameters.
- the three-dimensional model M S is obtained by deforming the three-dimensional model M T according to the body shape parameters.
- Deformation here means changing the position and shape of the surface, for example, changing the overall size or changing the partial shape.
- the body shape obtaining unit 33 stores the obtained body shape parameter in the body shape history data.
- the body type parameter indicates the characteristics of the body type of the subject, and indicates the size of the whole body or each part.
- the body type parameter includes respective numerical values such as height, chest circumference, waist, width of the garment, inseam, arm length, or foot size.
- the body type parameter is a combination of a plurality of values, and is described with a symbol ⁇ k ⁇ . k is a number for identifying an item included in the body type parameter.
- ⁇ 1 is height
- ⁇ 2 is chest circumference
- ⁇ 3 is waist, and so on.
- the body type parameter may be only a single value, not a combination of a plurality of values.
- FIG. 8 is a diagram showing how the three-dimensional model M T is deformed by the body shape parameters {β k}.
- the three-dimensional model M T is deformed so as to have the body shape indicated by the body shape parameters {β k}.
- for example, the three-dimensional model M T is deformed so that its height decreases as β 1 decreases and increases as β 1 increases, its chest circumference decreases as β 2 decreases and increases as β 2 increases, and its waist becomes thinner as β 3 decreases and thicker as β 3 increases.
- the relationship between the body shape parameters {β k} and the shape of each part of the three-dimensional model M T (i.e., the vertex positions) is assumed to be stored in the data storage unit 30 in advance.
- This relationship may be in the form of a mathematical expression, in the form of a table, or may be described as a program code.
- in other words, the three-dimensional model M T is defined so as to take the body shape indicated by the body shape parameters {β k}. The body shape deforming unit 39, described later, deforms each part of the three-dimensional model M T so that it has the shape associated with the body shape parameters {β k}, whereby the three-dimensional model M T is deformed.
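As a toy sketch of how a stored relationship between a body shape parameter and the vertex positions might look (the linear y-axis scaling and the 1.7 m template here are illustrative assumptions, not the document's actual mapping):

```python
import numpy as np

def apply_height_parameter(vertices, beta_1):
    """Toy mapping from body shape parameter beta_1 (height) to vertex
    positions: scale the template's vertical (y) axis by beta_1.
    A real system would store a learned per-parameter relationship."""
    out = np.asarray(vertices, dtype=float).copy()
    out[:, 1] *= beta_1
    return out

template = [(0.0, 0.0, 0.0), (0.0, 1.7, 0.0)]  # feet and head of a 1.7 m template
taller = apply_height_parameter(template, beta_1=1.1)
```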
- when referring to the body shape parameter {β k} at a certain time t, it is written as {β t k}.
- the body shape parameter {β 0 k} at the first point in time is an initial value, i.e., a value that directly indicates the body shape of the template three-dimensional model M T.
- the body parameter ⁇ 0 k ⁇ may be stored in the data storage unit 30 in advance.
- the body shape acquisition unit 33 acquires the initial body shape parameter {β 0 k} stored in the data storage unit 30.
- the posture changing unit 34 is realized mainly by the control unit 21.
- based on the skeleton parameters {θ j}, the posture changing unit 34 changes the posture of the three-dimensional model M T, which is deformable by the body shape parameters {β k}. The posture changing unit 34 changes the positional relationship between the parts of the three-dimensional model M T so that it takes the posture indicated by the skeleton parameters {θ j}.
- the posture is a positional relationship between body parts, and here is a positional relationship between joints.
- FIGS. 9 and 10 are diagrams showing how the posture of the three-dimensional model M T changes based on the skeleton parameters {θ j}.
- first, the posture changing unit 34 sets a skeleton J T in a basic posture (e.g., upright) inside the three-dimensional model M T. The positional relationship between the joints constituting the skeleton J T may be determined in advance so as to represent the posture of the template three-dimensional model M T.
- when the skeleton J T is set in the three-dimensional model M T, each vertex of the three-dimensional model M T is associated with one of the joints of the skeleton J T. When a joint moves, each vertex associated with it moves so as to maintain a predetermined positional relationship with that joint; that is, a vertex associated with a joint is linked to that joint.
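The linking of vertices to joints can be sketched as below; real posing systems typically blend rotations across several joints, so the pure-translation rule used here is a simplifying assumption:

```python
import numpy as np

def move_linked_vertices(vertices, links, joint_old, joint_new):
    """Move each vertex linked to a joint by the joint's displacement,
    preserving the predetermined vertex-joint positional relationship.
    (Pure translation keeps the sketch short; real skinning blends
    rotations from multiple joints.)"""
    delta = np.asarray(joint_new, dtype=float) - np.asarray(joint_old, dtype=float)
    out = []
    for v, linked in zip(vertices, links):
        v = np.asarray(v, dtype=float)
        out.append(v + delta if linked else v)
    return out

verts = move_linked_vertices(
    vertices=[(0, 0, 0), (1, 1, 0)],
    links=[True, False],          # only the first vertex follows the joint
    joint_old=(0, 0, 0),
    joint_new=(0, 1, 0),
)
```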
- the posture changing unit 34 deforms the skeleton J T so that the positional relationship of the joints of the skeleton J T corresponds to the positional relationship of the joints indicated by the skeleton parameters {θ j}. Here, "corresponds" means that the positions of the two joints match or that their positional deviation is less than a predetermined value. Some positional deviation is allowed because the distances between the actual joints of the subject and the distances between the joints of the skeleton J T do not necessarily match, and matching the three-dimensional coordinates completely is difficult.
- in other words, the posture changing unit 34 deforms the skeleton J T so that the angle of each joint of the skeleton J T matches the skeleton parameter {θ j} or deviates from it by less than a reference amount.
- by deforming the skeleton J T in this way, the posture changing unit 34 can obtain a three-dimensional model M S whose posture is similar to that of the subject.
- the detection range specifying unit 35 is realized mainly by the control unit 21. Based on the depth information I, the detection range specifying unit 35 specifies, from the three-dimensional model M S, the portion corresponding to the detection range of the depth detection device 10.
- the detection range is a detectable range on the surface of the subject, and is a place where the infrared rays of the depth detection device 10 are reflected.
- based on the depth information I, the detection range specifying unit 35 obtains the orientation of the depth detection device 10 in the three-dimensional space, and specifies the detection range based on the relationship between the acquired orientation and the orientation of the surface of the three-dimensional model M S.
- FIG. 11 is an explanatory diagram for explaining the processing of the detection range specifying unit 35.
- the line-of-sight direction V of the depth detection device 10 is the infrared irradiation direction.
- based on the depth information I, the detection range specifying unit 35 obtains the viewpoint O V and the line-of-sight direction V in the three-dimensional space. The viewpoint O V is the origin of the viewpoint coordinate system, and the line-of-sight direction V is the Z-axis of the viewpoint coordinate system (the coordinate axes in FIG. 11 are those of the world coordinate system).
- the detection range specifying unit 35 acquires the position O V and the line-of-sight direction V of the depth detection device 10 in the three-dimensional space based on the depth of the point of interest (that is, the center pixel of the image) of the depth information I.
- the detection range specifying unit 35 determines that a vertex v i is within the detection range when its normal n i and the line-of-sight direction V face each other (for example, when the angle between them is 90° or more), and determines that the other vertices are outside the detection range.
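The facing test can be written as a dot-product check: an angle of 90° or more between the normal n i and the line-of-sight direction V means their dot product is non-positive. The vectors below are illustrative:

```python
import numpy as np

def in_detection_range(normal, view_dir):
    """A vertex faces the camera when its normal and the line-of-sight
    direction V form an angle of 90 degrees or more, i.e. their dot
    product is zero or negative."""
    return float(np.dot(normal, view_dir)) <= 0.0

V = np.array([0.0, 0.0, 1.0])       # camera looks along +Z
front = np.array([0.0, 0.0, -1.0])  # normal pointing back toward the camera
back = np.array([0.0, 0.0, 1.0])    # normal pointing away from the camera
```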
- the surface shape acquisition unit 36 is realized mainly by the control unit 21.
- the surface shape acquisition unit 36 acquires the surface shape of the subject (shown in FIG. 12 described later) based on the depth information I.
- the surface shape of the subject is the shape of the portion detected by the depth detection device 10 in the subject, and is a region that appears in the depth information I as the depth.
- the surface shape of the subject is a three-dimensional coordinate group obtained by performing a predetermined matrix transformation on the depth information I, and is obtained by converting the depth of each pixel of the depth information I into a three-dimensional space.
- the surface shape acquired by the surface shape acquisition unit 36 is only the front surface of the subject and does not include the back surface.
- the body shape determination unit 37 is realized mainly by the control unit 21.
- the body type determining unit 37 determines body type parameters based on the difference between the surface shape of the three-dimensional model whose posture has been changed by the posture changing unit 34 and the surface shape of the subject.
- the difference in shape is a positional difference in a three-dimensional space, for example, a distance between vertices.
- the body shape determination unit 37 changes the body shape parameter so that the difference in shape becomes small. That is, the body type determination unit 37 changes the body type parameter so that the difference after changing the body type parameter is smaller than the difference before changing the body type parameter.
- FIG. 12 is an explanatory diagram of the processing of the body type determination unit 37.
- FIG. 12 shows the three-dimensional model M S with a solid line and the surface S OB of the subject with a broken line.
- the evaluation value E S comprehensively evaluates the distance d i between the three-dimensional coordinates of each vertex v i and the three-dimensional coordinates of the point p i corresponding to that vertex, and indicates the accuracy of the three-dimensional model M S. The body shape determination unit 37 obtains the evaluation value E S by substituting the distances d i into a predetermined equation (Equation 1).
- when referring to the evaluation value E S at a certain time t, it is written as E t S (X t).
- the weighting factor μ t i in Equation 1 is a coefficient (here, 0 or more and 1 or less) that reduces or eliminates the influence on the evaluation value E t S (X t) of the deviation of portions that do not appear to be the body (e.g., wrinkles of clothes). Since the deviation of a portion that is not considered to be the body cannot be trusted when obtaining E t S (X t), the weighting factor μ t i is used to reduce the weight of such portions. A method of obtaining the weighting factor μ t i will be described later.
- v t i in Equation 1 is a vertex of the three-dimensional model M S at time t.
- p t i is the point closest to the vertex v t i among the points on the surface S OB of the subject.
- vis(v t i , p t i ) is a function including a conditional expression: if the vertex v t i is within the detection range, the distance between the vertex v t i and the point p t i is returned; if the vertex v t i is outside the detection range, a predetermined value δ S is returned.
- the predetermined value δ S is returned because, when the vertex v t i is out of the detection range, the point p t i corresponding to the vertex v t i does not exist and the distance cannot be obtained.
- the predetermined value δ S may be an arbitrary value, but if it is set to 0 the three-dimensional model may shrink, so it may be set to about 0.1.
- the body shape determination unit 37 obtains, for each vertex v t i , either the distance from the point p t i or the predetermined value δ S , and acquires the sum of these values multiplied by the weighting coefficients μ t i as the evaluation value E t S (X t ).
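As an illustrative aid (not part of the patent disclosure), the weighted sum that yields the evaluation value E S can be sketched in Python as follows; the function names are assumptions, and the value 0.1 for δ S is taken from the "about 0.1" example above:

```python
import math

def vis(vertex, point, in_range, delta_s=0.1):
    """Conditional term of Equation 1: the vertex-to-point distance when the
    vertex is inside the detection range, the predetermined value delta_s
    otherwise (0.1 is the illustrative value mentioned in the text)."""
    return math.dist(vertex, point) if in_range else delta_s

def evaluation_e_s(vertices, closest_points, in_range_flags, weights):
    """Evaluation value E_S: sum over vertices of the weighting factor mu
    times the vis(...) term. Smaller values mean a more accurate model."""
    return sum(mu * vis(v, p, r)
               for v, p, r, mu in zip(vertices, closest_points,
                                      in_range_flags, weights))
```

An out-of-range vertex contributes only μ·δ S instead of a measured distance, so it cannot dominate the evaluation.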
- the body shape determination unit 37 optimizes the body shape parameter {β k } based on the evaluation value E S.
- the body shape determination unit 37 selects a plurality of candidates for the body shape parameter {β k } and determines, from among them, the candidate value that minimizes the evaluation value E S or makes it less than a threshold value as the body shape parameter {β k }.
- the optimization itself may use various optimization algorithms, but here, a case where a particle swarm optimization method is used will be described as an example.
- for example, the body shape determination unit 37 selects a plurality of candidates for the body shape parameter {β k } based on the particle swarm optimization algorithm.
- the body shape determination unit 37 initializes the body shape parameter {β k } based on an evaluation value E 0 of Equation 2 described later, and selects a plurality of candidates based on the initialized {β k }. For example, a value whose difference from the initialized {β k } falls within a predetermined range becomes a candidate.
- the number of candidates may be arbitrary, for example, about 30.
- the body shape determination unit 37 calculates the evaluation value E S for each candidate and selects one of the plurality of candidates.
- for example, the body shape determination unit 37 selects the candidate value that minimizes the evaluation value E S as the new body shape parameter {β k }.
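The candidate-selection step described above can be sketched as below. This is a simplified sample-and-select sketch, not a full particle swarm optimizer (which also tracks particle velocities and best positions); the candidate count of 30 follows the text, while `spread` and the function names are assumptions:

```python
import random

def select_candidate(beta_init, evaluate, n_candidates=30, spread=0.05):
    """Pick the body shape parameter candidate that minimizes the
    evaluation value E_S (supplied here as the callable `evaluate`)."""
    # Candidates whose difference from the initialized {beta_k} falls
    # within a predetermined range (spread is illustrative).
    candidates = [[b + random.uniform(-spread, spread) for b in beta_init]
                  for _ in range(n_candidates)]
    candidates.append(list(beta_init))  # keep the current value in the pool
    return min(candidates, key=evaluate)
```

Because the current value stays in the pool, the selected candidate can never evaluate worse than the starting parameter.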
- the body shape determination unit 37 passes the optimized body shape parameter {β k } to the body shape deforming unit 39, which updates the three-dimensional model M S.
- when determining the body shape parameter {β k }, the body shape determination unit 37 weights the difference corresponding to the body part (i.e., applies the weighting factor μ t i ).
- further, the body shape determination unit 37 determines the body shape parameter based on the difference between the surface shape of the three-dimensional model within the detection range and the surface shape of the subject.
- since the depth information I is repeatedly acquired at a predetermined frame rate, the body shape determination unit 37 repeatedly executes the optimization process described above. The body shape determination unit 37 may execute the optimization only once in a single frame, or may repeat the optimization process until the next frame arrives so as to make the evaluation value E S as small as possible. Furthermore, the body shape determination unit 37 may hold the body shape parameter {β k } obtained when the evaluation value E S becomes less than the threshold value, and refrain from further optimization until the depth information I of the next frame is obtained.
- the body shape determination unit 37 may obtain the initial value of the body shape parameter {β k } in a certain frame from past values, so as to suppress fluctuations in the body shape parameter {β k }.
- this initial value may be obtained by substituting past body shape parameters {β k } into a predetermined mathematical expression; in this embodiment, the body shape parameter {β k } is initialized based on the evaluation value E 0.
- the evaluation value E 0 can be represented by a function having the body shape parameter {β k } as a variable, for example, as in the following Equation 2.
- D in Equation 2 is a distance in the principal component analysis space (the so-called PCA space; here, the k-dimensional space in which the body shape parameter {β k } is expressed).
- r is an arbitrary time.
- the right side of Equation 2 is the sum of the distances between a candidate value of the body shape parameter {β k } and the body shape parameter {β t k } at each time t.
- the left side of Equation 2 thus indicates how much a candidate value of the body shape parameter {β k } deviates from the past body shape parameters {β k } as a whole.
- the body shape determination unit 37 performs initialization based on the {β k } that makes the evaluation value E 0 ({β k }) minimum or less than a threshold value.
- the evaluation value E 0 ({β k }) is minimized when the candidate equals the average of the past body shape parameters {β k }, so the body shape determination unit 37 performs initialization based on the average value of the past body shape parameters {β k }. In this way, the body shape parameter {β k } is prevented from changing suddenly only at a certain time t.
- this average value may be an average over the whole past period, or may be an average over the most recent predetermined period.
- a coefficient corresponding to the time t may also be provided; in this case, a weighted average in which the weight increases closer to the present time may be used.
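The initialization by a (weighted) average of past body shape parameters can be sketched as follows; the helper name and the recency weights in the test case are assumptions for illustration:

```python
def initialize_beta(history, weights=None):
    """Initialize {beta_k} as the (optionally weighted) average of the past
    body shape parameter vectors in `history` (oldest first), which
    minimizes the sum-of-distances evaluation E_0."""
    if weights is None:
        weights = [1.0] * len(history)
    total = sum(weights)
    dims = len(history[0])
    return [sum(w * vec[k] for w, vec in zip(weights, history)) / total
            for k in range(dims)]
```

Passing weights that grow toward the most recent frames gives the weighted-average variant mentioned above.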
- in other words, the body shape determination unit 37 determines the body shape parameter {β k } based on the history of the body shape parameter {β k } recorded by the history recording unit 38.
- the history recording unit 38 is realized mainly by the control unit 21.
- the history recording unit 38 records the history of the body shape parameter {β k } in the data storage unit 30.
- the history recording unit 38 stores the body shape parameter {β k } in the body shape history data in time series, or stores the body shape parameter {β k } in the body shape history data in association with the time at which it was optimized.
- the body shape deforming unit 39 is realized mainly by the control unit 21.
- the body shape deforming unit 39 deforms the three-dimensional model M T based on the body shape parameter determined by the body shape determination unit 37.
- the body shape deforming unit 39 deforms the model by changing the positions of the vertices of the three-dimensional model M T. Since the vertices define the individual polygons, changing the positions of the vertices changes at least one of the position and the shape of each part of the three-dimensional model M T. That is, the body shape deforming unit 39 moves or deforms the polygons constituting the three-dimensional model M T.
- triangular polygons are described here, but quadrilateral polygons or polygons with five or more sides may be used.
- Figure 13 shows how the three-dimensional model M S changes when the optimization is performed.
- for example, the body shape determination unit 37 performs the optimization so that β 1, which indicates the height, is increased and the stature of the three-dimensional model M S becomes taller.
- when the body shape deforming unit 39 updates the three-dimensional model M S based on the optimized body shape parameter {β k }, the three-dimensional model M S approaches the actual height of the subject.
- the skeleton determination unit 40 is realized mainly by the control unit 21.
- the skeleton determination unit 40 determines the skeleton parameter {θ j } based on the difference between the surface shape of the three-dimensional model M S and the surface shape of the subject.
- for example, the skeleton determination unit 40 determines the skeleton parameter {θ j } based on the evaluation value E S, i.e., optimizes the skeleton parameter {θ j }.
- the skeleton determination unit 40 selects a plurality of candidates for the skeleton parameter {θ j } and determines, from among them, the candidate value that minimizes the evaluation value E S or makes it less than a threshold value as the skeleton parameter {θ j }.
- the optimization itself may use various optimization algorithms; here, a case where an optimization method based on the least squares method is used will be described.
- for example, the skeleton determination unit 40 selects a plurality of candidates for the skeleton parameter {θ j } based on the least squares method.
- for example, a value whose difference from the current skeleton parameter {θ j } falls within a predetermined range becomes a candidate.
- the number of candidates may be arbitrary.
- the skeleton determination unit 40 calculates the evaluation value E S for each candidate and selects one of the plurality of candidates. For example, the skeleton determination unit 40 selects the candidate value that minimizes the evaluation value E S as the new skeleton parameter {θ j }.
- the posture changing unit 34 updates the posture of the three-dimensional model M S based on the skeleton parameter {θ j } determined by the skeleton determination unit 40.
- the partial deformation unit 41 deforms a part of the surface shape of the three-dimensional model M S deformed by the body shape deforming unit 39, based on the difference between that part and the corresponding surface shape of the subject.
- the partial deformation unit 41 obtains a vertex movement amount for each vertex of the three-dimensional model M S based on the degree of partial misalignment between the three-dimensional model M S and the surface shape of the subject.
- the vertex movement amount indicates where the vertex v i of the three-dimensional model M S should be moved in order to reproduce the partial details of the subject.
- FIG. 14 is an explanatory diagram of the vertex movement amount.
- Figure 14 is an enlarged view of the vicinity of the torso of the three-dimensional model M S and the surface S OB of the subject.
- the vertex movement amount may be any information that can identify where the vertex v i is to be moved; as shown in FIG. 14, in this embodiment, a case is described where the vertex movement amount is a movement amount η i in the direction of the normal n i of the vertex v i.
- the vertex movement amount is not limited to the movement amount η i ; it may indicate the three-dimensional coordinates of the destination of the vertex v i itself, or may be vector information indicating the movement direction and the movement amount.
- the partial deformation unit 41 acquires the vertex movement amount η i by substituting the three-dimensional coordinates of the vertex v i and the three-dimensional coordinates of the point p i closest to the vertex v i within the surface shape of the subject into a predetermined mathematical expression.
- this mathematical expression is determined so that the vertex movement amount η i increases as the deviation between the three-dimensional coordinates of the vertex v i and the three-dimensional coordinates of the point p i increases, and decreases as the deviation decreases.
- here, a case will be described in which the vertex movement amount η i is acquired using an evaluation value E D.
- the evaluation value E D evaluates how much the three-dimensional model deviates from the surface S OB of the subject when the three-dimensional model M S is deformed based on the vertex movement amounts η i.
- since the three-dimensional model M S deformed based on the vertex movement amounts η i is denoted by the symbol "M D", the evaluation value E D can be said to indicate the accuracy of the three-dimensional model M D.
- when the evaluation value E D at a certain time t is indicated, it is written as E t D.
- since the evaluation value E t D changes according to the vertex movement amounts η i, it can be written as a function E t D (η i ) having the vertex movement amounts η i as variables.
- for example, the partial deformation unit 41 obtains the evaluation value E t D (η i ) using Equation 3 below.
- S t in the first term on the right side of Equation 3 is the surface of the three-dimensional model M S.
- the first term on the right side of Equation 3 is the sum, calculated for each individual vertex v t i, of the distance between the point p t i of the subject at a certain time t and the point v i ' obtained by moving the vertex v t i in the direction of the normal n i by the vertex movement amount η i.
- N(i) in the second term on the right side of Equation 3 is the set of other vertices adjacent to a certain vertex v i, and λ D is a hyperparameter.
- the second term on the right side of Equation 3 is the sum of the differences between the vertex movement amount η i of a certain vertex v t i and the vertex movement amounts η j of the adjacent vertices.
- the partial deformation unit 41 acquires the vertex movement amount η i of each vertex v i based on the evaluation value E D. The smaller the evaluation value E D, the smaller the difference between the three-dimensional model M D and the surface shape of the subject and the higher the accuracy, so the partial deformation unit 41 acquires the vertex movement amounts η i that make the evaluation value E D minimum or less than a threshold value.
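Equation 3 can be sketched as a data term plus a smoothness term, as below; the use of plain Euclidean distance in the data term, the squared neighbor differences, and the value of the hyperparameter λ D are assumptions for illustration:

```python
import math

def evaluation_e_d(vertices, normals, points, eta, neighbors, lambda_d=0.1):
    """Sketch of Equation 3: a data term (distance between each moved
    vertex v' = v + eta*n and the closest subject point p) plus a
    smoothness term penalizing differences between the movement amounts
    of neighboring vertices. neighbors[i] lists indices adjacent to
    vertex i; lambda_d is the hyperparameter (value illustrative)."""
    data = sum(math.dist(tuple(vc + e * nc for vc, nc in zip(v, n)), p)
               for v, n, p, e in zip(vertices, normals, points, eta))
    smooth = sum((eta[i] - eta[j]) ** 2
                 for i, adj in enumerate(neighbors) for j in adj)
    return data + lambda_d * smooth
```

The smoothness term is what keeps a vertex from moving very differently from its neighbors, as discussed below.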
- when the vertex movement amount η i at a certain time t is indicated, it is written as η t i.
- in this embodiment, the partial deformation unit 41 temporarily calculates the vertex movement amount η t i using the following Equation 4, and then corrects it so that the evaluation value E t D (η i ) of Equation 3 is minimized.
- Equation 4 calculates a weighted average of the deviation between the vertex v t i and the point p t i at the current time t and the past vertex movement amount (in Equation 4, that of the most recent frame).
- σ t−1 and σ' t are the coefficients of the weighted average.
- the vertex movement amount η t i is temporarily calculated as in Equation 4 because the depth information I at a certain time t may contain noise. Taking an average with the past prevents the vertex movement amount η i from changing suddenly only when noise is mixed in, which would cause the three-dimensional model M D to be suddenly dented or to protrude; that is, it prevents the vertex movement amount η i at a certain time from becoming too large or too small.
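The provisional calculation of Equation 4 amounts to a weighted average, which can be sketched as follows; the coefficient values 0.8 and 0.2 and the function name are illustrative assumptions:

```python
def provisional_eta(prev_eta, current_deviation,
                    sigma_prev=0.8, sigma_cur=0.2):
    """Sketch of Equation 4: blend the vertex movement amount of the most
    recent frame with the deviation observed at the current time t, so
    that a single noisy frame cannot dent or bulge the model."""
    return sigma_prev * prev_eta + sigma_cur * current_deviation
```

A noisy spike in the current frame is damped by the history: with the coefficients above, a deviation of 5.0 after a past movement of 0.1 yields only 1.08.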
- the partial deformation unit 41 temporarily calculates the vertex movement amount η i for each vertex using Equation 4. Then, the partial deformation unit 41 acquires a plurality of candidates based on the temporarily calculated vertex movement amount η i. Each candidate only needs to be within a predetermined range of the temporarily calculated vertex movement amount η i. The number of candidates may be arbitrary.
- the partial deformation unit 41 calculates the evaluation value E t D (η i ) based on the candidates acquired for each vertex v i and determines the candidate giving the minimum value as the vertex movement amount η i.
- the partial deformation unit 41 thus determines the vertex movement amount η i of each vertex based on the vertex movement amounts η i of the surrounding vertices. Therefore, the partial deformation unit 41 deforms each part of the three-dimensional model M S based on the degree of deformation of its surroundings.
- for example, the partial deformation unit 41 can keep the difference between the degree of deformation of each part of the three-dimensional model M S and the degree of deformation of the surrounding parts below a predetermined value.
- further, since the vertex movement amount η i is obtained by a weighted average with the past vertex movement amount, the partial deformation unit 41 acquires the vertex movement amount η i based on the vertex movement amounts η i acquired in the past.
- for example, the partial deformation unit 41 can keep the difference between the deformation state of each part of the three-dimensional model and the past deformation state of that part below a predetermined value.
- the partial deformation unit 41 changes the position of each vertex based on the vertex movement amount η i acquired for that vertex as described above.
- the partial deformation unit 41 moves each vertex to the position determined based on its vertex movement amount η i.
- in this embodiment, the vertex movement amount η i indicates the amount of movement in the normal direction, so the partial deformation unit 41 changes the three-dimensional coordinates of the vertex v i to the position v i ' separated from the vertex v i by the vertex movement amount η i in the direction of the normal n i. That is, the partial deformation unit 41 generates the three-dimensional model M D by deforming the three-dimensional model M S based on the vertex movement amounts η i.
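The displacement of a vertex along its normal amounts to the following minimal sketch (function name assumed; the normal is taken to be a unit vector):

```python
def move_vertex(vertex, normal, eta):
    """Return v' = v + eta * n, the vertex displaced by the movement
    amount eta along its (unit) normal n."""
    return tuple(vc + eta * nc for vc, nc in zip(vertex, normal))
```

A negative η i moves the vertex inward against the normal, which lets the same rule express both dents and bulges.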
- Figure 15 shows the three-dimensional model M D.
- in the three-dimensional model M D, detailed shapes such as wrinkles and sagging of clothes are reproduced.
- the partial deformation unit 41 may map an RGB image taken by the depth detection device 10 onto the surface of the three-dimensional model M D as a texture. In this way, the pattern and color of the surface of the subject can be expressed on the three-dimensional model M D.
- the partial deformation unit 41 may change the shape of the portion of the three-dimensional model M S identified by the detection range identification unit 35. Moreover, since the vertex movement amounts η i are repeatedly acquired, the partial deformation unit 41 changes the positions of the vertices of the three-dimensional model M S every time the vertex movement amounts are obtained.
- the body part specifying unit 42 is realized mainly by the control unit 21.
- the body part specifying unit 42 specifies the body part corresponding to the body of the subject in the three-dimensional models M S and M D, based on the three-dimensional models M S and M D or on the surface shape of the subject.
- the body part is identified in order to determine the weighting factor μ t i of Equation 1.
- the body part is a part having a small change in shape and is a part showing a relatively hard object.
- the other part is a part having a large change in shape, and is a part showing a relatively soft object.
- for example, the body part is a part where the bare skin is exposed, and the other part is a part where wrinkles of clothes or the like appear (a part that moves in accordance with the movement of the body).
- in this embodiment, the body part specifying unit 42 specifies the body part based on the positional relationship and the temporal change of the vertices of the three-dimensional model M D.
- the body part specifying unit 42 may specify the body part based on the positional deviation between a vertex at a certain time t and the surrounding vertices, based on the density of the vertices, or based on the degree of dispersion.
- the body part specifying unit 42 may also specify the body part based on the change in the position of a certain vertex over time.
- for example, the body part specifying unit 42 acquires an evaluation value κ related to the curvature, as shown in Equation 5 below, for each vertex of the three-dimensional model.
- the evaluation value κ indicates a local curvature.
- the evaluation value κ is a value between −1 and 1 inclusive.
- if the standard deviation κ t of the evaluation values κ of a vertex v i at a certain time t is less than a threshold value, this indicates an area with small change that corresponds to the body; if it is greater, this indicates an area with large change.
- K in Equation 5 is the Gaussian curvature,
- and H is the mean curvature.
- K and H are calculated for each vertex based on the positional relationship between the vertex and the surrounding vertices, for example, based on the density of the vertices. Since these calculation methods are standard, for details see M. Meyer, M. Desbrun, P. Schröder, and A. H. Barr, "Discrete differential-geometry operators for triangulated 2-manifolds," Mathematics and Visualization, pages 35-57, 2003.
- if the standard deviation κ t is equal to or greater than the threshold value, the body part specifying unit 42 sets the weighting coefficient μ t i to a predetermined value λ u of less than 1 (e.g., λ u ≪ 1), and if the standard deviation κ t is less than the threshold value, it sets the weighting coefficient μ t i to 1.
- a large standard deviation κ t indicates that the change in curvature is large, so the part is estimated to be a soft object whose shape changes easily.
- a small standard deviation κ t indicates that the change in curvature is small, so the part is estimated to be a hard object whose shape hardly changes.
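The weighting rule can be sketched as follows; computing the temporal standard deviation of κ per vertex and the value λ u = 0.1 are assumptions for illustration:

```python
def weighting_factor(kappa_history, threshold, lambda_u=0.1):
    """Weighting coefficient mu for one vertex: 1 when the standard
    deviation of the curvature values kappa stays below the threshold
    (rigid, body-like part), otherwise the small value lambda_u (soft
    part such as clothes). lambda_u = 0.1 illustrates lambda_u < 1."""
    n = len(kappa_history)
    mean = sum(kappa_history) / n
    std = (sum((k - mean) ** 2 for k in kappa_history) / n) ** 0.5
    return 1.0 if std < threshold else lambda_u
```

A vertex whose curvature is constant over time thus keeps full weight in Equation 1, while a fluctuating one is down-weighted.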
- FIG. 16 is a flowchart showing an example of processing executed in the three-dimensional model generation system.
- the processing shown in FIG. 16 is executed by the control units 11 and 21 operating according to programs stored in the storage units 12 and 22, respectively.
- the process described below is an example of a process executed by the functional block shown in FIG.
- the control unit 11 generates depth information I and detected posture information J based on the detection signal of the depth detection unit 14 (S10).
- the control unit 11 transmits the depth information I and the detected posture information J generated in S10 to the three-dimensional model generation device 20 (S11).
- the control unit 11 determines whether the next processing timing (frame) has arrived (S12). It is assumed that the frame rate is stored in the storage unit 12 in advance. The control unit 11 determines whether the current time has reached the start time of each frame by executing a time measurement process using a real-time clock.
- if it is not determined that the next processing timing has arrived (S12; N), the control unit 11 determines whether to end this processing (S13). In S13, the control unit 11 may determine whether a predetermined end condition is satisfied (for example, whether a signal for ending the process has been received from the three-dimensional model generation device 20). If it is not determined that the present process is to be terminated (S13; N), the process returns to S12. On the other hand, when it is determined that the present process is to be terminated (S13; Y), the present process is terminated.
- when it is determined that the next processing timing has arrived (S12; Y), the process returns to S10. Thereafter, the depth detection device 10 periodically generates the depth information I and the detected posture information J and transmits them to the three-dimensional model generation device 20 until this processing ends.
- on the three-dimensional model generation device 20 side, the control unit 21 first arranges the three-dimensional model M T in a virtual three-dimensional space based on the template data stored in the storage unit 22 (S20).
- in S20, the control unit 21 sets the body shape parameter {β k } corresponding to the three-dimensional model M T to an initial value.
- This initial value may be stored in the storage unit 22 in advance.
- the control unit 21 sets the skeleton J T inside the three-dimensional model M T arranged in S20, based on the basic posture data stored in the storage unit 22 (S21).
- at this point, the three-dimensional model generation device 20 has not yet received the depth information I and the detected posture information J from the depth detection device 10, so in the three-dimensional space the three-dimensional model M T takes the basic posture with the basic body shape. In other words, the body shape and posture of the three-dimensional model M T do not yet resemble those of the actual subject.
- the control unit 21 receives the depth information I and the detected posture information J from the depth detection device 10 (S22). In S22, the control unit 21 stores the received depth information I and detected posture information J in depth history data and skeleton history data, respectively. The control unit 21 performs shape estimation processing for estimating the shape of the subject's body (S23).
- FIG. 17 is a diagram showing details of the shape estimation process.
- the control unit 21 acquires the skeleton parameter {θ j } of each joint based on the detected posture information J received in S22 (S30).
- the control unit 21 executes the skinning process on the three-dimensional model M T deformed in S31 (S32). By the process of S32, the surface of the three-dimensional model becomes smooth, and the three-dimensional model M S is thereby generated. Information identifying the three-dimensional model M S at time t (the positions of the vertices and the skeleton J T ) is stored in the body shape history data.
- the control unit 21 initializes the body shape parameter {β k } based on the evaluation value E 0 (S33).
- in S33, the control unit 21 substitutes the history stored in the body shape history data of the storage unit 22 into Equation 2, and sets the value that minimizes the evaluation value E 0 (i.e., the average) as the latest body shape parameter {β k }.
- the control unit 21 acquires the three-dimensional coordinates of the subject based on the depth information received in S22. Then, the control unit 21 acquires a plurality of candidates for the body shape parameter {β k } using the particle swarm optimization algorithm, based on the body shape parameter {β k } obtained in S33.
- the control unit 21 performs the optimization by updating the body shape parameter {β k } to the candidate value that minimizes the evaluation value E S.
- the optimized body shape parameter {β k } is stored in the body shape history data.
- the control unit 21 optimizes the skeleton parameter {θ j } based on the evaluation value E S (S36).
- in S36, the control unit 21 acquires a plurality of candidates for the skeleton parameter {θ j } based on the latest skeleton parameter {θ j } and the least squares method.
- the control unit 21 performs the optimization by updating the skeleton parameter {θ j } to the candidate value that minimizes the evaluation value E S.
- the optimized skeleton parameter {θ j } is stored in the skeleton history data.
- the control unit 21 updates the three-dimensional model based on the skeleton parameter {θ j } optimized in S36 (S37).
- the control unit 21 repeats the processes of S33 to S37 until the evaluation value E S becomes sufficiently small.
- control unit 21 executes a geometric reconstruction process for expressing partial details of the subject (S24). Note that the geometric reconstruction process may be repeatedly executed within a frame or may be executed only once.
- FIG. 18 is a diagram showing details of the geometric reconstruction processing.
- the control unit 21 identifies the detection range of the three-dimensional model M S based on the depth information I acquired in S22 (S40).
- in S40, the control unit 21 obtains the viewing direction V of the depth detection device 10 based on the depth information I, and identifies the detection range based on the relationship between the viewing direction V and the normal n i of each vertex v i of the three-dimensional model M S.
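One plausible reading of this test (an assumption; the text states only that the relationship between the viewing direction V and each normal n i is used) is a sign check on their dot product:

```python
def in_detection_range(normal, view_direction):
    """A vertex can face the depth camera only if its normal points
    against the viewing direction V, i.e. their dot product is negative
    (back-face test; threshold of 0 is an assumption)."""
    return sum(n * v for n, v in zip(normal, view_direction)) < 0.0
```

Vertices failing this test belong to the back of the subject, which the single depth detection device 10 cannot observe.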
- the control unit 21 acquires the vertex movement amount {η i } for each vertex in the detection range based on the evaluation value E D (S41).
- in S41, the control unit 21 acquires the evaluation value E D based on the three-dimensional coordinates of the vertices within the detection range, the three-dimensional coordinates of the subject, and Equation 3. Then, the control unit 21 acquires the temporary vertex movement amounts {η i } based on Equation 4, and corrects them to the vertex movement amounts {η i } that minimize the evaluation value E D.
- the vertex movement amounts {η i } at time t are stored in the movement amount history data.
- the control unit 21 rearranges the vertices in the detection range based on the vertex movement amounts {η i } acquired in S41 (S42).
- as a result, the three-dimensional model M D is generated.
- information identifying the three-dimensional model M D at time t (the positions of the vertices and the skeleton J T ) is stored in the reconstruction history data.
- based on the vertex movement amounts {η i } acquired in S41, the control unit 21 acquires the weighting coefficient μ i of each vertex within the detection range (S43). In S43, the control unit 21 distinguishes the body part from the other parts of the three-dimensional models M S and M D based on Equation 5, and acquires the weighting coefficient μ i for each vertex based on the result of that distinction. The weighting coefficients μ i acquired in S43 are used when calculating the evaluation value E S in the shape estimation process of S23.
- the control unit 21 determines whether or not to end this process (S25). When it is not determined to end (S25; N), the process returns to S22, and when the depth information I and the detected posture information J of the next frame are received, the shape estimation process and the geometric reconstruction process are executed based on the latest depth information I and detected posture information J. On the other hand, when it is determined to end (S25; Y), this process ends.
- according to the three-dimensional model generation system 1 described above, the body shape parameter {β k } is determined after the posture of the three-dimensional model M T is brought close to the posture of the subject, so an accurate three-dimensional model M S can be generated quickly. That is, it is not necessary to analyze images from a large number of depth detection devices 10 or to perform complex processing, so the three-dimensional model M S can be generated quickly. Furthermore, since the body shape parameter {β k } is optimized so as to approach the actual body line of the subject, the accuracy of the three-dimensional model M S can be improved.
- since the skeleton parameter {θ j } is optimized so that the deviation from the actual skeleton of the subject becomes small, the posture of the three-dimensional model M S can be brought close to the posture of the subject, and the accuracy of the three-dimensional model M S can be further increased.
- by optimizing both the body shape parameter {β k } and the skeleton parameter {θ j }, both the body shape and the posture can be brought close to those of the actual subject.
- by optimizing the skeleton parameter {θ j } after optimizing the body shape parameter {β k }, and then repeating these optimizations, the deviation between the three-dimensional model M S and the subject can be gradually reduced.
- by partially deforming the three-dimensional model M S based on the vertex movement amounts η i, the shape of each part of the three-dimensional model M D can be brought close to the partial shape of the subject, and the accuracy of the three-dimensional model M D can be improved. For example, even when a single depth detection device 10 is used as in this embodiment, details such as wrinkles of clothes can be reproduced in the three-dimensional model M D for the surface detected by the depth detection device 10, and the body line can be reproduced for the other surfaces.
- since the target of the geometric reconstruction process is the area within the detection range, the process need not be executed for portions that do not need to be deformed, so the processing load on the three-dimensional model generation device 20 is reduced and the three-dimensional model M D can be generated more quickly.
- the vertex moving amount eta i of each vertex order determined based on the vertex movement amount around the vertex can be modified based on the shape of each portion of the three-dimensional model M S around the shape. As a result, it is possible to prevent a state such that dented or protruded unnaturally only partial, it is possible to improve the accuracy of the three-dimensional model M D.
- Since the vertex movement amount ηi of each vertex is determined based on the past vertex movement amounts of that vertex, each part of the three-dimensional model MS can be deformed based on the past shape of that part. As a result, a specific part can be prevented from being suddenly deformed only at a certain point in time.
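Temporal smoothing against the vertex's own history can be sketched as an exponential moving average (an assumed formulation for illustration; the embodiment does not specify the exact blending rule):

```python
def temporally_smooth(current, history, alpha=0.3):
    """Blend the newly estimated vertex movement with its most recent
    value (exponential moving average), so a part is not deformed
    abruptly at a single point in time."""
    if not history:
        return current  # first frame: nothing to smooth against
    prev = history[-1]
    return alpha * current + (1.0 - alpha) * prev
```

A sudden jump from 2.0 to 10.0 is damped to 4.4 in a single frame, and only approaches 10.0 over several frames.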
- Since the weighting coefficients μi suppress the influence of portions other than the body on the evaluation value ED, the influence of parts that are unimportant for estimating the body line of the subject on the evaluation value ED can be suppressed.
- Since the body-type parameters {βk} are determined based on their history, the body-type parameters {βk} can be prevented from suddenly becoming large. As a result, an unnatural situation in which the body type of the three-dimensional model MS changes abruptly at a specific time can be prevented.
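One way to use the parameter history to rule out sudden jumps is to clamp each βk's per-frame change (a hypothetical rule for illustration; `max_step` is an arbitrary bound, not a value from the embodiment):

```python
def stabilize_body_params(new_params, history, max_step=0.2):
    """Limit how far each body-type parameter may move away from its
    previously recorded value, preventing the model's body type from
    changing abruptly at a specific time."""
    if not history:
        return list(new_params)  # no history yet: accept as-is
    prev = history[-1]
    out = []
    for new, old in zip(new_params, prev):
        # clamp the change to the range [-max_step, +max_step]
        delta = max(-max_step, min(max_step, new - old))
        out.append(old + delta)
    return out
```

An outlier estimate of 1.0 for a parameter that was 0.0 in the previous frame is limited to 0.2, while a small change passes through unchanged.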
- The case where the skeleton parameters {θj} are optimized after the body-type parameters {βk} are optimized has been described, but the order of these optimizations may be reversed.
- The case where both the body-type parameters {βk} and the skeleton parameters {θj} are optimized has been described, but only one of them may be optimized.
- The body part may be identified using the three-dimensional model MS, or it may be identified using the depth information I.
- When the three-dimensional model MS is used, the body part may be identified from the positional relationships and temporal changes of the vertices, as in the method described in the embodiment.
- When the depth information I is used, only the vertex nearest to each point on the surface SOB of the subject may be treated as the body part, and the remaining vertices as portions other than the body.
- The detected posture information J may be generated by the three-dimensional model generation device 20.
- In that case, the three-dimensional model generation device 20 generates the detected posture information J based on the depth information I.
- The skeleton parameters {θj} may be generated by the depth detection device 10 and transmitted to the three-dimensional model generation device 20.
- The frame rate is not particularly limited, and the depth detection device 10 may generate the depth information I and the detected posture information J irregularly.
- The subject may be an animal other than a human (for example, a dog or a cat).
- Each function may be realized by the depth detection device 10 or another computer.
- The functions may be shared among a plurality of computers in the three-dimensional model generation system 1.
- Functions other than the skeleton acquisition unit 32, the posture change unit 34, the surface shape acquisition unit 36, the body shape determination unit 37, and the body shape deformation unit 39 may be omitted.
Abstract
Description
Hereinafter, an example of an embodiment of a three-dimensional model generation system according to the present invention will be described. FIG. 1 is a diagram showing how a subject is photographed in the three-dimensional model generation system, and FIG. 2 is a diagram showing the hardware configuration of the three-dimensional model generation system. As shown in FIGS. 1 and 2, the three-dimensional model generation system 1 includes a depth detection device 10 and a three-dimensional model generation device 20. The depth detection device 10 and the three-dimensional model generation device 20 are connected so as to be able to exchange data by wired or wireless communication.
FIG. 5 is a diagram outlining the processing executed by the three-dimensional model generation system 1. In FIG. 5, t is the time axis. As shown in FIG. 5, the depth detection device 10 periodically transmits the depth information I and the detected posture information J to the three-dimensional model generation device 20. On receiving the depth information I and the detected posture information J at the start of each frame, the three-dimensional model generation device 20 executes a shape estimation process that estimates the shape of the subject's body (the body line), and a geometric reconstruction process that reconstructs partial details (such as wrinkles in clothing) from the estimated body line. The shape estimation process and the geometric reconstruction process are executed every frame.
FIG. 6 is a functional block diagram showing an example of the functions implemented in the three-dimensional model generation system 1. As shown in FIG. 6, in this embodiment, a case will be described in which a data storage unit 30, a depth acquisition unit 31, a skeleton acquisition unit 32, a body shape acquisition unit 33, a posture change unit 34, a detection range specifying unit 35, a surface shape acquisition unit 36, a body shape determination unit 37, a history recording unit 38, a body shape deformation unit 39, a skeleton determination unit 40, a partial deformation unit 41, and a body part specifying unit 42 are implemented in the three-dimensional model generation device 20. Here, the main part of the shape estimation process is executed by the body shape determination unit 37 and the skeleton determination unit 40, and the main part of the geometric reconstruction process is executed by the partial deformation unit 41.
The data storage unit 30 is implemented mainly by the storage unit 22. The data storage unit 30 stores data for generating the three-dimensional models MS and MD. The following data will be described here as examples of the data stored in the data storage unit 30.
(1) Depth history data storing the history of the depth information I
(2) Skeleton history data storing the history of the skeleton parameters (described later in detail)
(3) Body shape history data storing the history of the body-type parameters (described later in detail)
(4) Movement amount history data storing the history of the vertex movement amounts (described later in detail)
(5) Template data defining the three-dimensional model MT
(6) Body history data storing the history of the three-dimensional model MS
(7) Reconstruction history data storing the history of the three-dimensional model MD
The depth acquisition unit 31 is implemented mainly by the control unit 21. The depth acquisition unit 31 acquires the depth information I (FIG. 3) of the subject detected by the depth detection device 10. In this embodiment, since the three-dimensional model generation device 20 is directly connected to the depth detection device 10, the depth acquisition unit 31 receives the depth information I directly from the depth detection device 10, but it may instead receive the depth information I indirectly via another device. The depth acquisition unit 31 stores the acquired depth information in the depth history data.
The skeleton acquisition unit 32 is implemented mainly by the control unit 21. The skeleton acquisition unit 32 acquires the skeleton parameters of the subject based on the depth information I detected by the depth detection device 10. The skeleton parameters are information about the skeleton of the subject, indicating the positional relationship of the joints. The detected posture information J of FIG. 4 may be used as the skeleton parameters as it is, but here a case will be described in which the skeleton acquisition unit 32 uses the angle of each joint indicated by the detected posture information J as the skeleton parameters. The skeleton acquisition unit 32 stores the acquired skeleton parameters in the skeleton history data.
The body-type parameter acquisition unit is implemented mainly by the control unit 21. The body-type parameter acquisition unit acquires the body-type parameters. In this embodiment, the three-dimensional model MS is the three-dimensional model MT deformed by the body-type parameters. Deformation here means changing the position or shape of the surface, for example, changing the overall size or changing a partial shape. The body shape acquisition unit 33 stores the acquired body-type parameters in the body shape history data.
The posture change unit 34 is implemented mainly by the control unit 21. The posture change unit 34 changes, based on the skeleton parameters {θj}, the posture of the three-dimensional model MT, which is deformable by the body-type parameters {βk}. The posture change unit 34 changes the positional relationship of the parts of the three-dimensional model MT so that it takes the posture indicated by the skeleton parameters {θj}. The posture is the positional relationship of the body parts, here the positional relationship of the joints.
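As an illustrative analogue of posing a model from joint angles, the following places the joints of a hypothetical 2-D kinematic chain (the embodiment's MT is a full 3-D mesh with many joints, which this sketch does not attempt to reproduce):

```python
import math

def chain_positions(joint_angles, bone_lengths):
    """Place each joint of a 2-D kinematic chain from joint angles
    (each relative to the parent bone) and bone lengths, analogous to
    posing a template model from the skeleton parameters."""
    x, y, heading = 0.0, 0.0, 0.0
    positions = [(x, y)]  # root joint at the origin
    for angle, length in zip(joint_angles, bone_lengths):
        heading += angle  # accumulate relative joint rotations
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions
```

With two unit-length bones and a 90-degree bend at the second joint, the chain runs one unit along x and then one unit up along y, showing how changing a single joint angle repositions every part downstream of it.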
The detection range specifying unit 35 is implemented mainly by the control unit 21. The detection range specifying unit 35 specifies, based on the depth information I, the portion of the three-dimensional model MS that corresponds to the detection range of the depth detection device 10. The detection range is the portion of the subject's surface that can be detected, that is, the places where the infrared light of the depth detection device 10 is reflected. For example, the detection range specifying unit 35 acquires the orientation of the depth detection device 10 in three-dimensional space based on the depth information I, and specifies the detection range based on the relationship between the acquired orientation and the orientation of the surface of the three-dimensional model MS.
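A minimal sketch of one way to realize such a visibility test (an assumed approach: a back-facing check via the dot product of each vertex normal and the sensor's viewing direction; the names and the zero threshold are illustrative, not taken from the embodiment):

```python
def visible_vertices(normals, view_dir):
    """Return indices of vertices whose surface normal faces the depth
    sensor: a patch can reflect the sensor's infrared light back only
    if its normal points toward the sensor, i.e. against view_dir."""
    visible = []
    for i, (nx, ny, nz) in enumerate(normals):
        # negative dot product: the normal opposes the viewing
        # direction, so the patch faces the camera
        dot = nx * view_dir[0] + ny * view_dir[1] + nz * view_dir[2]
        if dot < 0:
            visible.append(i)
    return visible
```

For a sensor looking along +z, a vertex whose normal points back toward the sensor (0, 0, -1) is inside the detection range, while one facing away (0, 0, 1) is not.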
The surface shape acquisition unit 36 is implemented mainly by the control unit 21. The surface shape acquisition unit 36 acquires the surface shape of the subject (shown in FIG. 12, described later) based on the depth information I. The surface shape of the subject is the shape of the portion of the subject detected by the depth detection device 10, that is, the region that appears as depth in the depth information I. The surface shape of the subject is a group of three-dimensional coordinates obtained by applying a predetermined matrix transformation to the depth information I, converting the depth of each pixel of the depth information I into three-dimensional space. When the positional relationship between the depth detection device 10 and the subject is as in FIG. 1, the surface shape acquired by the surface shape acquisition unit 36 covers only the front of the subject and does not include the back.
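The per-pixel conversion from depth to three-dimensional coordinates can be sketched with a standard pinhole back-projection (an assumption for illustration; the embodiment only states that a predetermined matrix transformation is applied):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3-D point cloud using the
    pinhole camera model (fx, fy: focal lengths in pixels; cx, cy:
    principal point). Pixels with no valid depth are skipped."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # no infrared return at this pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

Skipping zero-depth pixels is what makes the resulting point cloud cover only the surfaces the sensor actually saw, e.g. the front of the subject but not the back.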
The body shape determination unit 37 is implemented mainly by the control unit 21. The body shape determination unit 37 determines the body-type parameters based on the difference between the surface shape of the three-dimensional model whose posture has been changed by the posture change unit 34 and the surface shape of the subject. The difference in shape is a positional difference in three-dimensional space, for example, the distance between vertices. The body shape determination unit 37 changes the body-type parameters so that the difference in shape becomes small, that is, so that the difference after changing the body-type parameters is smaller than the difference before changing them.
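A toy version of such a difference evaluation might look as follows (brute-force nearest-point search and hypothetical per-vertex weights; the actual evaluation value and weighting in the embodiment are not specified at this level of detail):

```python
def weighted_surface_difference(model_pts, subject_pts, weights):
    """Score how far the model surface is from the subject surface:
    for each model vertex, take the Euclidean distance to the nearest
    subject point, scaled by a per-vertex weight (so body parts can
    count more than clothing in the total)."""
    total = 0.0
    for (mx, my, mz), w in zip(model_pts, weights):
        nearest = min(
            ((mx - sx) ** 2 + (my - sy) ** 2 + (mz - sz) ** 2) ** 0.5
            for sx, sy, sz in subject_pts
        )
        total += w * nearest
    return total
```

The body shape determination then amounts to picking parameters that make this score smaller than it was before the change.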
The history recording unit 38 is implemented mainly by the control unit 21. The history recording unit 38 records the history of the body-type parameters {βk} in the data storage unit 30. The history recording unit 38 stores the body-type parameters {βk} in the body shape history data in time series, or stores them in association with the time at which they were optimized.
The body shape deformation unit 39 is implemented mainly by the control unit 21. The body shape deformation unit 39 deforms the three-dimensional model MT based on the body-type parameters determined by the body-type parameter determination unit. The body shape deformation unit 39 deforms the model by changing the positions of the vertices of the three-dimensional model MT. Since the vertices are the points that define the individual polygons, changing the position of a vertex changes the position and/or the shape of the corresponding part of the three-dimensional model MT. In other words, the body shape deformation unit 39 moves or deforms the polygons that make up the three-dimensional model MT. Although a case using triangular polygons is described here, quadrilateral or higher-order polygons may be used.
The skeleton determination unit 40 is implemented mainly by the control unit 21. The skeleton determination unit 40 determines the skeleton parameters {θj} based on the difference between the surface shape of the three-dimensional model MS and the surface shape of the subject. Since it is difficult to match the posture of the three-dimensional model MS to the posture of the subject using only the skeleton parameters {θj} that the skeleton acquisition unit 32 acquired from the detected posture information J, the skeleton determination unit 40 determines the skeleton parameters {θj} based on the evaluation value ES, thereby optimizing them.
The partial deformation unit 41 is implemented mainly by the control unit 21. The partial deformation unit 41 deforms a part of the surface shape of the three-dimensional model MS deformed by the body shape deformation unit 39, based on the difference between that part and the corresponding part of the surface shape of the subject. For example, the partial deformation unit 41 acquires, based on the partial deviation between the three-dimensional model MS and the surface shape of the subject, a vertex movement amount indicating where each vertex of the three-dimensional model MS should be. The vertex movement amount indicates where the vertex vi of the three-dimensional model MS should be moved in order to reproduce the partial details of the subject.
The body part specifying unit 42 is implemented mainly by the control unit 21. The body part specifying unit 42 specifies, based on the three-dimensional models MS and MD or the surface shape of the subject, the body part of the three-dimensional models MS and MD that corresponds to the body of the subject. The body part is specified in order to determine the weighting coefficient μti in Equation 1. The body part is a part whose shape changes little, representing a relatively rigid object; the other parts are parts whose shape changes greatly, representing relatively soft objects. For example, the body part is a part where the bare skin is exposed, and the other parts are parts where clothing, bags, and the like wrinkle (parts that move with the movement of the body).
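One conceivable way to separate rigid body parts from soft, wrinkling parts is to look at how much each vertex varies over recent frames (a hypothetical sketch; the threshold and the scalar per-vertex tracks are illustrative simplifications, not the embodiment's actual criterion):

```python
def classify_body_vertices(tracks, threshold=0.1):
    """Label a vertex as 'body' when the variance of its position over
    time is small (rigid, e.g. bare skin) and as 'non-body' when it
    varies a lot (e.g. clothing or a bag wrinkling with motion).
    `tracks` maps vertex index -> scalar positions over recent frames."""
    labels = {}
    for i, samples in tracks.items():
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        labels[i] = "body" if var < threshold else "non-body"
    return labels
```

The resulting labels could then drive the weighting coefficients, giving the low-variance body vertices more influence on the evaluation value.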
FIG. 16 is a flowchart showing an example of the processing executed in the three-dimensional model generation system. The processing shown in FIG. 16 is executed by the control units 11 and 21 operating in accordance with the programs stored in the storage units 12 and 22, respectively. The processing described below is an example of the processing executed by the functional blocks shown in FIG. 6.
The present invention is not limited to the embodiment described above, and can be modified as appropriate without departing from the spirit of the present invention.
Claims (11)
- Skeleton acquisition means for acquiring skeleton parameters based on depth information of a subject detected by a depth detection device;
posture change means for changing, based on the skeleton parameters, a posture of a three-dimensional model that is deformable by body-type parameters;
surface shape acquisition means for acquiring a surface shape of the subject based on the depth information;
body shape determination means for determining the body-type parameters based on a difference between a surface shape of the three-dimensional model whose posture has been changed by the posture change means and the surface shape of the subject; and
body shape deformation means for deforming the three-dimensional model based on the body-type parameters determined by the body-type parameter determination means;
a three-dimensional model generation system comprising the above means. - The three-dimensional model generation system further comprising skeleton determination means for determining the skeleton parameters based on a difference between the surface shape of the three-dimensional model and the surface shape of the subject,
wherein the posture change means updates the posture of the three-dimensional model based on the skeleton parameters determined by the skeleton determination means,
according to claim 1. - The three-dimensional model generation system further comprising partial deformation means for deforming a part of the surface shape of the three-dimensional model deformed by the body shape deformation means, based on a difference between that part and the surface shape of the subject corresponding to that part,
according to claim 1 or 2. - The three-dimensional model generation system further comprising detection range specifying means for specifying, based on the depth information, a portion of the three-dimensional model that corresponds to a detection range of the depth detection device,
wherein the partial deformation means deforms the portion of the three-dimensional model specified by the detection range specifying means,
according to claim 3. - The three-dimensional model generation system wherein the partial deformation means deforms each part of the three-dimensional model based on how its surroundings are deformed,
according to claim 3 or 4. - The three-dimensional model generation system wherein the partial deformation means deforms each part of the three-dimensional model based on a past shape of that part,
according to any one of claims 3 to 5. - The three-dimensional model generation system further comprising body part specifying means for specifying, based on the three-dimensional model or the surface shape of the subject, a body part of the three-dimensional model that corresponds to a body of the subject,
wherein, when determining the body-type parameters, the body shape determination means weights the difference corresponding to the body part more heavily than the differences of the other parts,
according to any one of claims 1 to 6. - The three-dimensional model generation system further comprising detection range specifying means for specifying, based on the depth information, vertices of the three-dimensional model that correspond to a detection range of the depth detection device,
wherein the body shape determination means determines the body-type parameters based on a difference between the surface shape of the three-dimensional model in the detection range and the surface shape of the subject,
according to any one of claims 1 to 7. - The three-dimensional model generation system further comprising history recording means for recording a history of the body-type parameters in a storage means,
wherein the body shape determination means determines the body-type parameters based on the history recorded by the history recording means,
according to any one of claims 1 to 8. - A skeleton acquisition step of acquiring skeleton parameters based on depth information of a subject detected by a depth detection device;
a posture change step of changing, based on the skeleton parameters, a posture of a three-dimensional model that is deformable by body-type parameters;
a surface shape acquisition step of acquiring a surface shape of the subject based on the depth information;
a body shape determination step of determining the body-type parameters based on a difference between a surface shape of the three-dimensional model whose posture has been changed in the posture change step and the surface shape of the subject; and
a body shape deformation step of deforming the three-dimensional model based on the body-type parameters determined in the body-type parameter determination step;
a three-dimensional model generation method comprising the above steps. - Skeleton acquisition means for acquiring skeleton parameters based on depth information of a subject detected by a depth detection device;
posture change means for changing, based on the skeleton parameters, a posture of a three-dimensional model that is deformable by body-type parameters;
surface shape acquisition means for acquiring a surface shape of the subject based on the depth information;
body shape determination means for determining the body-type parameters based on a difference between a surface shape of the three-dimensional model whose posture has been changed by the posture change means and the surface shape of the subject; and
body shape deformation means for deforming the three-dimensional model based on the body-type parameters determined by the body-type parameter determination means,
a program for causing a computer to function as the above means.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017500096A JP6116784B1 (ja) | 2016-02-16 | 2016-02-16 | 3次元モデル生成システム、3次元モデル生成方法、及びプログラム |
PCT/JP2016/054404 WO2017141344A1 (ja) | 2016-02-16 | 2016-02-16 | 3次元モデル生成システム、3次元モデル生成方法、及びプログラム |
US16/066,674 US10593106B2 (en) | 2016-02-16 | 2016-02-16 | Three-dimensional model generating system, three-dimensional model generating method, and program |
CN201680076775.4A CN108475439B (zh) | 2016-02-16 | 2016-02-16 | 三维模型生成***、三维模型生成方法和记录介质 |
EP16890486.0A EP3382648A4 (en) | 2016-02-16 | 2016-02-16 | SYSTEM FOR GENERATING A THREE-DIMENSIONAL MODEL, METHOD FOR GENERATING A THREE-DIMENSIONAL MODEL AND PROGRAM |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/054404 WO2017141344A1 (ja) | 2016-02-16 | 2016-02-16 | 3次元モデル生成システム、3次元モデル生成方法、及びプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017141344A1 true WO2017141344A1 (ja) | 2017-08-24 |
Family
ID=58666873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/054404 WO2017141344A1 (ja) | 2016-02-16 | 2016-02-16 | 3次元モデル生成システム、3次元モデル生成方法、及びプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US10593106B2 (ja) |
EP (1) | EP3382648A4 (ja) |
JP (1) | JP6116784B1 (ja) |
CN (1) | CN108475439B (ja) |
WO (1) | WO2017141344A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107633552A (zh) * | 2017-09-01 | 2018-01-26 | 上海视智电子科技有限公司 | 基于体感交互的创建三维几何模型的方法及*** |
US20180225858A1 (en) * | 2017-02-03 | 2018-08-09 | Sony Corporation | Apparatus and method to generate realistic rigged three dimensional (3d) model animation for view-point transform |
WO2019044333A1 (ja) * | 2017-08-31 | 2019-03-07 | らしさ・ドット・コム株式会社 | シミュレーション装置、シミュレーション方法、及びコンピュータプログラム |
JPWO2020178957A1 (ja) * | 2019-03-04 | 2021-10-21 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019015553A (ja) * | 2017-07-05 | 2019-01-31 | ソニーセミコンダクタソリューションズ株式会社 | 情報処理装置、情報処理方法および個体撮像装置 |
EP3693923A4 (en) | 2017-10-03 | 2020-12-16 | Fujitsu Limited | RECOGNITION PROGRAM, RECOGNITION PROCESS AND RECOGNITION DEVICE |
CN111801708B (zh) | 2017-12-22 | 2022-04-29 | 奇跃公司 | 使用光线投射和实时深度进行遮挡渲染的方法 |
CN109144252B (zh) * | 2018-08-01 | 2021-04-27 | 百度在线网络技术(北京)有限公司 | 对象确定方法、装置、设备和存储介质 |
CN110909581B (zh) * | 2018-09-18 | 2023-04-14 | 北京市商汤科技开发有限公司 | 数据处理方法及装置、电子设备及存储介质 |
EP3953857A4 (en) * | 2019-04-12 | 2022-11-16 | INTEL Corporation | TECHNOLOGY TO AUTOMATICALLY IDENTIFY THE FRONT BODY ORIENTATION OF INDIVIDUALS IN REAL-TIME MULTICAMERA VIDEO STREAMS |
WO2021005708A1 (ja) * | 2019-07-09 | 2021-01-14 | 株式会社ソニー・インタラクティブエンタテインメント | スケルトンモデル更新装置、スケルトンモデル更新方法及びプログラム |
JP2022512262A (ja) * | 2019-11-21 | 2022-02-03 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | 画像処理方法及び装置、画像処理機器並びに記憶媒体 |
CN110930298A (zh) * | 2019-11-29 | 2020-03-27 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、图像处理设备及存储介质 |
CN111105348A (zh) * | 2019-12-25 | 2020-05-05 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、图像处理设备及存储介质 |
CN111260764B (zh) * | 2020-02-04 | 2021-06-25 | 腾讯科技(深圳)有限公司 | 一种制作动画的方法、装置及存储介质 |
CN112288890A (zh) * | 2020-11-20 | 2021-01-29 | 深圳羽迹科技有限公司 | 一种模型的编辑方法及*** |
CN114372377B (zh) * | 2022-03-21 | 2023-08-01 | 江西珉轩智能科技有限公司 | 一种基于3d时空引擎的工程信息模型构建方法 |
CN115171198B (zh) * | 2022-09-02 | 2022-11-25 | 腾讯科技(深圳)有限公司 | 模型质量评估方法、装置、设备及存储介质 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015079362A (ja) * | 2013-10-17 | 2015-04-23 | セーレン株式会社 | 試着支援装置及び方法 |
JP2015531098A (ja) * | 2012-06-21 | 2015-10-29 | マイクロソフト コーポレーション | デプスカメラを使用するアバター構築 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5065470B2 (ja) * | 2010-12-07 | 2012-10-31 | 楽天株式会社 | サーバ、情報管理方法、情報管理プログラム、及びそのプログラムを記録するコンピュータ読み取り可能な記録媒体 |
CN103197757A (zh) * | 2012-01-09 | 2013-07-10 | 癸水动力(北京)网络科技有限公司 | 一种沉浸式虚拟现实***及其实现方法 |
TWI521469B (zh) * | 2012-06-27 | 2016-02-11 | Reallusion Inc | Two - dimensional Roles Representation of Three - dimensional Action System and Method |
US9536338B2 (en) * | 2012-07-31 | 2017-01-03 | Microsoft Technology Licensing, Llc | Animating objects using the human body |
US9524582B2 (en) | 2014-01-28 | 2016-12-20 | Siemens Healthcare Gmbh | Method and system for constructing personalized avatars using a parameterized deformable mesh |
CN104794722A (zh) * | 2015-04-30 | 2015-07-22 | 浙江大学 | 利用单个Kinect计算着装人体三维净体模型的方法 |
- 2016
- 2016-02-16 EP EP16890486.0A patent/EP3382648A4/en active Pending
- 2016-02-16 WO PCT/JP2016/054404 patent/WO2017141344A1/ja active Application Filing
- 2016-02-16 CN CN201680076775.4A patent/CN108475439B/zh active Active
- 2016-02-16 US US16/066,674 patent/US10593106B2/en active Active
- 2016-02-16 JP JP2017500096A patent/JP6116784B1/ja active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015531098A (ja) * | 2012-06-21 | 2015-10-29 | マイクロソフト コーポレーション | デプスカメラを使用するアバター構築 |
JP2015079362A (ja) * | 2013-10-17 | 2015-04-23 | セーレン株式会社 | 試着支援装置及び方法 |
Non-Patent Citations (4)
Title |
---|
KAORU SUGITA ET AL.: "Jinbutsu Silhouette eno Jikukan Ichi Awase ni yoru Ifuku Gazo no Gosei", THE TRANSACTIONS OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS D, vol. J97-D, no. 9, 9 January 2014 (2014-01-09), pages 1519 - 1528, XP009511542, ISSN: 1880-4535 * |
KIMIO HIRAO: "Pose Estimation of Human Upper Body Using Multi-joint CG Model and Depth Images", IEICE TECHNICAL REPORT, PATTERN RECOGNITION AND MEDIA UNDERSTANDING, vol. 104, no. 573, 14 January 2005 (2005-01-14), pages 79 - 84, XP055494743, ISSN: 0913-5685 * |
MASAHIRO SEKINE ET AL.: "Clothes Image Synthesis Technologies for Virtual Fitting System Using Body Shape Estimation", TOSHIBA REVIEW, vol. 70, no. 5, May 2015 (2015-05-01), pages 42 - 45, XP009511539, ISSN: 0372-0462 * |
See also references of EP3382648A4 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180225858A1 (en) * | 2017-02-03 | 2018-08-09 | Sony Corporation | Apparatus and method to generate realistic rigged three dimensional (3d) model animation for view-point transform |
WO2019044333A1 (ja) * | 2017-08-31 | 2019-03-07 | らしさ・ドット・コム株式会社 | シミュレーション装置、シミュレーション方法、及びコンピュータプログラム |
CN107633552A (zh) * | 2017-09-01 | 2018-01-26 | 上海视智电子科技有限公司 | 基于体感交互的创建三维几何模型的方法及*** |
JPWO2020178957A1 (ja) * | 2019-03-04 | 2021-10-21 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP7294402B2 (ja) | 2019-03-04 | 2023-06-20 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム |
US11803615B2 (en) | 2019-03-04 | 2023-10-31 | Nec Corporation | Generating 3D training data from 2D images |
Also Published As
Publication number | Publication date |
---|---|
US10593106B2 (en) | 2020-03-17 |
CN108475439B (zh) | 2022-06-17 |
US20190156564A1 (en) | 2019-05-23 |
JPWO2017141344A1 (ja) | 2018-02-22 |
CN108475439A (zh) | 2018-08-31 |
JP6116784B1 (ja) | 2017-04-19 |
EP3382648A1 (en) | 2018-10-03 |
EP3382648A4 (en) | 2019-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6116784B1 (ja) | 3次元モデル生成システム、3次元モデル生成方法、及びプログラム | |
Zheng et al. | Hybridfusion: Real-time performance capture using a single depth sensor and sparse imus | |
JP5833189B2 (ja) | 被写体の三次元表現を生成する方法およびシステム | |
JP6560480B2 (ja) | 画像処理システム、画像処理方法、及びプログラム | |
JP5931215B2 (ja) | 姿勢を推定する方法及び装置 | |
WO2020054442A1 (ja) | 関節位置の取得方法及び装置、動作の取得方法及び装置 | |
EP2893479B1 (en) | System and method for deriving accurate body size measures from a sequence of 2d images | |
US11948376B2 (en) | Method, system, and device of generating a reduced-size volumetric dataset | |
CN104508709A (zh) | 使用人体对对象进行动画化 | |
JP2014522058A (ja) | 三次元オブジェクトのモデリング、フィッティング、およびトラッキング | |
JP2012083955A (ja) | 動作モデル学習装置、3次元姿勢推定装置、動作モデル学習方法、3次元姿勢推定方法およびプログラム | |
JP2021105887A (ja) | 3dポーズ取得方法及び装置 | |
US11908151B2 (en) | System and method for mobile 3D scanning and measurement | |
Michel et al. | Markerless 3d human pose estimation and tracking based on rgbd cameras: an experimental evaluation | |
KR102436906B1 (ko) | 대상자의 보행 패턴을 식별하는 방법 및 이를 수행하는 전자 장치 | |
US20130069939A1 (en) | Character image processing apparatus and method for footskate cleanup in real time animation | |
JP2014085933A (ja) | 3次元姿勢推定装置、3次元姿勢推定方法、及びプログラム | |
Jatesiktat et al. | Personalized markerless upper-body tracking with a depth camera and wrist-worn inertial measurement units | |
US20240013415A1 (en) | Methods and systems for representing a user | |
CN116266408A (zh) | 体型估计方法、装置、存储介质及电子设备 | |
Payandeh et al. | Experimental Study of a Deep‐Learning RGB‐D Tracker for Virtual Remote Human Model Reconstruction | |
KR20240035292A (ko) | 카메라 캘리브레이션(camera calibration)을 수행하는 전자 장치 및 그 동작 방법 | |
EP4360050A1 (en) | Method and system for obtaining human body size information from image data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2017500096 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16890486 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2016890486 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2016890486 Country of ref document: EP Effective date: 20180627 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |