CN116266408A - Body type estimating method, body type estimating device, storage medium and electronic equipment - Google Patents

Body type estimating method, body type estimating device, storage medium and electronic equipment

Info

Publication number
CN116266408A
CN116266408A (application CN202111529770.1A / CN202111529770A)
Authority
CN
China
Prior art keywords
human body
information
target
model
estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111529770.1A
Other languages
Chinese (zh)
Inventor
曾凡涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111529770.1A priority Critical patent/CN116266408A/en
Publication of CN116266408A publication Critical patent/CN116266408A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

A body type estimating method and device, a storage medium, and an electronic apparatus. The method includes: acquiring an object image of an object human body, and performing posture estimation on the object image through a posture estimation model to obtain human body posture estimation information and body type estimation information of the object human body, as well as device posture estimation information of the image acquisition device of the object image; performing skeleton point detection on the object image to obtain skeleton point information of the object human body; performing human body contour detection on the object image to obtain human body contour information of the object human body; and optimizing the body type estimation information according to the human body posture estimation information and the device posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain target body type information of the object human body. The method and device can improve the efficiency of obtaining human body type information.

Description

Body type estimating method, body type estimating device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a body type estimating method, a device, a storage medium, and an electronic apparatus.
Background
Body type information serves as an important indicator of the human body in many aspects of life. For example, a user refers to his or her body type information to purchase well-fitting clothing, to monitor his or her health condition, and so on. In the related art, the body type information of a subject is usually measured manually, which is inefficient.
Disclosure of Invention
The application provides a body type estimating method, a body type estimating device, a storage medium and electronic equipment, which can improve the efficiency of obtaining body type information of a human body.
The body type estimation method provided by the application comprises the following steps:
acquiring an object image of an object human body, and carrying out posture estimation on the object image through a posture estimation model to obtain human body posture estimation information and body type estimation information of the object human body and equipment posture estimation information of image acquisition equipment of the object image;
performing skeleton point detection on the object image to obtain skeleton point information of the object human body;
detecting the human body contour of the object image to obtain human body contour information of the object human body;
and optimizing the body type estimation information according to the human body posture estimation information and the equipment posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain target body type information of the object human body.
The body type estimating device provided by the application comprises:
The estimating module is used for acquiring an object image of the object human body, and performing posture estimation on the object image through the posture estimation model to obtain human body posture estimation information and body type estimation information of the object human body and equipment posture estimation information of the image acquisition equipment of the object image;
The first detection module is used for detecting skeleton points of the object image to obtain skeleton point information of the object human body;
The second detection module is used for detecting the human body contour of the object image to obtain human body contour information of the object human body;
The optimization module is used for optimizing the body type estimation information according to the human body posture estimation information and the equipment posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain target body type information of the object human body.
The storage medium provided herein has a computer program stored thereon that, when loaded by a processor, performs the steps of the body type estimation method provided herein.
The electronic device provided by the application comprises a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the steps of the body type estimation method provided by the application by loading the computer program.
In the method, posture estimation is performed on an object image of an object human body by using a posture estimation model to obtain human body posture estimation information and body type estimation information of the object human body in three-dimensional space, as well as equipment posture estimation information of the image acquisition equipment of the object image in three-dimensional space. In addition, skeleton point information and human body contour information of the object human body in two-dimensional space are extracted from the object image, and the body type estimation information is optimized according to the human body posture estimation information and the equipment posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain target body type information of the object human body. Compared with the related art, replacing traditional manual measurement with artificial-intelligence-based posture estimation reduces manual operation and improves the efficiency of obtaining body type information. In addition, because the body type estimation information is optimized with the skeleton point information and the human body contour information of the object human body as constraints, the resulting target body type information matches the skeleton point information and the human body contour information of the object human body and therefore reflects the body type of the object human body more accurately.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of a body type estimation system according to an embodiment of the present application.
Fig. 2 is a flow chart of a body type estimation method according to an embodiment of the present application.
Fig. 3 shows human body contour information in the form of a mask image according to an embodiment of the present application.
Fig. 4 is an exemplary diagram of capturing a human body sub-image from an object image according to an embodiment of the present application.
Fig. 5 is an exemplary diagram of a garment selection interface provided in an embodiment of the present application.
Fig. 6 is a schematic diagram of obtaining deformation vectors of each triangular face piece of the clothes in the embodiment of the present application.
Fig. 7 is another schematic view of obtaining deformation vectors of each of the triangular patches of clothing in the embodiment of the present application.
Fig. 8 is a block diagram of the configuration of the body type estimating apparatus provided in the embodiment of the present application.
Fig. 9 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
It should be noted that the principles of the present application are illustrated as implemented in a suitable computing environment. The following description is based on illustrated embodiments of the present application and should not be taken as limiting other embodiments not described in detail herein. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Relational terms such as first and second, and the like may be used solely to distinguish one object or operation from another object or operation without necessarily limiting the actual sequential relationship between the objects or operations. In the description of the embodiments of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Artificial intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes Machine Learning (ML), in which Deep Learning (DL) is a new research direction that brings machine learning closer to its original goal, namely artificial intelligence. At present, deep learning is mainly applied to fields such as computer vision and natural language processing.
Deep learning learns the inherent regularities and representation hierarchies of sample data, and the information obtained during such learning greatly aids the interpretation of data such as text, images, and sound. Using deep learning techniques and corresponding training data sets, network models realizing different functions can be trained; for example, a deep learning network for gender classification can be trained on one training data set, a deep learning network for image optimization can be trained on another training data set, and so on.
In order to improve the efficiency of obtaining body type information, the present application introduces deep learning into body type estimation, and accordingly provides a body type estimating method, a body type estimating device, a storage medium, and an electronic apparatus. The body type estimation method may be performed by the electronic device.
Referring to fig. 1, the present application further provides a body type estimating system. As shown in fig. 1, the body type estimating system includes an electronic device 100. For example, when the electronic device 100 is configured with an image acquisition device such as a camera, a subject human body may be photographed by the camera (if a plurality of cameras are configured, one of them may be used) to obtain a subject image of the subject human body. The subject image is input into a trained posture estimation model, and posture estimation is performed on the subject image by the posture estimation model to obtain an estimation result including human body posture estimation information and body type estimation information of the subject human body and device posture estimation information of the image acquisition device. In addition, skeleton point detection and human body contour detection are respectively performed on the subject image to obtain skeleton point information and human body contour information of the subject human body. Then, with the skeleton point information and the human body contour information as constraints, the body type estimation information is optimized according to the human body posture estimation information and the device posture estimation information to obtain target body type information of the subject human body.
The electronic device 100 may be any device equipped with a processor and having processing capability, such as a mobile electronic device like a smart phone, tablet computer, palm computer, or notebook computer, or a stationary electronic device like a desktop computer, television, or server.
In addition, as shown in fig. 1, the body type estimation system may further include a storage device 200 for storing data including, but not limited to, raw data, intermediate data, result data, etc. obtained in the body type estimation process, for example, the electronic device 100 may store the acquired object image, the human body posture estimation information, the body type estimation information, the device posture estimation information, the bone point information and the human body contour information extracted from the object image, and the finally optimized body type estimation information in the storage device 200.
It should be noted that, the schematic view of the body type estimation system shown in fig. 1 is only an example, and the body type estimation system and the scene described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided in the embodiments of the present application, and those skilled in the art can know that, with the evolution of the body type estimation system and the appearance of a new service scenario, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
Referring to fig. 2, fig. 2 is a flow chart of a body type estimating method according to an embodiment of the present application. As shown in fig. 2, the flow of the body type estimation method provided in the embodiment of the present application may be as follows:
in S310, an object image of the object human body is acquired, and the object image is subjected to pose estimation by the pose estimation model, so as to obtain human body pose estimation information, body shape estimation information of the object human body, and device pose estimation information of an image acquisition device of the object image.
It should be noted that the subject human body may be any human body whose body type needs to be estimated; for example, if body type estimation is required for user A, the human body of user A is the subject human body. Body type estimation can be understood as estimating body type information of the subject human body without physical measurement, the body type information including but not limited to height, chest circumference, hip circumference, waist circumference, arm width, leg length, and the like.
Accordingly, in this embodiment, the electronic device first obtains an object image of the object human body that needs to be subjected to body shape estimation. There is no particular limitation on how the electronic device obtains the object image of the object human body, and it can be configured by those skilled in the art according to actual needs.
For example, when the electronic device is configured with an image acquisition device such as a camera, the object image of the object human body can be obtained by shooting the object human body with the configured camera;
for another example, the electronic device may further acquire, from other electronic devices, an object image of the object human body captured by the other electronic devices;
for another example, the electronic device may also download the object image of the object body from the network.
It should be noted that a posture estimation model is trained in advance in the present application. The posture estimation model is configured to take an object image containing a human body as input, perform posture estimation on the human body in the object image, and output human body posture information, body type information, and device posture information of the image acquisition device of the object image. The structure and training mode of the posture estimation model are not particularly limited here and can be selected by a person skilled in the art according to actual needs.
For example, the pose estimation model may be composed of three major parts, a feature encoding network, a feature decoding network, and an estimation network, respectively, wherein the estimation network includes a first estimation sub-network, a second estimation sub-network, and a third estimation sub-network.
The feature encoding network is configured to perform feature encoding on an input object image to obtain encoded features of the object image;
the feature decoding network is configured to perform feature decoding on the encoded features output by the feature encoding network to obtain decoded features;
the first estimation sub-network is configured to estimate human body posture information from the decoded features;
the second estimation sub-network is configured to estimate body type information from the decoded features;
the third estimation sub-network is configured to estimate device posture information from the decoded features. A structural sketch follows below.
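As an illustration only, the encoder-decoder structure with three estimation sub-networks described above could be sketched as follows in PyTorch; the layer sizes, module names, and output dimensions (for example 24 joints, 10 body type parameters, and a 3-value camera vector) are assumptions for illustration and are not taken from the patent.

```python
# Minimal sketch of the described posture estimation model (assumed sizes and names).
import torch
import torch.nn as nn

class PoseEstimationModel(nn.Module):
    def __init__(self, feat_dim=2048, num_joints=24, shape_dim=10):
        super().__init__()
        # Feature encoding network: object image -> encoded features (backbone is assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # Feature decoding network: encoded features -> decoded features.
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 1024), nn.ReLU())
        # Three estimation sub-networks.
        self.pose_head = nn.Linear(1024, num_joints * 6)   # human body posture information
        self.shape_head = nn.Linear(1024, shape_dim)        # body type information
        self.cam_head = nn.Linear(1024, 3)                  # device (camera) posture information

    def forward(self, image):
        feat = self.decoder(self.encoder(image))
        return self.pose_head(feat), self.shape_head(feat), self.cam_head(feat)
```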
Accordingly, based on the above posture estimation model, the electronic device inputs the acquired object image into the posture estimation model after acquiring the object image of the object human body, performs posture estimation on the input object image through the posture estimation model, marks the human body posture information output by the posture estimation model as human body posture estimation information, marks the body type information output by the posture estimation model as body type estimation information, and marks the device posture information output by the posture estimation model as device posture estimation information.
The human body posture estimation information at least describes translation information and rotation information of the skeleton points of the subject human body in three-dimensional space, and the device posture estimation information at least describes translation information and rotation information of the image acquisition device of the subject image in three-dimensional space.
In S320, the bone-point detection is performed on the object image to obtain bone-point information of the object human body.
In this embodiment, in addition to performing pose estimation in three-dimensional space on the object image, the electronic device performs skeletal point detection on the object image in two-dimensional space according to the configured skeletal point detection algorithm, so as to obtain skeletal point information of the object human body. Wherein the bone point information is used to describe the position of bone points of the subject's body (including but not limited to head, neck, shoulder, hand, hip, knee, foot, etc.) in two dimensions of the subject's image. The algorithm for detecting the bone points is not particularly limited, and may be configured by those skilled in the art according to actual needs.
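As an example only, an off-the-shelf 2D keypoint detector can provide such skeleton point information; the sketch below uses MediaPipe Pose purely as an illustrative choice, since the patent does not specify a particular detection algorithm.

```python
# Sketch: 2D skeleton point detection with an off-the-shelf detector (illustrative only).
import cv2
import mediapipe as mp

def detect_skeleton_points(object_image_bgr):
    h, w = object_image_bgr.shape[:2]
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        result = pose.process(cv2.cvtColor(object_image_bgr, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks is None:
        return []
    # Convert normalized landmarks to pixel coordinates in the two-dimensional object image.
    return [(lm.x * w, lm.y * h) for lm in result.pose_landmarks.landmark]
```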
In S330, the human body contour detection is performed on the object image to obtain human body contour information of the object human body.
In this embodiment, in addition to performing pose estimation in three-dimensional space on the object image, the electronic device performs human body contour detection on the object image in two-dimensional space according to the configured human body contour detection algorithm, so as to obtain human body contour information of the object human body. The human body outline is used for describing the outline of the human body of the object in the object image. The human body contour detection algorithm is not particularly limited here, and may be configured by those skilled in the art according to actual needs.
For example, the electronic device performs human body contour detection on the object image according to the configured human body contour detection algorithm and outputs human body contour information in the form of a mask image. In the mask image, pixels with a value of 0 may represent the human body and pixels with a value of 255 may represent the non-human-body region, or, conversely, pixels with a value of 255 may represent the human body and pixels with a value of 0 may represent the non-human-body region. For example, referring to fig. 3, a mask image is shown that is composed of only the two pixel values 0 and 255: the pixels with a value of 0 form a black area representing the human body, and the pixels with a value of 255 form a white area representing the non-human-body region, so that the mask as a whole describes the human body contour.
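As a small illustration of the mask convention just described, the following sketch extracts the human contour from such a 0/255 mask with OpenCV; the assumption here is that 0 marks human pixels, matching the fig. 3 example.

```python
# Sketch: extract the human body contour from a 0/255 mask image (assumed: 0 = human).
import cv2
import numpy as np

def contour_from_mask(mask: np.ndarray):
    human = (mask == 0).astype(np.uint8) * 255   # binary image where human pixels become white
    contours, _ = cv2.findContours(human, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest contour, assumed to be the subject human body.
    return max(contours, key=cv2.contourArea) if contours else None
```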
It should be noted that the posture estimation, skeleton point detection, and human body contour detection of the object image are not limited by the above step numbers: they may be performed sequentially in the numbered order, sequentially in another order, or in parallel.
For example, the electronic device may run three threads, through which pose estimation, skeletal point detection, and human body contour detection are performed on the object image in parallel.
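For instance, the three detections could be dispatched to a thread pool as sketched below; the three task functions passed in are placeholders standing for the posture estimation, skeleton point detection, and contour detection routines.

```python
# Sketch: run posture estimation, skeleton point detection and contour detection in parallel.
from concurrent.futures import ThreadPoolExecutor

def analyze(object_image, estimate_pose, detect_keypoints, detect_contour):
    with ThreadPoolExecutor(max_workers=3) as pool:
        f_pose = pool.submit(estimate_pose, object_image)
        f_kpts = pool.submit(detect_keypoints, object_image)
        f_mask = pool.submit(detect_contour, object_image)
        return f_pose.result(), f_kpts.result(), f_mask.result()
```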
In S340, the body type estimation information is optimized according to the body posture estimation information and the device posture estimation information with the skeletal point information and the body contour information as constraints, to obtain target body type information of the subject body.
As described above, after the human body posture estimation information and the body type estimation information of the subject human body in three-dimensional space and the device posture estimation information of the image acquisition device of the subject image in three-dimensional space are obtained by the posture estimation model, and the skeleton point information and the human body contour information of the subject human body in two-dimensional space are detected from the subject image, the body type estimation information is iteratively optimized according to a configured nonlinear optimization strategy with the skeleton point information and the human body contour information as constraints, and the optimized body type information that matches the skeleton point information and the human body contour information is recorded as the target body type information. The nonlinear optimization strategy to be adopted is not particularly limited here and can be configured by a person skilled in the art according to actual needs.
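One possible realization of such a constrained iterative optimization, shown below purely as a sketch, is to minimize the reprojection error of the model's joints against the detected 2D skeleton points together with a silhouette term against the contour mask using a SciPy solver; the helpers model_joints_2d and model_silhouette, the 0.1 weight, and the choice of solver are assumptions, not the patent's prescribed strategy.

```python
# Sketch: refine body type parameters (betas) so the projected model matches the detected
# 2D skeleton points and the human contour mask. Helper functions are hypothetical.
import numpy as np
from scipy.optimize import minimize

def refine_body_type(betas_init, pose, cam, keypoints_2d, contour_mask,
                     model_joints_2d, model_silhouette):
    def energy(betas):
        joints = model_joints_2d(betas, pose, cam)                       # (N, 2) projected joints
        joint_term = np.sum((joints - keypoints_2d) ** 2)                # skeleton point constraint
        sil = model_silhouette(betas, pose, cam).astype(float)           # binary mask of posed model
        contour_term = np.sum((sil - contour_mask.astype(float)) ** 2)   # human contour constraint
        return joint_term + 0.1 * contour_term                           # 0.1: assumed weight
    # Derivative-free iterative optimization of the body type parameters.
    return minimize(energy, betas_init, method="Powell").x
```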
From the above, the present application uses the posture estimation model to perform posture estimation on the object image of the object human body, obtaining the human body posture estimation information and the body type estimation information of the object human body in three-dimensional space as well as the device posture estimation information of the image acquisition device of the object image in three-dimensional space. Compared with the related art, replacing traditional manual measurement with artificial-intelligence-based posture estimation reduces manual operation and improves the efficiency of obtaining body type information. In addition, because the body type estimation information is optimized with the skeleton point information and the human body contour information of the object human body as constraints, the resulting target body type information matches the skeleton point information and the human body contour information of the object human body and therefore reflects the body type of the object human body more accurately.
In an alternative embodiment, to further improve the accuracy of body type estimation, the acquired object images include object images of the subject human body at a plurality of preset angles, and optimizing the body type estimation information according to the human body posture estimation information and the device posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain the target body type information of the subject human body includes:
fusing body type estimation information corresponding to a plurality of object images with preset angles to obtain initial body type information of an object human body;
and taking skeleton point information and human body contour information corresponding to the object images with the plurality of preset angles as constraints, and carrying out optimization processing on the initial body type information according to human body posture estimation information and equipment posture estimation information corresponding to the object images with the plurality of preset angles to obtain target body type information of the object human body.
In this embodiment, the number and the value of the preset angles are not particularly limited, and may be configured by those skilled in the art according to actual needs.
For example, when the electronic device is provided with an image acquisition device such as a camera, a subject human body can be photographed by the camera according to a plurality of different preset angles, thereby obtaining a plurality of preset angle subject images of the subject human body.
Object images of the object human body at the plurality of preset angles can be captured in several ways: the object human body may stand still while the electronic device is moved to obtain different shooting angles, or the electronic device may be fixed while the object human body rotates in place to obtain different shooting angles. The following description takes the front, left, right, and back angles as an example of the plurality of preset angles.
For example, when shooting the target human body, the target human body stands still while another person holds the electronic device and photographs the target human body from the front, back, left, and right, thereby obtaining target images at these four different preset angles.
For another example, when shooting the object human body, the object human body may fix the electronic device, set a timed-shooting interval, and set the number of shots to 4. After this setup, the object human body first faces the electronic device, so the first shot captures the front of the object human body; the object human body then rotates 90 degrees clockwise in place so that its left side faces the electronic device, and the second shot captures the left side; the object human body rotates another 90 degrees in place so that its back faces the electronic device, and the third shot captures the back; the object human body rotates another 90 degrees in place so that its right side faces the electronic device, and the fourth shot captures the right side. In this way, the electronic device captures object images of the front, left, back, and right of the object human body at four different preset angles.
According to the posture estimation mode, the bone point detection mode and the human body contour detection mode described in the above embodiments, the electronic device performs posture estimation, bone point detection and human body contour detection on the object image of each preset angle, and correspondingly obtains human body posture estimation information and device posture estimation information of the object image of each preset angle, and bone point information and human body contour information of the object image of each preset angle.
In this embodiment, when optimizing the body type information, the body type information is not optimized separately for each preset angle. Instead, the body type estimation information corresponding to the object images at the plurality of preset angles is first fused according to a configured fusion strategy, and the fused body type information is recorded as initial body type information. Then, with the skeleton point information and the human body contour information corresponding to the object images at the plurality of preset angles as constraints, the initial body type information is iteratively optimized according to the configured nonlinear optimization strategy and the human body posture estimation information and device posture estimation information corresponding to the object images at the plurality of preset angles, so as to obtain target body type information that matches the skeleton point information and the human body contour information of the subject human body at the plurality of different preset angles.
It should be noted that, in this embodiment, the configuration of the fusion strategy is not particularly limited and can be configured by a person skilled in the art according to actual needs.
For example, the fusion strategy may be configured to take the average of the body type estimation information corresponding to the object images at all preset angles;
for another example, the fusion strategy may be configured to compute a weighted sum of the body type estimation information corresponding to the object images at the plurality of preset angles according to a weight pre-assigned to each preset angle. A sketch of both strategies follows below.
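A minimal sketch of both example fusion strategies, assuming each per-angle body type estimate is a parameter vector:

```python
# Sketch: fuse per-angle body type estimates into the initial body type information.
import numpy as np

def fuse_body_type(estimates, weights=None):
    """estimates: list of per-angle body type parameter vectors (e.g. front/left/right/back)."""
    estimates = np.stack(estimates)
    if weights is None:
        return estimates.mean(axis=0)                                   # simple average strategy
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * estimates).sum(axis=0) / weights.sum()   # weighted-sum strategy
```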
In an optional embodiment, to further improve accuracy and efficiency of body type estimation, before performing posture estimation on the object image through the posture estimation model to obtain human body posture estimation information and body type estimation information of the human body of the object and device posture estimation information of the image acquisition device of the object image, the method further includes:
detecting the human body region of the object image to obtain a human body boundary frame of the object image;
according to the human body boundary box, intercepting a human body sub-image in the object image;
performing pose estimation on the object image through a pose estimation model to obtain human body pose estimation information and body shape estimation information of a human body of the object and equipment pose estimation information of an image acquisition equipment of the object image, wherein the method comprises the following steps:
And carrying out posture estimation on the human body sub-images through the posture estimation model to obtain human body posture estimation information and body type estimation information of the human body of the object and equipment posture estimation information of the image acquisition equipment of the image of the object.
In this embodiment, the object image is not input into the pose estimation model entirely, but part of the image content related to the human body is input into the pose estimation model to perform pose estimation.
The electronic device first performs human body region detection on the object image according to the configured human body region detection strategy to obtain a human body bounding box of the object image, namely the minimum enclosing rectangle of the human body in the object image. The configuration of the human body region detection strategy is not particularly limited here and can be configured by a person skilled in the art according to actual needs.
For example, in this embodiment, a human body detection model is also trained in advance, and the human body detection model is configured to take an image including a human body as an input, and a human body bounding box as an output, and an area within the human body bounding box is a human body area. The architecture and training mode of the human body detection model are not particularly limited, and can be selected by those skilled in the art according to actual needs.
Accordingly, when the human body region detection is performed on the object image, the electronic device may input the object image into the human body detection model, and perform the human body region detection on the object image through the human body detection model, so as to obtain the human body boundary box. Then, the electronic device further intercepts a human body sub-image from the object image according to the human body boundary box, inputs the human body sub-image into a posture estimation model for posture estimation, and obtains human body posture estimation information and body type estimation information of the human body of the object and equipment posture estimation information of an image acquisition device of the object image.
For example, referring to fig. 4, the object image contains other objects in addition to the human body. The electronic device inputs the object image into the human body detection model for human body region detection to obtain the human body bounding box corresponding to the object image, and then crops a human body sub-image from the object image according to the human body bounding box.
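A minimal sketch of the cropping step, assuming the bounding box is given as (left, top, right, bottom) pixel coordinates:

```python
# Sketch: crop the human body sub-image from the object image using the detected bounding box.
import numpy as np

def crop_human(object_image: np.ndarray, bbox) -> np.ndarray:
    x1, y1, x2, y2 = [int(v) for v in bbox]       # (left, top, right, bottom), assumed format
    h, w = object_image.shape[:2]
    x1, y1 = max(0, x1), max(0, y1)               # clamp the box to the image bounds
    x2, y2 = min(w, x2), min(h, y2)
    return object_image[y1:y2, x1:x2]
```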
It will be appreciated that it is often inconvenient to try on clothes when purchasing them, which makes it difficult to pick out well-fitting clothes. For example, when purchasing clothes in a physical store, one has to queue for a fitting room, wait a long time, and repeatedly put clothes on and take them off, which wastes a great deal of time and energy and still does not guarantee that well-fitting clothes are selected. For another example, when purchasing clothes online, fitting is simply not possible; the user can usually only imagine the fitting effect from the model pictures displayed by merchants, and it is often difficult to select well-fitting clothes. Therefore, in order to meet users' fitting needs, the embodiments of the present application also use the estimated target body type information to provide a virtual fitting service. In this embodiment, after the body type estimation information is optimized according to the human body posture estimation information and the device posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain the target body type information of the target human body, the method further includes:
Acquiring a target human body model matched with the body type of the target human body according to the target body type information;
and in response to the selection operation of the clothes model, fusing the target clothes model designated by the selection operation to the target human body model for display.
In this embodiment, after the target body type information of the target human body is obtained through optimization, the electronic device further obtains a three-dimensional human body model matching the body type of the target human body and records it as the target human body model. For example, the electronic device may obtain the target human body model from a model library including a plurality of human body models of different body types, or may directly generate a target human body model matching the body type of the subject human body, and so on.
In addition, in the present embodiment, the electronic device is provided with a fitting interface and with a start control for triggering the electronic device to display the fitting interface. The position, presentation form, and the like of the start control are not particularly limited and may be set by a person skilled in the art according to actual needs. For example, the start control may be placed on the desktop of the electronic device in the form of a "fitting" icon; clicking the fitting icon triggers the electronic device to display the fitting interface.
Wherein the fitting interface includes a garment selection interface configured to receive input of a selection operation for a garment (including, but not limited to, upper, lower, hat, and a set of upper and lower garments, etc.).
For example, referring to fig. 5, the clothing selection interface may be presented in the form of a sliding selection frame. As shown in fig. 5, a selection operation may be input by clicking a clothing icon in the sliding selection frame. The clothing icons represent different clothing items and are each associated with a corresponding three-dimensional clothing model, and the currently selectable clothing icon can be switched by sliding left or right in the selection frame, so that the clothing item to be tried on can be selected.
Correspondingly, upon receiving a selection operation for a clothing model, the electronic device responds to the selection operation by fusing the target clothing model specified by the selection operation onto the target human body model for display, thereby dressing the target human body model and achieving the purpose of virtual fitting.
In an alternative embodiment, to improve the efficiency of acquiring the target mannequin, acquiring the target mannequin matching the body type of the subject body according to the target body type information includes:
Acquiring a preset human body model of a standard body type;
and performing deformation processing on the preset human body model according to the target body type information to obtain a target human body model matched with the body type of the target human body.
The electronic equipment firstly acquires a preset human body model of a standard body type, and based on the preset human body model of the standard body type, carries out deformation processing on the preset human body model according to target body type information, so as to obtain a target human body model matched with the body type of a target human body.
For example, an SMPL (Skinned Multi-Person Linear) model may be used as the preset human body model of the standard body type in this embodiment. The SMPL model is a vertex-based, skinned, parametric three-dimensional human body model; by configuring its body type (shape) parameters and posture (pose) parameters, human bodies with different body types and postures can be accurately represented. Accordingly, in this embodiment, the body type parameters of the SMPL model are configured according to the target body type information to obtain a body-type-adjusted SMPL model that matches the body type of the subject human body, and the body-type-adjusted SMPL model is used as the target human body model.
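For illustration, a body-type-adjusted SMPL mesh could be produced as sketched below, assuming the open-source smplx Python package, a locally available SMPL model file, and ten shape parameters; this is a sketch under those assumptions, not necessarily the patent's implementation.

```python
# Sketch: deform the preset standard-body-type SMPL model with the estimated
# target body type information (assumes the `smplx` package and a local model file).
import torch
import smplx

def build_target_mannequin(target_body_type, smpl_dir="models/smpl"):
    body_model = smplx.create(smpl_dir, model_type="smpl")        # preset human body model (standard body type)
    betas = torch.as_tensor(target_body_type, dtype=torch.float32).reshape(1, -1)  # assumed 10 shape params
    output = body_model(betas=betas)                              # adjust the body type to the subject
    return output.vertices[0], body_model.faces                   # mesh of the target human body model
```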
In an alternative embodiment, when the target clothing model corresponds to the standard body type, fusing the target clothing model designated by the selection operation to the target human body model for display, including:
obtaining deformation information between a target human body model and a preset human body model;
performing deformation treatment on the target clothes model according to the deformation information to obtain a deformed target clothes model;
and fusing the deformed target clothing model to a target human body model for display.
It will be appreciated that constructing clothing models for different body types in advance would take a lot of time and computing resources; therefore, in this embodiment, the clothing models are all constructed for the standard body type. Correspondingly, the target clothing model in this embodiment also corresponds to the standard body type. Because there is deformation between the target human body model and the preset human body model of the standard body type, directly fusing the target clothing model corresponding to the standard body type onto the target human body model for display would not yield the best display effect and would affect the fitting experience. Therefore, in this embodiment, the electronic device first obtains deformation information between the target human body model and the preset human body model, performs deformation processing on the target clothing model by using the deformation information to obtain a deformed target clothing model, and then fuses the deformed target clothing model onto the target human body model for display.
With this virtual fitting function, an electronic device needs only a monocular camera to deploy virtual fitting quickly and at low cost. The function can also be combined with AR/VR applications to enhance their playability and extend the usage scenarios of virtual fitting, and it can be combined with online and offline clothing stores so that users can more conveniently and quickly find clothes that meet their needs. It can also serve as an entertainment tool that lets users experience the fun of virtual dress-up and role playing.
In an alternative embodiment, obtaining deformation information between the target mannequin and the preset mannequin includes:
obtaining deformation information of each group of corresponding human model units between the target human model and the preset human model;
performing deformation processing on the target clothes model according to the deformation information to obtain a deformed target clothes model, wherein the deformation processing comprises the following steps:
according to deformation information of each group of corresponding human body model units, deformation vectors of corresponding clothes model units of each group of corresponding human body model units in a target clothes model are obtained;
performing deformation processing on each clothing model unit according to the deformation vector of each clothing model unit to obtain each deformed clothing model unit;
And obtaining a deformed target clothes model according to each deformed clothes model unit.
It should be noted that, in this embodiment, the clothing model and the preset mannequin are both constructed from model units of the same type (such as triangular patches); accordingly, in this embodiment, each model unit is treated as a deformation object for the deformation processing.
The electronic device may obtain the deformation information of each group of corresponding mannequin units between the target mannequin and the preset mannequin by means of cross-parameterization, where the deformation information describes what deformation (including the direction and magnitude of the deformation) turns a mannequin unit of the preset mannequin into the corresponding mannequin unit of the target mannequin.
In this embodiment, according to deformation information of each group of corresponding mannequin units, the electronic device obtains deformation vectors (including deformation sizes and deformation directions) of corresponding clothes model units of each group of corresponding mannequin units in the target clothes model, so as to obtain deformation vectors of each clothes model unit in the target clothes model. And then, the electronic equipment carries out deformation processing on each clothing model unit according to the deformation vector of each clothing model unit to obtain each deformed clothing model unit. Correspondingly, according to each deformed clothing model unit, the deformed target clothing model can be obtained.
Referring to fig. 6 and 7, the following description takes triangular patches as the mannequin units and the clothing model units:
A vertex of a clothing triangular patch is denoted pg, and the foot of the perpendicular from pg to the nearest bone-point connecting line is denoted pb (reflecting the shortest distance from pg to the connecting line of the two nearest bone points). The intersection of the segment pb-pg with a triangular patch of the human body in the preset human body model is denoted pm. According to the deformation information of the group of human body triangular patches corresponding to this clothing patch, the deformed intersection point pm' is computed from pm, and the foot of the perpendicular from pm' to the nearest bone-point connecting line of the target human body model is denoted pb'. The deformed vertex pg' corresponding to the target human body model can then be obtained from the original distance between pg and pm together with pb' and pm'. The vector from pg to pg' is the deformation vector of the clothing triangular patch.
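The per-vertex computation described above might look roughly like the sketch below; the exact rule for placing pg' (keeping the original pg-pm distance along the direction from pb' through pm') is an assumption inferred from the description, not a formula given in the patent.

```python
# Sketch of the per-vertex clothing deformation (assumed offset rule, see note above).
import numpy as np

def clothing_vertex_deformation(pg, pm, pm_new, pb_new):
    offset = np.linalg.norm(pg - pm)                   # original distance between pg and pm
    direction = pm_new - pb_new
    direction = direction / np.linalg.norm(direction)  # outward direction on the target human body model
    pg_new = pm_new + offset * direction               # deformed clothing vertex pg'
    return pg_new - pg                                 # deformation vector of the clothing patch vertex
```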
In an alternative embodiment, after fusing the target clothing model specified by the selection operation to the target mannequin for display, the method further includes:
acquiring a real-time object image of a human body of an object;
performing posture estimation on the real-time object image through the posture estimation model to obtain real-time human body posture estimation information of the object human body and real-time device posture estimation information of the image acquisition device of the real-time object image;
If the real-time equipment posture estimation information is based on weak perspective projection, performing projection conversion on the real-time equipment posture estimation information to obtain full perspective real-time equipment posture estimation information based on full perspective projection;
determining a target display position of the target human body model according to the full-perspective real-time equipment posture estimation information, and determining a target display posture of the target human body model according to the real-time human body posture estimation information;
and fusing and displaying the target human body model and the target clothing model according to the target display position and the target display posture.
In order to further enrich the virtual fitting function, the present embodiment further provides a dynamic virtual fitting function.
The electronic device can acquire a real-time object image of the object human body. For example, when the electronic device is configured with an image acquisition device such as a camera, the real-time object image of the object human body is obtained by capturing the object human body in real time by the configured camera. At this time, the electronic device further inputs the acquired real-time object image into a posture estimation model, performs posture estimation on the real-time object image through the posture estimation model, and obtains human body posture information and body shape information output by the posture estimation model and device posture information of an image acquisition device of the real-time object image.
In this embodiment, in order to improve model training efficiency, the posture estimation model is trained based on weak perspective projection. Correspondingly, the real-time device posture estimation information obtained with the posture estimation model is also based on weak perspective projection. Therefore, in order to accurately reflect the real positional relationship between the object human body and the image acquisition device, the electronic device further performs projection conversion on the obtained real-time device posture estimation information, converting it from weak perspective projection to full perspective projection, and records the converted information as full-perspective real-time device posture estimation information.
In full perspective projection, the projected size of an object is inversely related to its distance: the farther a part of the object is from the camera, the smaller its projection, producing the effect that near objects appear large and far objects appear small. Weak perspective projection simplifies full perspective projection by replacing the distances of the different parts of the same object with an average distance, so it cannot present this effect.
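As an assumed example of such a conversion (a formula commonly used in human mesh recovery pipelines, not given in the patent), the depth implied by the weak-perspective scale can be recovered from the focal length and image size:

```python
# Sketch: convert weak-perspective camera parameters (scale s, translation tx, ty)
# into a full-perspective camera translation (illustrative formula, assumed).
import numpy as np

def weak_to_full_perspective(s, tx, ty, focal_length, img_size):
    tz = 2.0 * focal_length / (img_size * s + 1e-9)   # depth implied by the weak-perspective scale
    return np.array([tx, ty, tz])                     # camera translation under full perspective
```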
Further, the electronic device determines the real-time relative position between the object human body and the image acquisition device according to the full-perspective real-time device posture estimation information, and, according to the configured correspondence between relative positions and display positions, determines the display position corresponding to this real-time relative position as the target display position of the target human body model. The electronic device also determines the target display posture of the target human body model according to the real-time human body posture estimation information, that is, the human body posture described by the real-time human body posture estimation information is taken as the target display posture of the target human body model.
As described above, after determining the target display position and the target display posture of the target mannequin, the electronic device fuses and displays the target mannequin and the target clothing model according to the target display position and the target display posture.
With the dynamic virtual fitting function provided by this embodiment, the fused target human body model and target clothing model move along with the object human body and adopt the corresponding posture, making virtual fitting more realistic, as if the clothes were worn on the real human body.
In an alternative embodiment, before the target human body model and the target clothing model are fused and displayed according to the target display position and the target display posture, the method further includes:
smoothing the target display position and the target display posture to obtain a smoothed target display position and a smoothed target display posture;
and the fusing and displaying of the target human body model and the target clothing model according to the target display position and the target display posture includes:
fusing and displaying the target human body model and the target clothing model according to the smoothed target display position and the smoothed target display posture.
In order to further improve the virtual fitting effect, in this embodiment the obtained target display position and target display posture are not used directly for virtual fitting. Instead, they are smoothed according to a configured smoothing filter algorithm to obtain a smoothed target display position and a smoothed target display posture, and the target human body model and the target clothing model are then fused and displayed according to the smoothed target display position and the smoothed target display posture.
It should be noted that the choice of smoothing filter algorithm is not particularly limited in this embodiment and can be configured by a person skilled in the art according to actual needs; for example, a One Euro filter may be used to smooth the target display position and the target display posture, as in the sketch below.
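A minimal One Euro filter sketch for smoothing one scalar component of the display position or posture; the parameter values are illustrative defaults, not taken from the patent.

```python
# Sketch: minimal One Euro filter for smoothing a scalar signal over time.
import math

class OneEuroFilter:
    def __init__(self, freq=30.0, min_cutoff=1.0, beta=0.01, d_cutoff=1.0):
        self.freq, self.min_cutoff, self.beta, self.d_cutoff = freq, min_cutoff, beta, d_cutoff
        self.x_prev = None
        self.dx_prev = 0.0

    @staticmethod
    def _alpha(cutoff, freq):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * freq)

    def __call__(self, x):
        if self.x_prev is None:                                   # first sample passes through
            self.x_prev = x
            return x
        dx = (x - self.x_prev) * self.freq                        # estimated derivative
        a_d = self._alpha(self.d_cutoff, self.freq)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev            # smoothed derivative
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)        # speed-adaptive cutoff
        a = self._alpha(cutoff, self.freq)
        x_hat = a * x + (1.0 - a) * self.x_prev                   # smoothed value
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```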
Referring to fig. 8, in order to better execute the body type estimating method provided in the present application, the present application further provides a body type estimating device 400, as shown in fig. 8, the body type estimating device 400 includes:
an estimation module 410, configured to acquire an object image of a subject human body and perform posture estimation on the object image through a posture estimation model to obtain human body posture estimation information and body type estimation information of the subject human body, and device posture estimation information of the image acquisition device of the object image;
a first detection module 420, configured to perform skeleton point detection on the object image to obtain skeleton point information of the subject human body;
a second detection module 430, configured to perform human body contour detection on the object image to obtain human body contour information of the subject human body;
an optimization module 440, configured to optimize the body type estimation information according to the human body posture estimation information and the device posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain target body type information of the subject human body (a minimal sketch of how these modules cooperate is given below).
In an alternative embodiment, the object image includes object images of the subject human body captured at a plurality of preset angles, and the optimization module 440 is configured to:
fuse the body type estimation information corresponding to the object images of the plurality of preset angles to obtain initial body type information of the subject human body; and
optimize the initial body type information according to the human body posture estimation information and the device posture estimation information corresponding to the object images of the plurality of preset angles, with the skeleton point information and the human body contour information corresponding to those object images as constraints, to obtain the target body type information of the subject human body. A simple sketch of the fusion step follows.
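As one simple illustration of the fusion step, assuming the body type estimation information takes the form of SMPL-style shape-coefficient vectors (an assumption made here for illustration, not something specified by this application), the per-view estimates can be combined by a confidence-weighted average to form the initial body type information:

```python
import numpy as np

def fuse_multi_view_shape(per_view_shapes, per_view_confidences=None):
    """Fuse body type estimates (e.g. shape-coefficient vectors) obtained
    from object images captured at several preset angles into an initial
    body type estimate.

    per_view_shapes: list of shape-coefficient vectors, one per view.
    per_view_confidences: optional per-view weights (e.g. detection scores).
    """
    shapes = np.stack(per_view_shapes)  # (num_views, num_coeffs)
    if per_view_confidences is None:
        weights = np.ones(len(per_view_shapes))
    else:
        weights = np.asarray(per_view_confidences, dtype=float)
    weights = weights / weights.sum()
    # Confidence-weighted average as the initial body type information.
    return (weights[:, None] * shapes).sum(axis=0)

# Example: three views of the same person, front view weighted highest.
front = np.array([0.8, -0.2, 0.1, 0.0, 0.3])
left  = np.array([0.7, -0.1, 0.2, 0.1, 0.2])
right = np.array([0.9, -0.3, 0.0, 0.0, 0.4])
initial_shape = fuse_multi_view_shape([front, left, right], [1.0, 0.7, 0.7])
```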
In an alternative embodiment, the body type estimation device 400 provided in the present application further includes a third detection module, configured to:
perform human body region detection on the object image to obtain a human body bounding box of the object image; and
crop a human body sub-image from the object image according to the human body bounding box;
the estimation module 410 is then configured to perform posture estimation on the human body sub-image through the posture estimation model to obtain the human body posture estimation information and body type estimation information of the subject human body, and the device posture estimation information of the image acquisition device of the object image. A cropping sketch is given below.
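A minimal sketch of the cropping step is given below. The bounding-box values, the padding ratio and the image array are hypothetical placeholders; the human body region detector itself is not shown.

```python
import numpy as np

def crop_human_sub_image(object_image, bbox, padding_ratio=0.1):
    """Crop the human body sub-image from the object image given a bounding
    box (x_min, y_min, x_max, y_max), with a small padding margin so the
    whole body stays inside the crop. object_image is an HxWxC array."""
    h, w = object_image.shape[:2]
    x_min, y_min, x_max, y_max = bbox
    pad_x = int((x_max - x_min) * padding_ratio)
    pad_y = int((y_max - y_min) * padding_ratio)
    x_min = max(0, x_min - pad_x)
    y_min = max(0, y_min - pad_y)
    x_max = min(w, x_max + pad_x)
    y_max = min(h, y_max + pad_y)
    return object_image[y_min:y_max, x_min:x_max]

# Example with a dummy image and a detector output (placeholder values).
image = np.zeros((720, 1280, 3), dtype=np.uint8)
human_bbox = (400, 100, 800, 700)
human_sub_image = crop_human_sub_image(image, human_bbox)
```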
In an alternative embodiment, the body type estimation device 400 provided in the present application further includes a display module, configured to:
acquire a target human body model matching the body type of the subject human body according to the target body type information; and
in response to a selection operation on a clothing model, fuse the target clothing model designated by the selection operation onto the target human body model for display.
In an alternative embodiment, the display module is configured to:
acquire a preset human body model of a standard body type; and
deform the preset human body model according to the target body type information to obtain a target human body model matching the body type of the subject human body. A deformation sketch follows.
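The deformation of the preset standard-body-type model can be illustrated with a linear blend-shape formulation of the kind used by parametric body models such as SMPL. This is an assumed formulation adopted purely for illustration; the array sizes and values below are placeholders.

```python
import numpy as np

def deform_preset_model(template_vertices, shape_basis, target_shape):
    """Deform a preset (standard body type) human body model by adding
    linear shape blend-shapes weighted by the target body type coefficients.

    template_vertices: (V, 3) vertices of the preset model.
    shape_basis:       (V, 3, B) per-vertex shape displacement directions.
    target_shape:      (B,) target body type information (shape coefficients).
    """
    displacement = shape_basis @ target_shape  # (V, 3) per-vertex offsets
    return template_vertices + displacement

# Placeholder-sized example: 100 vertices, 5 shape coefficients.
rng = np.random.default_rng(0)
template = rng.normal(size=(100, 3))
basis = rng.normal(scale=0.01, size=(100, 3, 5))
target_shape_info = np.array([0.8, -0.2, 0.1, 0.0, 0.3])
target_body_model = deform_preset_model(template, basis, target_shape_info)
```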
In an alternative embodiment, when the target clothing model corresponds to the standard body type, the display module is configured to:
acquire deformation information between the target human body model and the preset human body model;
deform the target clothing model according to the deformation information to obtain a deformed target clothing model; and
fuse the deformed target clothing model onto the target human body model for display.
In an alternative embodiment, the display module is configured to:
acquire deformation information of each group of corresponding human body model units between the target human body model and the preset human body model;
and the deforming of the target clothing model according to the deformation information to obtain the deformed target clothing model includes:
obtaining, according to the deformation information of each group of corresponding human body model units, a deformation vector of the clothing model unit in the target clothing model that corresponds to each group of human body model units;
deforming each clothing model unit according to its deformation vector to obtain each deformed clothing model unit; and
obtaining the deformed target clothing model from the deformed clothing model units (see the sketch after this list).
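The per-unit deformation transfer can be sketched as follows, assuming (purely for illustration) that a human body model unit is a vertex and that each clothing model unit is bound to one body model unit through a precomputed binding table; the meshes and binding below are random placeholders.

```python
import numpy as np

def deform_clothing_model(preset_body, target_body, clothing_vertices, binding):
    """Deform a standard-body-type clothing model to fit the target body model.

    preset_body, target_body: (V, 3) corresponding vertices of the preset and
                              target human body models (one group per index).
    clothing_vertices:        (C, 3) vertices of the target clothing model.
    binding:                  (C,) index of the body model unit to which each
                              clothing model unit corresponds.
    """
    # Deformation information per group of corresponding body model units.
    unit_deformation = target_body - preset_body       # (V, 3)

    # Deformation vector of each clothing model unit is the deformation of
    # the body model unit it corresponds to.
    clothing_deformation = unit_deformation[binding]   # (C, 3)

    # Deform each clothing model unit and assemble the deformed model.
    return clothing_vertices + clothing_deformation

# Placeholder example: 50 body units, 20 clothing vertices.
rng = np.random.default_rng(1)
preset = rng.normal(size=(50, 3))
target = preset + rng.normal(scale=0.02, size=(50, 3))
garment = rng.normal(size=(20, 3))
attach = rng.integers(0, 50, size=20)
deformed_garment = deform_clothing_model(preset, target, garment, attach)
```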
In an alternative embodiment, the display module is further configured to:
acquire a real-time object image of the subject human body;
perform posture estimation on the real-time object image through the posture estimation model to obtain real-time human body posture estimation information of the subject human body and real-time device posture estimation information of the image acquisition device of the real-time object image;
if the real-time device posture estimation information is based on weak perspective projection, perform projection conversion on the real-time device posture estimation information to obtain full-perspective real-time device posture estimation information based on full perspective projection;
determine a target display position of the target human body model according to the full-perspective real-time device posture estimation information, and determine a target display posture of the target human body model according to the real-time human body posture estimation information; and
fuse and display the target human body model and the target clothing model according to the target display position and the target display posture. A sketch of the projection conversion step follows.
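The projection conversion mentioned above can be illustrated with the convention commonly used in monocular body-pose pipelines: a weak-perspective camera (scale s and in-plane translation tx, ty) predicted on an image crop is converted to a full-perspective translation by placing the body at a depth proportional to the focal length and inversely proportional to the scale. This is a generic sketch under that assumption; the focal length and crop size are placeholder values, not parameters disclosed by this application.

```python
import numpy as np

def weak_to_full_perspective(scale, tx, ty, focal_length=5000.0, crop_size=224.0):
    """Convert weak-perspective camera parameters (s, tx, ty), predicted on a
    crop of size crop_size, into a full-perspective camera translation
    (tx, ty, tz). A common convention is tz = 2 * f / (s * crop_size)."""
    tz = 2.0 * focal_length / (scale * crop_size + 1e-9)
    return np.array([tx, ty, tz])

# Example: a predicted weak-perspective camera for one real-time frame.
full_perspective_translation = weak_to_full_perspective(scale=0.9, tx=0.02, ty=-0.05)
```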
In an alternative embodiment, the body type estimation device 400 provided in the present application further includes a smoothing module, configured to smooth the target display position and the target display posture to obtain a smoothed target display position and a smoothed target display posture;
the display module is then configured to fuse and display the target human body model and the target clothing model according to the smoothed target display position and the smoothed target display posture.
It should be noted that the body type estimation device 400 provided in the embodiments of the present application and the body type estimation method in the above embodiments belong to the same concept; their detailed implementation processes are described in the related embodiments above and are not repeated here.
An embodiment of the present application further provides an electronic device including a memory and a processor, where the processor executes the steps in the body type estimation method provided in the embodiments by calling the computer program stored in the memory.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
The electronic device 100 may include a network interface 110, a memory 120, a processor 130, a screen assembly, and the like. Those skilled in the art will appreciate that the structure of the electronic device 100 shown in fig. 9 does not constitute a limitation of the electronic device 100; it may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The network interface 110 may be used to make network connections between devices.
The memory 120 may be used to store computer programs and data. The memory 120 stores a computer program that includes executable code, and the computer program may be divided into various functional modules. The processor 130 executes various functional applications and performs data processing by running the computer program stored in the memory 120.
The processor 130 is the control center of the electronic device 100. It connects the various parts of the entire electronic device 100 through various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing the computer programs stored in the memory 120 and calling the data stored in the memory 120, thereby controlling the electronic device 100 as a whole.
In the embodiments of the present application, the processor 130 of the electronic device 100 loads the executable code corresponding to one or more computer programs into the memory 120 according to the following instructions, and the processor 130 then executes the steps in the body type estimation method provided in the present application, for example:
acquiring an object image of a subject human body, and performing posture estimation on the object image through a posture estimation model to obtain human body posture estimation information and body type estimation information of the subject human body, and device posture estimation information of the image acquisition device of the object image;
performing skeleton point detection on the object image to obtain skeleton point information of the subject human body;
performing human body contour detection on the object image to obtain human body contour information of the subject human body; and
optimizing the body type estimation information according to the human body posture estimation information and the device posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain target body type information of the subject human body.
It should be noted that the electronic device 100 provided in the embodiments of the present application and the body type estimation method in the above embodiments belong to the same concept; their detailed implementation processes are described in the related embodiments above and are not repeated here.
The present application also provides a computer-readable storage medium storing a computer program which, when executed on a processor of an electronic device provided in an embodiment of the present application, causes the processor of the electronic device to perform any of the steps of the above body type estimation method applicable to the electronic device. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The body type estimation method, device, storage medium, and electronic device provided by the present application have been described in detail above. Specific examples have been used herein to illustrate the principles and embodiments of the present application, and the above description is intended only to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in light of the ideas of the present application. In view of the above, the content of this description should not be construed as limiting the present application.

Claims (12)

1. A body type estimation method, comprising:
acquiring an object image of a subject human body, and performing posture estimation on the object image through a posture estimation model to obtain human body posture estimation information and body type estimation information of the subject human body, and device posture estimation information of an image acquisition device of the object image;
performing skeleton point detection on the object image to obtain skeleton point information of the subject human body;
performing human body contour detection on the object image to obtain human body contour information of the subject human body; and
optimizing the body type estimation information according to the human body posture estimation information and the device posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain target body type information of the subject human body.
2. The body type estimation method according to claim 1, wherein the object image comprises object images of the subject human body at a plurality of preset angles, and the optimizing of the body type estimation information according to the human body posture estimation information and the device posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain the target body type information of the subject human body comprises:
fusing the body type estimation information corresponding to the object images of the plurality of preset angles to obtain initial body type information of the subject human body; and
optimizing the initial body type information according to the human body posture estimation information and the device posture estimation information corresponding to the object images of the plurality of preset angles, with the skeleton point information and the human body contour information corresponding to the object images of the plurality of preset angles as constraints, to obtain the target body type information of the subject human body.
3. The body type estimation method according to claim 1, wherein before performing posture estimation on the object image through the posture estimation model to obtain the human body posture estimation information, the body type estimation information, and the device posture estimation information of the image acquisition device of the object image, the method further comprises:
performing human body region detection on the object image to obtain a human body bounding box of the object image; and
cropping a human body sub-image from the object image according to the human body bounding box;
wherein the performing posture estimation on the object image through the posture estimation model to obtain the human body posture estimation information and body type estimation information of the subject human body and the device posture estimation information of the image acquisition device of the object image comprises:
performing posture estimation on the human body sub-image through the posture estimation model to obtain the human body posture estimation information and body type estimation information of the subject human body and the device posture estimation information of the image acquisition device of the object image.
4. The body type estimation method according to any one of claims 1 to 3, wherein after optimizing the body type estimation information according to the human body posture estimation information and the device posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain the target body type information of the subject human body, the method further comprises:
acquiring a target human body model matching the body type of the subject human body according to the target body type information; and
in response to a selection operation on a clothing model, fusing a target clothing model designated by the selection operation onto the target human body model for display.
5. The body type estimation method according to claim 4, wherein the acquiring of the target human body model matching the body type of the subject human body according to the target body type information comprises:
acquiring a preset human body model of a standard body type; and
deforming the preset human body model according to the target body type information to obtain the target human body model matching the body type of the subject human body.
6. The body type estimation method according to claim 5, wherein, when the target clothing model corresponds to the standard body type, the fusing of the target clothing model designated by the selection operation onto the target human body model for display comprises:
acquiring deformation information between the target human body model and the preset human body model;
deforming the target clothing model according to the deformation information to obtain a deformed target clothing model; and
fusing the deformed target clothing model onto the target human body model for display.
7. The body type estimation method according to claim 6, wherein the acquiring of the deformation information between the target human body model and the preset human body model comprises:
acquiring deformation information of each group of corresponding human body model units between the target human body model and the preset human body model;
and the deforming of the target clothing model according to the deformation information to obtain the deformed target clothing model comprises:
obtaining, according to the deformation information of each group of corresponding human body model units, a deformation vector of the clothing model unit in the target clothing model corresponding to each group of corresponding human body model units;
deforming each clothing model unit according to the deformation vector of the clothing model unit to obtain each deformed clothing model unit; and
obtaining the deformed target clothing model from the deformed clothing model units.
8. The body type estimation method according to claim 4, wherein after fusing the target clothing model designated by the selection operation onto the target human body model for display, the method further comprises:
acquiring a real-time object image of the subject human body;
performing posture estimation on the real-time object image through the posture estimation model to obtain real-time human body posture estimation information of the subject human body and real-time device posture estimation information of an image acquisition device of the real-time object image;
if the real-time device posture estimation information is based on weak perspective projection, performing projection conversion on the real-time device posture estimation information to obtain full-perspective real-time device posture estimation information based on full perspective projection;
determining a target display position of the target human body model according to the full-perspective real-time device posture estimation information, and determining a target display posture of the target human body model according to the real-time human body posture estimation information; and
fusing and displaying the target human body model and the target clothing model according to the target display position and the target display posture.
9. The body type estimation method according to claim 8, wherein before fusing and displaying the target human body model and the target clothing model according to the target display position and the target display posture, the method further comprises:
smoothing the target display position and the target display posture to obtain a smoothed target display position and a smoothed target display posture;
and the fusing and displaying of the target human body model and the target clothing model according to the target display position and the target display posture comprises:
fusing and displaying the target human body model and the target clothing model according to the smoothed target display position and the smoothed target display posture.
10. A body type estimation apparatus, comprising:
an estimation module, configured to acquire an object image of a subject human body and perform posture estimation on the object image through a posture estimation model to obtain human body posture estimation information and body type estimation information of the subject human body, and device posture estimation information of an image acquisition device of the object image;
a first detection module, configured to perform skeleton point detection on the object image to obtain skeleton point information of the subject human body;
a second detection module, configured to perform human body contour detection on the object image to obtain human body contour information of the subject human body; and
an optimization module, configured to optimize the body type estimation information according to the human body posture estimation information and the device posture estimation information, with the skeleton point information and the human body contour information as constraints, to obtain target body type information of the subject human body.
11. A storage medium having stored thereon a computer program which, when loaded by a processor, performs the steps of the body type estimation method according to any one of claims 1 to 9.
12. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to perform the steps of the body type estimation method according to any one of claims 1 to 9 by loading the computer program.
CN202111529770.1A 2021-12-14 2021-12-14 Body type estimating method, body type estimating device, storage medium and electronic equipment Pending CN116266408A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111529770.1A CN116266408A (en) 2021-12-14 2021-12-14 Body type estimating method, body type estimating device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111529770.1A CN116266408A (en) 2021-12-14 2021-12-14 Body type estimating method, body type estimating device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116266408A true CN116266408A (en) 2023-06-20

Family

ID=86742951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111529770.1A Pending CN116266408A (en) 2021-12-14 2021-12-14 Body type estimating method, body type estimating device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116266408A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117314976A (en) * 2023-10-08 2023-12-29 玩出梦想(上海)科技有限公司 Target tracking method and data processing equipment
CN117314976B (en) * 2023-10-08 2024-05-31 玩出梦想(上海)科技有限公司 Target tracking method and data processing equipment

Similar Documents

Publication Publication Date Title
US10679046B1 (en) Machine learning systems and methods of estimating body shape from images
CN109636831B (en) Method for estimating three-dimensional human body posture and hand information
US11640672B2 (en) Method and system for wireless ultra-low footprint body scanning
US20180144237A1 (en) System and method for body scanning and avatar creation
US20190244407A1 (en) System, device, and method of virtual dressing utilizing image processing, machine learning, and computer vision
Khamis et al. Learning an efficient model of hand shape variation from depth images
US10628666B2 (en) Cloud server body scan data system
CN108475439B (en) Three-dimensional model generation system, three-dimensional model generation method, and recording medium
KR101911133B1 (en) Avatar construction using depth camera
CN106373178B (en) Apparatus and method for generating artificial image
US20110298897A1 (en) System and method for 3d virtual try-on of apparel on an avatar
US20090115777A1 (en) Method of Generating and Using a Virtual Fitting Room and Corresponding System
CN114663199A (en) Dynamic display real-time three-dimensional virtual fitting system and method
US11836862B2 (en) External mesh with vertex attributes
US11908083B2 (en) Deforming custom mesh based on body mesh
TR201815349T4 (en) Improved virtual trial simulation service.
Hu et al. 3DBodyNet: fast reconstruction of 3D animatable human body shape from a single commodity depth camera
US11321916B1 (en) System and method for virtual fitting
WO2023039462A1 (en) Body fitted accessory with physics simulation
CN111767817A (en) Clothing matching method and device, electronic equipment and storage medium
Caliskan et al. Multi-view consistency loss for improved single-image 3d reconstruction of clothed people
Liu et al. Skeleton tracking based on Kinect camera and the application in virtual reality system
WO2018182938A1 (en) Method and system for wireless ultra-low footprint body scanning
CN116266408A (en) Body type estimating method, body type estimating device, storage medium and electronic equipment
US11849790B2 (en) Apparel fitting simulation based upon a captured two-dimensional human body posture image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination