CN116883599B - Clothing try-on system based on three-dimensional modeling technology

Clothing try-on system based on three-dimensional modeling technology

Info

Publication number
CN116883599B
CN116883599B (application CN202310909264.8A; earlier publication CN116883599A)
Authority
CN
China
Prior art keywords
image
dimensional
human body
module
try
Prior art date
Legal status
Active
Application number
CN202310909264.8A
Other languages
Chinese (zh)
Other versions
CN116883599A (en)
Inventor
俞周杰
俞月渊
Current Assignee
Shenzhen Shierlan Garment Co ltd
Original Assignee
Shenzhen Shierlan Garment Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Shierlan Garment Co., Ltd.
Priority to CN202310909264.8A
Publication of CN116883599A
Application granted
Publication of CN116883599B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of three-dimensional modeling and discloses a clothing try-on system based on a three-dimensional modeling technology, which comprises a first acquisition module, a storage module, a comparison module, a model fusion module, a second acquisition module and a modeling module. The first acquisition module is used for acquiring a plurality of two-dimensional images of the try-on person; the storage module is used for storing a plurality of three-dimensional first human body models and their two-dimensional projection images; the comparison module is used for judging whether the storage module has a first human body model meeting the requirements; the model fusion module is used for fusing the first human body model with the three-dimensional model of the garment selected by the try-on person to obtain a first try-on model; the second acquisition module is used for acquiring three-dimensional point cloud data of the try-on person; the modeling module is used for obtaining a second human body model; the model fusion module is further used for obtaining a second try-on model. The invention reduces the probability of re-modeling and improves the speed of obtaining the try-on result.

Description

Clothing try-on system based on three-dimensional modeling technology
Technical Field
The invention relates to the field of three-dimensional modeling, in particular to a clothing try-on system based on a three-dimensional modeling technology.
Background
In the prior art, when clothing is tried on using a three-dimensional modeling technology, the try-on person generally has to be scanned first to obtain a human body model, after which the three-dimensional garment model and the human body model are fused to obtain the final try-on result. Because the data for modeling are acquired anew and modeling is performed for every try-on, the try-on result is obtained relatively slowly, resulting in a poor try-on experience.
Disclosure of Invention
The invention aims to disclose a clothing try-on system based on a three-dimensional modeling technology, which addresses the problem of how to improve the speed of obtaining try-on results when clothing is tried on using a three-dimensional modeling technology.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the invention provides a clothing try-on system based on a three-dimensional modeling technology, which comprises a first acquisition module, a storage module, a comparison module, a model fusion module, a second acquisition module and a modeling module;
the first acquisition module is used for shooting the try-on person from a plurality of preset directions to obtain a plurality of two-dimensional images of the try-on person;
the storage module is used for storing a plurality of three-dimensional first human body models and a plurality of two-dimensional projection images obtained by projecting each first human body model in a plurality of preset directions;
the comparison module is used for judging, based on the two-dimensional images of the try-on person and the two-dimensional projection images stored in the storage module, whether the storage module has a first human body model meeting the requirements; if so, the first human body model meeting the requirements is sent to the model fusion module; if not, an acquisition instruction is sent to the second acquisition module;
the model fusion module is used for fusing the first human body model sent by the comparison module with the three-dimensional model of the garment selected by the try-on personnel to obtain a first try-on model;
the second acquisition module is used for acquiring three-dimensional point cloud data of the try-on personnel after receiving the acquisition instruction;
the modeling module is used for modeling based on the three-dimensional point cloud data to obtain a second human body model;
the model fusion module is used for fusing the second human body model with the three-dimensional model of the garment selected by the try-on personnel to obtain a second try-on model.
Optionally, the preset plurality of directions include a front side of the human body, a left side of the human body, a right side of the human body, and a back side of the human body.
Optionally, when shooting the try-on person in the plurality of preset directions, the shooting height and shooting distance are the same for each direction.
Optionally, judging whether the storage module has a first human body model meeting the requirements based on the two-dimensional images of the try-on person and the two-dimensional projection images stored in the storage module includes:
S1, storing the obtained plurality of two-dimensional images of the try-on person into a set dimimcf;
S2, randomly taking one two-dimensional image from the set dimimcf as the calculation image;
S3, denoting the direction corresponding to the calculation image as B, and acquiring from the storage module the set proimgf of all two-dimensional projection images obtained by projection in direction B;
S4, judging whether proimgf contains a two-dimensional projection image whose similarity to the calculation image is greater than a set similarity threshold; if so, entering S5; if not, the storage module does not have a first human body model meeting the requirements;
S5, deleting the calculation image from the set dimimcf;
S7, storing the calculation image into a set calimgf, and judging whether the number of elements in calimgf equals the number of directions; if so, the storage module stores a first human body model meeting the requirements; if not, entering S8;
S8, judging whether the set dimimcf is an empty set; if so, the storage module does not have a first human body model meeting the requirements; if not, entering S2.
Optionally, the process of calculating the similarity between a two-dimensional projection image in proimgf and the calculation image includes:
S10, acquiring first comparison information of the two-dimensional projection image in proimgf;
S20, acquiring second comparison information of the foreground part of the calculation image;
S30, calculating the similarity between the first comparison information and the second comparison information.
Optionally, acquiring the first comparison information of the two-dimensional projection image in proimgf includes:
first comparison information of the two-dimensional projection image is acquired from the storage module.
Optionally, acquiring the first comparison information of the two-dimensional projection image in proimgf includes:
and acquiring the outline of the two-dimensional projection image, and taking the outline of the two-dimensional projection image as first comparison information.
Optionally, acquiring the second comparison information of the foreground part of the calculation image includes:
performing image segmentation on the calculation image to obtain the foreground image of the calculation image;
scaling the foreground image according to the first comparison information to obtain a scaled image;
and taking the contour of the scaled image as the second comparison information.
Optionally, performing image segmentation on the calculation image to obtain its foreground image includes:
segmenting the calculation image with a threshold-based image segmentation algorithm to obtain the foreground image of the calculation image.
Optionally, scaling the foreground image according to the first comparison information to obtain a scaled image includes:
acquiring the length and width of the contour corresponding to the first comparison information;
and scaling the foreground image until the length and width of its contour equal the length and width of the contour corresponding to the first comparison information.
Compared with the prior art, before carrying out three-dimensional modeling the invention acquires two-dimensional images of the try-on person from a plurality of preset directions and judges, based on these images, whether the storage module contains a first human body model meeting the requirements; if so, that first human body model is used directly to generate the first try-on model, and only if not is modeling performed anew. This reduces the probability of re-modeling, improves the speed of obtaining the try-on result, and raises the probability that the try-on person has a good try-on experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
FIG. 1 is a schematic diagram of a garment fitting system based on three-dimensional modeling techniques of the present invention.
FIG. 2 is a schematic diagram of the process for determining whether the storage module has a first human body model meeting the requirements according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a clothing try-on system based on a three-dimensional modeling technology, which, as shown in FIG. 1, comprises a first acquisition module, a storage module, a comparison module, a model fusion module, a second acquisition module and a modeling module;
the first acquisition module is used for shooting the try-on person from a plurality of preset directions to obtain a plurality of two-dimensional images of the try-on person;
the storage module is used for storing a plurality of three-dimensional first human body models and a plurality of two-dimensional projection images obtained by projecting each first human body model in a plurality of preset directions;
the comparison module is used for judging, based on the two-dimensional images of the try-on person and the two-dimensional projection images stored in the storage module, whether the storage module has a first human body model meeting the requirements; if so, the first human body model meeting the requirements is sent to the model fusion module; if not, an acquisition instruction is sent to the second acquisition module;
the model fusion module is used for fusing the first human body model sent by the comparison module with the three-dimensional model of the garment selected by the try-on personnel to obtain a first try-on model;
the second acquisition module is used for acquiring three-dimensional point cloud data of the try-on personnel after receiving the acquisition instruction;
the modeling module is used for modeling based on the three-dimensional point cloud data to obtain a second human body model;
the model fusion module is used for fusing the second human body model with the three-dimensional model of the garment selected by the try-on personnel to obtain a second try-on model.
Compared with the prior art, before carrying out three-dimensional modeling the invention acquires two-dimensional images of the try-on person from a plurality of preset directions and judges, based on these images, whether the storage module contains a first human body model meeting the requirements; if so, that first human body model is used directly to generate the first try-on model, and only if not is modeling performed anew. This reduces the probability of re-modeling, improves the speed of obtaining the try-on result, and raises the probability that the try-on person has a good try-on experience.
Optionally, the storage module is further configured to obtain a first human body model by desensitizing the second human body model, and to store that first human body model.
Specifically, after the try-on person's consent is obtained, the second human body model is desensitized: information associated with the try-on person, such as the facial contour, is removed from the second human body model, yielding a first human body model.
When the system starts operating, the storage module stores a plurality of first human body models generated from standard body shapes. As the system runs, second human body models corresponding to the body shapes of try-on persons are continually processed into first human body models, so the number of first human body models in the storage module grows steadily. After the system has run for a long time and the number of users has grown, the storage module contains first human body models for most body shapes, so the probability of having to re-model to obtain a second human body model becomes very low.
Specifically, before a second human body model is desensitized, the storage module calculates the similarity between the second human body model and the first human body models it already stores; if a highly similar first human body model already exists, the second human body model is not desensitized. This prevents the number of first human body models in the storage module from growing so large that the calculation efficiency of the comparison module suffers.
Optionally, the fusion of the first human body model and the three-dimensional model of the garment may be achieved as follows:
the three-dimensional model of the garment and the first human body model are displayed in the same three-dimensional coordinate system, thereby obtaining the first try-on model.
Specifically, when the three-dimensional model of the garment is larger than or equal in size to the first human body model, the garment model occludes the first human body model everywhere except areas such as the head, neck and palms, which yields the virtual try-on result. When the garment model is smaller than the first human body model, it is occluded by the first human body model and the body shows through the garment, indicating that the size of the garment selected by the try-on person is unsuitable.
The second try-on model is obtained in the same way as the first try-on model.
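As an illustration of displaying both models in one coordinate system, the following minimal sketch uses the open-source trimesh library; the file paths, function names and the choice of trimesh itself are assumptions, not part of the patent:

```python
import trimesh

def build_try_on_model(body_mesh_path, garment_mesh_path):
    """Load the first human body model and the garment model and place them in
    one shared 3-D coordinate frame; the renderer's depth test then produces
    the occlusion behaviour described above (the garment hides the body
    wherever the garment surface lies outside it)."""
    body = trimesh.load(body_mesh_path)        # e.g. 'body.obj' (placeholder)
    garment = trimesh.load(garment_mesh_path)  # e.g. 'garment.obj' (placeholder)
    return trimesh.Scene([body, garment])      # one Scene = one coordinate system

# Usage sketch: build_try_on_model('body.obj', 'garment.obj').show()
```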
Optionally, the preset plurality of directions include a front side of the human body, a left side of the human body, a right side of the human body, and a back side of the human body.
Specifically, the direction corresponding to the front of the human body is the direction perpendicular to the plane of the face. The direction corresponding to the back of the human body forms 180 degrees with the front direction. The directions corresponding to the left and right sides of the human body are perpendicular to the front direction; when the try-on person faces the first acquisition module, the direction corresponding to the left side of the human body is the direction on the try-on person's right-hand side.
Optionally, when shooting the try-on person in the plurality of preset directions, the shooting height and shooting distance are the same for each direction.
Specifically, keeping the shooting height and shooting distance the same ensures that the height of the try-on person in the photographed images stays consistent. The shooting height is the height of the lens of the first acquisition module, and the shooting distance is the distance between the lens and the try-on person.
Optionally, as shown in FIG. 2, judging whether the storage module has a first human body model meeting the requirements based on the two-dimensional images of the try-on person and the two-dimensional projection images stored in the storage module includes:
S1, storing the obtained plurality of two-dimensional images of the try-on person into a set dimimcf;
S2, randomly taking one two-dimensional image from the set dimimcf as the calculation image;
S3, denoting the direction corresponding to the calculation image as B, and acquiring from the storage module the set proimgf of all two-dimensional projection images obtained by projection in direction B;
S4, judging whether proimgf contains a two-dimensional projection image whose similarity to the calculation image is greater than a set similarity threshold; if so, entering S5; if not, the storage module does not have a first human body model meeting the requirements;
S5, deleting the calculation image from the set dimimcf;
S7, storing the calculation image into a set calimgf, and judging whether the number of elements in calimgf equals the number of directions; if so, the storage module stores a first human body model meeting the requirements; if not, entering S8;
S8, judging whether the set dimimcf is an empty set; if so, the storage module does not have a first human body model meeting the requirements; if not, entering S2.
Specifically, the invention takes one two-dimensional image at a time from dimimcf for comparison and stops as soon as a mismatch occurs, so the remaining images in dimimcf, whose similarity has not yet been calculated, are no longer compared; this effectively improves the overall comparison efficiency. In addition, the similarity requirement must be satisfied by the two-dimensional images in every direction before the storage module is deemed to store a first human body model meeting the requirements, which improves judgment accuracy: if only some directions matched, the contour of the obtained first human body model might not be similar enough to the contour of the try-on person's body shape, would not represent that body shape well, and would hinder obtaining an accurate try-on result.
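The S1-S8 procedure can be summarized by the following Python sketch; the container names dimimcf, proimgf and calimgf come from the text, while the similarity function and the threshold value are left abstract because the patent defines them separately:

```python
import random

def has_matching_first_model(captured, projections, similarity, threshold):
    """captured: dict mapping direction -> 2-D image of the try-on person (S1).
    projections: dict mapping direction -> list of stored 2-D projection images.
    similarity: callable(projection_image, calculation_image) -> float.
    Returns True iff a stored first human body model matches in every direction."""
    dimimcf = dict(captured)                       # S1
    calimgf = []                                   # matched calculation images (S7)
    while dimimcf:                                 # S8: repeat until dimimcf is empty
        direction = random.choice(list(dimimcf))   # S2: random calculation image
        calc_img = dimimcf[direction]
        proimgf = projections.get(direction, [])   # S3: projections for direction B
        # S4: does any projection exceed the similarity threshold?
        if not any(similarity(p, calc_img) > threshold for p in proimgf):
            return False                           # no model meets the requirements
        del dimimcf[direction]                     # S5
        calimgf.append(calc_img)                   # S7
        if len(calimgf) == len(captured):          # matched in every direction
            return True
    return False
```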
Optionally, when the three-dimensional model is projected onto the two-dimensional plane, a perspective projection algorithm is adopted, and the relative positional relationship between the projection center of the perspective projection and the first human body model is set according to the relative positional relationship between the lens of the first acquisition module and the try-on person, so that the following relation holds:
da/ha = db/hb
where da is the distance between the projection center of the perspective projection and the first human body model, ha is the height of the projection center above the plane in which the bottom of the first human body model lies, hb is the height of the lens of the first acquisition module, and db is the distance between the lens and the try-on person.
In this way, the relative positional relationship between the projection center of the perspective projection and the first human body model is obtained from the relative positional relationship between the lens of the first acquisition module and the try-on person. This arrangement gives the two-dimensional image and the two-dimensional projection image of the same object the same outline, so that a more accurate result can be obtained in the subsequent contour comparison.
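For instance, solving the relation for da gives the placement of the projection center directly from the capture geometry (function and variable names are illustrative):

```python
def projection_center_distance(ha, db, hb):
    """da/ha = db/hb  =>  da = ha * db / hb.
    ha: height of the projection center above the model's base plane.
    db: distance between the lens and the try-on person.
    hb: height of the lens of the first acquisition module."""
    return ha * db / hb

# Example: a lens 1.6 m high and 3.2 m from the person, with the projection
# center at height 1.6, gives da = 1.6 * 3.2 / 1.6 = 3.2 (same units throughout).
```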
Specifically, the projection center of the perspective projection is the position from which the projection rays are emitted. A projection ray intersects the projection plane, thereby producing the projection of the object.
In the present invention, the position of the projection plane changes with the projection direction: for example, when projecting from the front of the first human body model, the projection plane is located behind it. The projection plane is perpendicular to the plane in which the bottom of the first human body model lies, and among all projection rays emitted from the projection center there is a ray perpendicular to the projection plane.
Because the two-dimensional projection image is obtained by virtual projection, the invention stores only its effective part, i.e., the portion of the two-dimensional projection image that contains no background; all of its pixel points are pixel points obtained by projection.
In one embodiment, when shooting the try-on person in the plurality of preset directions, the shooting height and distance may be determined according to the set relative positional relationship between the projection center of the perspective projection and the first human body model, so that the relation da/ha = db/hb holds.
Optionally, the process of calculating the similarity between a two-dimensional projection image in proimgf and the calculation image includes:
S10, acquiring first comparison information of the two-dimensional projection image in proimgf;
S20, acquiring second comparison information of the foreground part of the calculation image;
S30, calculating the similarity between the first comparison information and the second comparison information.
Specifically, comparing the similarity in this way makes it possible to judge accurately whether the two images are consistent.
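The patent does not fix a particular similarity formula; as one plausible contour-based measure (an assumption), Hu-moment shape matching from OpenCV can be mapped into a similarity score:

```python
import cv2

def contour_similarity(contour_a, contour_b):
    """cv2.matchShapes compares two contours via Hu moments and returns a
    dissimilarity (0 means identical); map it into (0, 1] so that a larger
    value means more similar, for use against the set similarity threshold."""
    d = cv2.matchShapes(contour_a, contour_b, cv2.CONTOURS_MATCH_I1, 0.0)
    return 1.0 / (1.0 + d)
```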
Optionally, acquiring the first comparison information of the two-dimensional projection image in proimgf includes:
first comparison information of the two-dimensional projection image is acquired from the storage module.
In the invention, the first comparison information of each two-dimensional projection image is calculated in advance and stored in the storage module; when it is needed, it can be retrieved directly rather than recalculated, which shortens the time for calculating the similarity and lets the invention obtain the try-on model more quickly.
Optionally, acquiring the first comparison information of the two-dimensional projection image in proimgf includes:
acquiring the contour of the two-dimensional projection image, and taking the contour of the two-dimensional projection image as the first comparison information.
Specifically, the contour in an image may be obtained by a connectivity-based contour recognition algorithm, an edge-detection-based contour recognition algorithm, or the like.
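A minimal sketch of contour extraction with OpenCV (assuming OpenCV 4 and a binarized silhouette as input; keeping only the single largest contour is an assumption suited to one person per image):

```python
import cv2

def extract_contour(binary_silhouette):
    """Return the largest external contour of a binarized image, i.e. the
    outline of the try-on person or of the projected first human body model."""
    contours, _ = cv2.findContours(binary_silhouette, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)
```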
Optionally, acquiring the second comparison information of the foreground part of the calculation image includes:
performing image segmentation on the calculation image to obtain the foreground image of the calculation image;
scaling the foreground image according to the first comparison information to obtain a scaled image;
and taking the contour of the scaled image as the second comparison information.
Specifically, scaling based on the first comparison information makes the sizes of the two contours as close as possible, which helps the similarity comparison result be obtained faster.
The foreground image of the calculation image is the image of the region where the try-on person is located, i.e., the calculation image with its background removed.
Optionally, performing image segmentation on the calculation image to obtain its foreground image includes:
segmenting the calculation image with a threshold-based image segmentation algorithm to obtain the foreground image of the calculation image.
Specifically, other algorithms with segmentation capability, such as neural-network-based segmentation algorithms, may also be used to obtain the foreground image of the calculation image.
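A threshold-based segmentation sketch, using Otsu's method as one concrete choice of threshold (the patent only requires "an image segmentation algorithm based on a threshold"; a roughly uniform background is assumed):

```python
import cv2

def segment_foreground(calc_image_bgr):
    """Binarize the calculation image with an automatically chosen (Otsu)
    threshold and keep only the masked region as the foreground image.
    Depending on scene brightness the mask may need inverting (assumption:
    the try-on person is brighter than the background)."""
    gray = cv2.cvtColor(calc_image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_and(calc_image_bgr, calc_image_bgr, mask=mask)
```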
Optionally, segmenting the calculation image with the threshold-based image segmentation algorithm to obtain its foreground image includes:
performing wavelet thresholding on the calculation image to obtain a filtered image;
and performing image segmentation on the filtered image with the threshold-based image segmentation algorithm to obtain the foreground image of the filtered image.
Specifically, the wavelet thresholding effectively reduces the image noise in the resulting filtered image, so that more accurate contours can be obtained.
Optionally, performing wavelet thresholding on the calculation image to obtain a filtered image includes (a sketch follows this list):
partitioning the calculation image into N×M sub-images, where N is the number of sub-images in the horizontal direction and M the number in the vertical direction;
performing wavelet thresholding on each sub-image separately to obtain processed sub-images;
and merging the processed sub-images to obtain the filtered image.
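A sketch of the partition-filter-merge step in NumPy; the denoise_tile argument stands in for the per-sub-image wavelet routine sketched further below:

```python
import numpy as np

def blockwise_wavelet_filter(img, n, m, denoise_tile):
    """Partition a grayscale image into N x M sub-images (n across the width,
    m down the height), run denoise_tile on each sub-image independently, and
    merge the processed tiles back into one filtered image."""
    h, w = img.shape
    ys = np.linspace(0, h, m + 1, dtype=int)  # vertical tile boundaries
    xs = np.linspace(0, w, n + 1, dtype=int)  # horizontal tile boundaries
    out = np.empty((h, w), dtype=np.float64)
    for i in range(m):
        for j in range(n):
            tile = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = denoise_tile(tile)
    return out
```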
In the prior art, wavelet thresholding is generally applied directly to all pixel points of the calculation image. This ignores the fact that noise levels differ between regions: in a filtered image obtained that way, regions with a higher noise level suffer severe loss of detail, and because image contours are closely related to image detail, this loss hinders the subsequent extraction of accurate contours.
Optionally, performing wavelet thresholding on a sub-image to obtain a processed sub-image includes (see the sketch after this list):
calculating the number of wavelet decompositions D for the sub-image;
performing D wavelet decompositions on the sub-image to obtain a plurality of low-frequency coefficients and a plurality of high-frequency coefficients;
soft-thresholding each high-frequency coefficient to obtain processed high-frequency coefficients;
and performing wavelet reconstruction based on the low-frequency coefficients and the processed high-frequency coefficients to obtain the processed sub-image.
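A per-sub-image sketch with PyWavelets (the 'db4' wavelet and the single threshold value are illustrative assumptions; the patent derives a separate threshold thre_q per high-frequency coefficient, described below):

```python
import numpy as np
import pywt

def wavelet_soft_threshold(tile, d, threshold):
    """D-level 2-D wavelet decomposition, soft-thresholding of every
    high-frequency (detail) band, then reconstruction from the retained
    low-frequency coefficients and the processed detail coefficients."""
    h, w = tile.shape
    coeffs = pywt.wavedec2(tile.astype(np.float64), 'db4', level=d)
    approx, details = coeffs[0], coeffs[1:]
    shrunk = [tuple(pywt.threshold(band, threshold, mode='soft') for band in lvl)
              for lvl in details]
    rec = pywt.waverec2([approx] + shrunk, 'db4')
    return rec[:h, :w]  # waverec2 can pad odd-sized inputs by one pixel
```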
Specifically, in the invention the number of wavelet decompositions is not kept the same across sub-images; it is calculated from the actual condition of each sub-image, so that the number of decompositions is related to the noise level and the image detail in the sub-images obtained after soft thresholding is preserved more effectively.
Optionally, the number of wavelet decompositions D is calculated as follows:
calculating the image judgment value imgjd_k of the image corresponding to the high-frequency coefficients obtained by the k-th wavelet decomposition;
calculating the image judgment value imgjd_{k-1} of the image corresponding to the high-frequency coefficients obtained by the (k-1)-th wavelet decomposition;
calculating the absolute value of the difference between imgjd_k and imgjd_{k-1}; if this absolute value is smaller than the set threshold, wavelet decomposition finishes after the k-th decomposition, and the value of k is taken as the number of wavelet decompositions D.
Specifically, the number of wavelet decompositions is derived from the difference between the image judgment values of the high-frequency coefficients obtained by two adjacent decompositions. As the number of decompositions increases, the image judgment value first rises rapidly and then slowly; to improve calculation efficiency while preserving the effect of the wavelet thresholding, decomposition stops when the rapid-growth phase ends. By that point the obtained high-frequency coefficients contain enough noise information, and thresholding that noise information yields sub-images whose noise level is effectively reduced.
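The stopping rule can be sketched as follows; imgjd_of_level is a hypothetical callable computing the image judgment value of the k-th level's high-frequency image (its formula is only partially specified in the text), and the max_levels bound is an added safeguard not present in the patent:

```python
def choose_decomposition_count(tile, imgjd_of_level, epsilon, max_levels=6):
    """Return D: the first k for which |imgjd_k - imgjd_{k-1}| < epsilon,
    i.e. where the judgment value's rapid growth has ended."""
    prev = imgjd_of_level(tile, 1)
    for k in range(2, max_levels + 1):
        cur = imgjd_of_level(tile, k)
        if abs(cur - prev) < epsilon:
            return k
        prev = cur
    return max_levels  # fallback bound (assumption; the patent states none)
```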
Optionally, the calculation function of the image judgment value imgjd_k is defined in terms of the following quantities:
α and β are a first weight and a second weight whose sum is 1; img denotes the set of pixel points in the image corresponding to the high-frequency coefficients; nfimg denotes the total number of pixel points in img; imgpixv_s denotes the pixel value of pixel point s; imgpixsh denotes a preset pixel-value variance; nfimg_q denotes the total number of pixel points whose pixel value is q; and sigamu denotes a set calculation coefficient.
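The formula itself is reproduced in the original publication only as an image. Purely as a hedged reconstruction (an assumption, not the patent's exact expression), a weighted sum of a normalized pixel-value variance and a normalized histogram-entropy term would be consistent with the variable definitions above and the variance/detail reading given below:

```latex
\mathrm{imgjd}_k
  = \alpha \cdot
    \frac{\frac{1}{\mathrm{nfimg}} \sum_{s \in \mathrm{img}}
          \left( \mathrm{imgpixv}_s - \overline{\mathrm{imgpixv}} \right)^{2}}
         {\mathrm{imgpixsh}}
  + \beta \cdot
    \frac{-\sum_{q} \frac{\mathrm{nfimg}_q}{\mathrm{nfimg}}
          \ln \frac{\mathrm{nfimg}_q}{\mathrm{nfimg}}}
         {\mathrm{sigamu}}
```

Here \(\overline{\mathrm{imgpixv}}\) denotes the mean pixel value over img; it is introduced for readability and does not appear in the patent's variable list.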
Specifically, the image judgment value reflects both the variance of the pixel values and the detail content of the image: the larger the variance and the richer the detail, the larger the judgment value. By the characteristics of wavelet decomposition, obtaining more detail also introduces more image noise, so a larger image judgment value means that the image corresponding to the high-frequency coefficients contains more detail information and more noise, which helps gather enough noise information for the wavelet thresholding.
Specifically, the calculation coefficient may be a set image entropy.
Optionally, imgjd_{k-1} is calculated in the same way as imgjd_k.
Optionally, the high-frequency coefficients may be any of LH, HL and HH.
Specifically, the high-frequency coefficients here are the coefficients of the high-frequency part obtained after one wavelet decomposition.
Optionally, soft-thresholding each high-frequency coefficient to obtain processed high-frequency coefficients includes:
for a high-frequency coefficient q, the calculation function of the soft-thresholding threshold is defined in terms of the following quantities:
thre_q denotes the threshold used for soft-thresholding the high-frequency coefficient q; λ denotes a preset calculation parameter, λ∈(0,1); σ denotes the standard deviation of the assumed image noise; mxfl denotes a preset positive integer; imgnosi_q denotes the noise content of the image corresponding to the high-frequency coefficient q; avenosi denotes a preset standard value of noise content; stdthre denotes a set standard deviation; and nq denotes the number of wavelet decompositions corresponding to the high-frequency coefficient q.
Specifically, the left part of the calculation function is the traditional wavelet threshold. That traditional form does not account for the differences in image noise between high-frequency coefficients obtained at different decomposition depths, so a single threshold is not suitable for soft-thresholding coefficients from different numbers of decompositions.
Therefore, the right part of the calculation function incorporates the number of wavelet decompositions and the noise content of the obtained high-frequency coefficients: the greater the noise content, and the more decompositions a high-frequency coefficient has undergone, the larger its soft threshold. This yields a more effective filtering result and preserves the image detail in the processed sub-images more effectively.
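This formula, too, appears only as an image in the original publication. Reading the "traditional" left part as the universal threshold \(\sigma\sqrt{2\ln(\mathrm{mxfl})}\) and the right part as a correction growing with the noise content and the decomposition count, one hedged reconstruction consistent with the description would be:

```latex
\mathrm{thre}_q = \sigma \sqrt{2 \ln(\mathrm{mxfl})}
  + \lambda \cdot nq \cdot
    \frac{\mathrm{imgnosi}_q - \mathrm{avenosi}}{\mathrm{stdthre}}
```

This is an assumed form, not the patent's exact expression.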
Optionally, scaling the foreground image according to the first comparison information to obtain a scaled image includes:
acquiring the length and width of the contour corresponding to the first comparison information;
and scaling the foreground image until the length and width of its contour equal the length and width of the contour corresponding to the first comparison information.
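A scaling sketch with OpenCV, interpreting the "length and width of the contour" as its bounding-box height and width (an interpretation, since the patent does not define the measurement):

```python
import cv2

def scale_foreground(foreground_bgr, ref_length, ref_width):
    """Crop the foreground to its contour's bounding box and resize it so the
    contour's length (height) and width match those of the contour in the
    first comparison information."""
    gray = cv2.cvtColor(foreground_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    crop = foreground_bgr[y:y + h, x:x + w]
    # cv2.resize takes its target size as (width, height)
    return cv2.resize(crop, (ref_width, ref_length), interpolation=cv2.INTER_AREA)
```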
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to encompass such modifications and variations.

Claims (9)

1. The clothing try-on system based on the three-dimensional modeling technology is characterized by comprising a first acquisition module, a storage module, a comparison module, a model fusion module, a second acquisition module and a modeling module;
the first acquisition module is used for shooting the try-on person from a plurality of preset directions to obtain a plurality of two-dimensional images of the try-on person;
the storage module is used for storing a plurality of three-dimensional first human body models and a plurality of two-dimensional projection images obtained by projecting each first human body model in a plurality of preset directions;
the comparison module is used for judging, based on the two-dimensional images of the try-on person and the two-dimensional projection images stored in the storage module, whether the storage module has a first human body model meeting the requirements; if so, the first human body model meeting the requirements is sent to the model fusion module; if not, an acquisition instruction is sent to the second acquisition module;
the model fusion module is used for fusing the first human body model sent by the comparison module with the three-dimensional model of the garment selected by the try-on personnel to obtain a first try-on model;
the second acquisition module is used for acquiring three-dimensional point cloud data of the try-on personnel after receiving the acquisition instruction;
the modeling module is used for modeling based on the three-dimensional point cloud data to obtain a second human body model;
the model fusion module is used for fusing the second human body model with the three-dimensional model of the garment selected by the try-on personnel to obtain a second try-on model;
judging whether the storage module has a first human body model meeting the requirements based on the two-dimensional images of the try-on person and the two-dimensional projection images stored in the storage module comprises:
S1, storing the obtained plurality of two-dimensional images of the try-on person into a set dimimcf;
S2, randomly taking one two-dimensional image from the set dimimcf as the calculation image;
S3, denoting the direction corresponding to the calculation image as B, and acquiring from the storage module the set proimgf of all two-dimensional projection images obtained by projection in direction B;
S4, judging whether proimgf contains a two-dimensional projection image whose similarity to the calculation image is greater than a set similarity threshold; if so, entering S5; if not, the storage module does not have a first human body model meeting the requirements;
S5, deleting the calculation image from the set dimimcf;
S7, storing the calculation image into a set calimgf, and judging whether the number of elements in calimgf equals the number of directions; if so, the storage module stores a first human body model meeting the requirements; if not, entering S8;
S8, judging whether the set dimimcf is an empty set; if so, the storage module does not have a first human body model meeting the requirements; if not, entering S2.
2. The garment try-on system based on the three-dimensional modeling technique according to claim 1, wherein the preset plurality of directions includes a front side of the human body, a left side of the human body, a right side of the human body, and a back side of the human body.
3. The garment try-on system based on the three-dimensional modeling technique according to claim 1, wherein, when the try-on person is photographed in the preset plurality of directions, the shooting height and shooting distance are the same for each direction.
4. The clothing try-on system based on the three-dimensional modeling technology according to claim 1, wherein the process of calculating the similarity between a two-dimensional projection image in proimgf and the calculation image comprises:
S10, acquiring first comparison information of the two-dimensional projection image in proimgf;
S20, acquiring second comparison information of the foreground part of the calculation image;
S30, calculating the similarity between the first comparison information and the second comparison information.
5. The clothing try-on system based on the three-dimensional modeling technique according to claim 4, wherein acquiring the first comparison information of the two-dimensional projection image in the proimgf comprises:
first comparison information of the two-dimensional projection image is acquired from the storage module.
6. The clothing try-on system based on the three-dimensional modeling technique according to claim 4, wherein acquiring the first comparison information of the two-dimensional projection image in the proimgf comprises:
and acquiring the outline of the two-dimensional projection image, and taking the outline of the two-dimensional projection image as first comparison information.
7. The garment try-on system based on the three-dimensional modeling technique of claim 5 or 6, wherein acquiring the second comparison information of the foreground part of the calculation image comprises:
performing image segmentation on the calculation image to obtain the foreground image of the calculation image;
scaling the foreground image according to the first comparison information to obtain a scaled image;
and taking the contour of the scaled image as the second comparison information.
8. The garment try-on system based on the three-dimensional modeling technique according to claim 7, wherein performing image segmentation on the calculation image to obtain its foreground image comprises:
segmenting the calculation image with a threshold-based image segmentation algorithm to obtain the foreground image of the calculation image.
9. The garment try-on system based on the three-dimensional modeling technique according to claim 7, wherein scaling the foreground image according to the first comparison information to obtain a scaled image comprises:
acquiring the length and width of the contour corresponding to the first comparison information;
and scaling the foreground image until the length and width of its contour equal the length and width of the contour corresponding to the first comparison information.
CN202310909264.8A (filed 2023-07-21, priority date 2023-07-21) Clothing try-on system based on three-dimensional modeling technology; granted as CN116883599B (Active)

Priority Applications (1)

Application Number: CN202310909264.8A; Priority Date: 2023-07-21; Filing Date: 2023-07-21; Title: Clothing try-on system based on three-dimensional modeling technology

Applications Claiming Priority (1)

Application Number: CN202310909264.8A; Priority Date: 2023-07-21; Filing Date: 2023-07-21; Title: Clothing try-on system based on three-dimensional modeling technology

Publications (2)

Publication Number Publication Date
CN116883599A CN116883599A (en) 2023-10-13
CN116883599B (en) 2024-02-06

Family

ID=88267844

Family Applications (1)

Application Number: CN202310909264.8A; Title: Clothing try-on system based on three-dimensional modeling technology; Priority Date: 2023-07-21; Filing Date: 2023-07-21; Status: Active

Country Status (1)

Country Link
CN (1) CN116883599B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156966A (en) * 2014-08-11 2014-11-19 石家庄铁道大学 Pseudo 3D real-time virtual fitting method based on mobile terminal
CN107393011A (en) * 2017-06-07 2017-11-24 武汉科技大学 A kind of quick three-dimensional virtual fitting system and method based on multi-structured light vision technique
CN111862315A (en) * 2020-07-25 2020-10-30 南开大学 Human body multi-size measuring method and system based on depth camera
CN113012303A (en) * 2021-03-10 2021-06-22 浙江大学 Multi-variable-scale virtual fitting method capable of keeping clothes texture characteristics
CN114445271A (en) * 2022-04-01 2022-05-06 杭州华鲤智能科技有限公司 Method for generating virtual fitting 3D image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008571B (en) * 2014-06-12 2017-01-18 深圳奥比中光科技有限公司 Human body model obtaining method and network virtual fitting system based on depth camera


Also Published As

Publication number Publication date
CN116883599A (en) 2023-10-13

Similar Documents

Publication Publication Date Title
JP4723834B2 (en) Photorealistic three-dimensional face modeling method and apparatus based on video
CN104573614B (en) Apparatus and method for tracking human face
KR100682889B1 (en) Method and Apparatus for image-based photorealistic 3D face modeling
CN111462206B (en) Monocular structure light depth imaging method based on convolutional neural network
KR101198322B1 (en) Method and system for recognizing facial expressions
CN112487921B (en) Face image preprocessing method and system for living body detection
CN106372629A (en) Living body detection method and device
KR20170008638A (en) Three dimensional content producing apparatus and three dimensional content producing method thereof
US8073253B2 (en) Machine learning based triple region segmentation framework using level set on PACS
KR101759188B1 (en) the automatic 3D modeliing method using 2D facial image
US10860755B2 (en) Age modelling method
KR20170092533A (en) A face pose rectification method and apparatus
CN111723687A (en) Human body action recognition method and device based on neural network
CN110176064A (en) A kind of photogrammetric main object automatic identifying method for generating threedimensional model
CN106778660A (en) A kind of human face posture bearing calibration and device
CN111540021A (en) Hair data processing method and device and electronic equipment
JP2012208759A (en) Method and program for improving accuracy of three-dimensional shape model
US20200036961A1 (en) Constructing a user's face model using particle filters
CN116883599B (en) Clothing try-on system based on three-dimensional modeling technology
JP2005317000A (en) Method for determining set of optimal viewpoint to construct 3d shape of face from 2d image acquired from set of optimal viewpoint
CN112017148A (en) Method and device for extracting single-joint skeleton contour
CN112749713B (en) Big data image recognition system and method based on artificial intelligence
US11816806B2 (en) System and method for foot scanning via a mobile computing device
Han et al. 3D human model reconstruction from sparse uncalibrated views
JP5688514B2 (en) Gaze measurement system, method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant