CN112330634A - Method and system for fine edge matting of clothing - Google Patents

Method and system for fine edge matting of clothing

Info

Publication number
CN112330634A
CN112330634A (application CN202011224064.1A)
Authority
CN
China
Prior art keywords
clothing
image
module
boundary
clothes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011224064.1A
Other languages
Chinese (zh)
Inventor
李小波
石矫龙
李昆仑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengxin Shambala Culture Co ltd
Original Assignee
Hengxin Shambala Culture Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hengxin Shambala Culture Co ltd filed Critical Hengxin Shambala Culture Co ltd
Priority to CN202011224064.1A priority Critical patent/CN112330634A/en
Publication of CN112330634A publication Critical patent/CN112330634A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the technical field of image processing, and in particular to a method and system for fine edge matting of clothing. The method comprises the following steps: step S110, performing feature recognition on a captured clothing picture or clothing video to classify the clothing in the clothing picture or clothing video into a predetermined category; step S120, correcting the preliminary clothing boundary extracted from the clothing picture or clothing video according to the boundary parameters of the category into which the clothing is classified, so as to obtain an accurate clothing boundary; and step S130, segmenting the clothing image from the clothing picture or clothing video according to the accurate clothing boundary. The method and system ensure both that the contour of the extracted subject image is complete and that the subject image is extracted thoroughly.

Description

Method and system for fine edge matting of clothing
Technical Field
The application relates to the technical field of image processing, and in particular to a method and system for fine edge matting of clothing.
Background
When shooting images, it often happens that the color of the subject is close to the color of the background; for example, when photographing clothing, the color of the garment may be close to the color of the space it is in. In this case it is difficult to extract the subject image from the captured picture or video. Even when a subject image can be extracted, its contour has large defects because the subject color is close to the background color, and the extracted image still contains background or other regions, so the subject image is not extracted completely.
Therefore, how to ensure that the contour of the extracted subject image is complete and that the subject image is extracted thoroughly is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides a method and a system for fine edge matting of clothing, so as to ensure that the contour of the extracted subject image is complete and that the subject image is extracted thoroughly.
In order to solve the technical problem, the application provides the following technical scheme:
a method for fine edge matting of a garment, comprising the steps of: step S110, performing characteristic identification on the shot clothing picture or clothing video to classify clothing in the clothing picture or clothing video into a preset category; step S120, correcting the preliminary clothing boundary extracted from the clothing picture or the clothing video according to the clothing boundary parameters classified into the types so as to obtain an accurate clothing boundary; and S130, dividing the clothing image from the clothing picture or the clothing video according to the accurate clothing boundary.
The method for fine edge matting of a garment as described above preferably further comprises the following steps: step S140, calculating the area of each region enclosed by the boundary of the segmented clothing image, so as to smooth and fill the boundary of the segmented clothing image; and step S150, identifying the mannequin parts other than the clothing in the clothing image, and removing the area occupied by the mannequin parts to form an accurate clothing image.
The method for fine edge matting for clothing as described above, wherein preferably, the step of performing feature recognition on the taken clothing picture or clothing video to classify the clothing in the clothing picture or clothing video into a predetermined category, includes the following sub-steps: step S111, converting each frame image in the shot clothing picture or clothing video into a gray level image; step S112, expanding the low gray value part in the gray image, and compressing the high gray value part in the gray image to stretch the gray image; step S113, extracting clothing boundaries from the stretched image; and step S114, obtaining the characteristics of the clothes from the extracted clothes boundaries, and inputting the characteristics of the clothes into a clothes classification model so as to classify the clothes into a preset category.
The method for fine edge matting of clothing as described above, wherein preferably the clothing classification model is formed by training, comprising the following sub-steps: step S101, collecting parameters of different types of clothing in advance to form a feature vector set; step S102, inputting the formed feature vector set into a DBN classification model and training the DBN classification model to obtain different sub-classification models; and step S103, fusing the different sub-classification models to obtain the clothing classification model.
The method for fine edge matting for a garment as described above, wherein preferably, the area of each region surrounded by the boundaries of the garment image is calculated, comprises the following sub-steps: step S141, determining a certain number of break points according to the curvature of each segment of the boundary, and forming a polygonal image through the break points; and S142, carrying out area calculation on the polygonal image to obtain the area of each area surrounded by the boundary of the clothing image.
An edge fine matting system for a garment, comprising: an identification and classification module, a correction module and a segmentation module; the identification and classification module performs feature recognition on the captured clothing picture or clothing video to classify the clothing in the clothing picture or clothing video into a predetermined category; the correction module corrects the preliminary clothing boundary extracted from the clothing picture or the clothing video according to the boundary parameters of the category into which the clothing is classified, so as to obtain an accurate clothing boundary; and the segmentation module segments the clothing image from the clothing picture or the clothing video according to the accurate clothing boundary.
The edge fine matting system for a garment as described above, preferably further comprising a processing module and a mannequin removing module; the processing module calculates the area of each region enclosed by the boundary of the segmented clothing image so as to smooth and fill the boundary of the segmented clothing image; the mannequin removing module identifies the mannequin parts other than the clothing in the clothing image and removes the area occupied by the mannequin parts to form an accurate clothing image.
The edge fine matting system for a garment as described above, wherein preferably the identification categorization module comprises: the device comprises a gray level conversion module, a stretching module, an extraction module and a feature recognition input module; the gray level conversion module converts each frame image in the shot clothing picture or clothing video into a gray level image; the stretching module expands the low gray value part in the gray image and compresses the high gray value part in the gray image so as to stretch the gray image; the extraction module extracts the boundary of the clothing from the stretched image; the characteristic recognition input module obtains the characteristics of the clothing from the extracted clothing boundaries and inputs the characteristics of the clothing into the clothing classification model so as to classify the clothing into a preset classification.
The edge fine matting system for a garment as described above, wherein preferably the training module comprises: the system comprises a feature vector set forming module, a DBN classification model and a fusion module; the feature vector set forming module collects parameters of different types of clothes in advance to form a feature vector set; the DBN classification model trains the formed feature vector set to obtain different sub-classification models; and the fusion module fuses different sub-classification models to obtain a clothing classification model.
The edge fine matting system for a garment as described above, wherein preferably the processing module comprises: a polygon forming module and an area calculating module; the polygon forming module determines a certain number of break points according to the curvature of each section of the boundary, and forms a polygon image through the break points; and the area calculation module is used for calculating the area of the polygonal image to obtain the area of each area surrounded by the boundary of the clothing image.
Compared with the background technology, the method and the system for fine edge matting of the clothes can ensure that the extracted main body image has complete outline and can also ensure that the main body image is extracted thoroughly.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them.
Fig. 1 is a flowchart of an edge fine matting method for a garment provided by an embodiment of the present application;
FIG. 2 is a flow chart of training a formed garment classification model provided by an embodiment of the present application;
FIG. 3 is a flow chart of garment categorization provided by embodiments of the present application;
FIG. 4 is a flow chart of area calculation provided by an embodiment of the present application;
fig. 5 is a schematic diagram of an edge fine matting system for a garment provided by an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
Example one
Referring to fig. 1, fig. 1 is a flowchart of a method for fine edge matting of a garment according to an embodiment of the present application;
the application provides a method for fine edge matting of clothing, which comprises the following steps:
step S110, performing characteristic identification on the shot clothing picture or clothing video to classify clothing in the clothing picture or clothing video into a preset category;
Before the method is used, parameters of different types of clothing are collected in advance, and a clothing classification model is formed by training on the parameters of the different types of clothing. Specifically, as shown in fig. 2, this includes the following sub-steps:
step S101, collecting parameters of different types of clothes in advance to form a feature vector set;
A feature vector set P = {z_1, z_2, ..., z_i, ..., z_n} is formed by collecting parameters of different kinds of garments in advance, where z_1, z_2, ..., z_n are the parameter sets of the individual garments and n is the number of parameter sets; z_i = {a_1, a_2, a_3, a_4, a_5, ..., a_g}, where a_1, a_2, a_3, a_4, a_5, ..., a_g each represent one parameter of a garment and g is the number of parameters of a garment.
For example, the feature vector set P includes a parameter set for each garment among 1000 half-sleeve shirts, 1000 long-sleeve shirts and 1000 pairs of trousers, and the parameter set of each garment includes parameters such as garment length, garment width, sleeve length, sleeve width, trouser length and trouser width. This is of course only an example; in practice, clothing is divided into more detailed categories, for example the half-sleeves are further divided into men's half-sleeves, women's half-sleeves, seven-tenths-length sleeves, nine-tenths-length sleeves and the like.
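By way of illustration only (the code does not form part of the claimed method), a minimal Python sketch of assembling such a feature vector set; the function name and the six field names are assumptions for this example, since the embodiment only lists example parameters:

    from typing import List

    def make_parameter_set(garment_length: float, garment_width: float,
                           sleeve_length: float, sleeve_width: float,
                           trouser_length: float, trouser_width: float) -> List[float]:
        """Build one parameter set z_i = [a_1, ..., a_g] for a single garment."""
        return [garment_length, garment_width, sleeve_length,
                sleeve_width, trouser_length, trouser_width]

    # P = [z_1, ..., z_n]: e.g. one entry per collected half-sleeve,
    # long-sleeve or trouser sample (the values here are made up).
    P = [
        make_parameter_set(68, 52, 24, 18, 0, 0),    # a half-sleeve shirt
        make_parameter_set(72, 54, 60, 16, 0, 0),    # a long-sleeve shirt
        make_parameter_set(0, 0, 0, 0, 102, 28),     # a pair of trousers
    ]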
Step S102, inputting the formed feature vector set into a DBN classification model, and training the DBN classification model to obtain different sub-classification models;
The DBN model is a deep learning algorithm with a good ability to evolve over time. The feature vector set P = {z_1, z_2, ..., z_i, ..., z_n} is input into the DBN classification model, and the DBN classification model is trained with the feature vector set P to obtain different sub-classification models D_t, where t = 1, 2, 3, ..., T; that is, T sub-classification models are obtained.
S103, fusing different sub-classification models to obtain a clothing classification model;
Specifically, each sub-classification model D_t is given a corresponding weight β_t, and the classification model Y is obtained by the following formula:

Y = Σ_{t=1}^{T} β_t · D_t
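A minimal sketch of this fusion step, assuming (as the formula above suggests) that each sub-model D_t outputs a vector of class scores and that Y is their β-weighted sum; the weights and scores below are invented for the example:

    import numpy as np

    def fuse_sub_models(sub_model_outputs: np.ndarray, betas: np.ndarray) -> np.ndarray:
        """Fuse T sub-classifier outputs D_t with weights beta_t: Y = sum_t beta_t * D_t.

        sub_model_outputs: shape (T, n_classes), one row of class scores per D_t.
        betas: shape (T,), the weight beta_t of each sub-model.
        """
        return np.tensordot(betas, sub_model_outputs, axes=1)

    # Example: three sub-models scoring four garment categories.
    D = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.5, 0.3, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1]])
    beta = np.array([0.5, 0.2, 0.3])
    Y = fuse_sub_models(D, beta)
    predicted_category = int(np.argmax(Y))   # index of the winning category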
In use, the garment is put on a mannequin, the garment and the mannequin are placed in the shooting device, and the garment is photographed from different angles to obtain a clothing picture or a clothing video.
And then, carrying out feature recognition on the shot clothing picture or clothing video, and classifying the clothing in the clothing picture or clothing video into a preset category according to the recognized features.
Specifically, referring to fig. 3, the step of performing feature recognition on the taken clothing picture or clothing video to classify the clothing in the clothing picture or clothing video into a predetermined category includes the following sub-steps:
step S111, converting each frame image in the shot clothing picture or clothing video into a gray level image;
The values of the three color components R, G and B of each pixel in each frame of the captured clothing picture or clothing video are set to the same value, so as to gray each frame. Specifically, each frame is grayed by the formula R = G = B = wr·R + wg·G + wb·B, where wr, wg and wb are the weights of R, G and B respectively.
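A minimal sketch of this graying step; the specific weight values are an assumption (the common ITU-R BT.601 luma weights), since the embodiment leaves wr, wg and wb unspecified:

    import numpy as np

    WR, WG, WB = 0.299, 0.587, 0.114   # assumed BT.601 weights for R, G, B

    def to_gray(rgb: np.ndarray) -> np.ndarray:
        """Compute wr*R + wg*G + wb*B per pixel of an (H, W, 3) image, i.e. the
        common value assigned to all three channels, returned as one gray plane."""
        return (WR * rgb[..., 0] + WG * rgb[..., 1] + WB * rgb[..., 2]).astype(rgb.dtype)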
Step S112, expanding the low gray value part in the gray image, and compressing the high gray value part in the gray image to stretch the gray image;
Specifically, each pixel in the gray image is transformed by the formula

s = 1 / (1 + (m / r)^E)

where s is the value of the transformed pixel, r is the original value of the pixel in the gray image (with gray values normalized to [0, 1]), m is a threshold (preferably m = 0.5), and E is a stretching parameter (preferably E = 5); the gray image is stretched by this formula.
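A minimal sketch of the stretching step, assuming the contrast-stretching transform written above and a gray image already scaled to [0, 1]:

    import numpy as np

    def stretch(gray: np.ndarray, m: float = 0.5, E: float = 5.0) -> np.ndarray:
        """Apply s = 1 / (1 + (m / r)^E) pixel-wise to a gray image in [0, 1];
        values well below m are pushed toward 0 and values above m toward 1."""
        r = np.clip(gray.astype(np.float64), 1e-6, 1.0)   # guard against r = 0
        return 1.0 / (1.0 + (m / r) ** E)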
Step S113, extracting clothing boundaries from the stretched image;
Specifically, the absolute gradient value M of each pixel in the stretched image is calculated as

M(x, y) = |∂f/∂x| + |∂f/∂y|

where f(x, y) is the two-dimensional function corresponding to the stretched image and x and y are the coordinates of the pixel. If the absolute gradient value M(x, y) of a pixel is greater than a preset threshold, the pixel is a boundary point of the garment. All boundary points of the garment in the stretched image are calculated in this way, and the set formed by all the boundary points of the garment is taken as the extracted garment boundary.
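A minimal sketch of this boundary extraction, assuming the absolute-gradient form above and simple forward differences for the partial derivatives; the threshold value is left to the caller:

    import numpy as np

    def boundary_mask(img: np.ndarray, threshold: float) -> np.ndarray:
        """Return a boolean mask that is True where M(x, y) = |df/dx| + |df/dy|
        exceeds the preset threshold, i.e. at garment boundary points."""
        f = img.astype(np.float64)
        dx = np.abs(np.diff(f, axis=1, append=f[:, -1:]))   # forward difference in x
        dy = np.abs(np.diff(f, axis=0, append=f[-1:, :]))   # forward difference in y
        return (dx + dy) > threshold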
Step S114, obtaining the characteristics of the clothes from the extracted clothes boundary, and inputting the characteristics of the clothes into a clothes classification model so as to classify the clothes into a preset category;
Specifically, characteristic parameters such as garment length, garment width, sleeve length, sleeve width, trouser length and trouser width are obtained from the extracted clothing boundary, the obtained parameters are input into the clothing classification model Y, and the clothing classification model Y classifies the clothing into one of the preset categories, such as long-sleeve, short-sleeve, long skirt, short skirt, long trousers or short trousers.
Continuing to refer to fig. 1, in step S120 the preliminary clothing boundary extracted from the clothing picture or the clothing video is corrected according to the boundary parameters of the category into which the clothing has been classified, so as to obtain an accurate clothing boundary;
For example, after the garment is classified as a men's long-sleeve shirt according to the clothing classification model Y, the preset boundary parameters of men's long-sleeve shirts are obtained (for example, the proportional relations among parameters such as garment length, garment width, sleeve length and sleeve width). The preliminary garment boundary extracted from the garment picture or garment video in steps S111, S112 and S113 is then corrected according to these preset boundary parameters to obtain an accurate garment boundary.
Step S130, dividing a clothing image from a clothing picture or a clothing video according to the accurate clothing boundary;
and according to the obtained accurate clothing boundary, performing image segmentation in each frame of image in the clothing picture or the clothing video to obtain a clothing image.
Step S140, calculating the area of each region surrounded by the boundaries of the divided clothing images so as to carry out smoothing processing and filling processing on the boundaries of the divided clothing images;
specifically, referring to fig. 4, calculating the area of each region surrounded by the boundaries of the clothing image includes the following sub-steps:
step S141, determining a certain number of break points according to the curvature of each section of the boundary, and forming a polygonal image through the break points;
The boundary of the clothing image encloses closed regions of different sizes, and an area calculation is carried out for each closed region. Because the boundary enclosing each region is usually a curve, when calculating the area of each region a certain number of fold points are determined according to the curvature of each segment of the curve; specifically, a vertex at which the curvature is greater than a threshold is taken as a fold point, and adjacent fold points are then connected so that each region is converted into a polygonal image. For example, w fold points are formed, whose coordinates are, in order, (c_1, d_1), (c_2, d_2), ..., (c_w, d_w).
Step S142, area calculation is carried out on the polygonal image to obtain the area of each area surrounded by the boundary of the clothing image;
Specifically, the area H of the polygonal image is

H = (1/2) · |Σ_{k=1}^{w-1} (c_k · d_{k+1} − c_{k+1} · d_k)|

where w is the number of fold points of the polygonal image, (c_k, d_k) and (c_{k+1}, d_{k+1}) are the coordinates of adjacent fold points, and k and k+1 are the subscripts of the fold-point coordinates c and d, with k taking values from 1 to w−1.
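A minimal sketch of this area calculation, implemented as the standard shoelace formula (the wrap from the last fold point back to the first matches the sum above when the fold-point list repeats its starting point):

    import numpy as np

    def polygon_area(points: np.ndarray) -> float:
        """Shoelace area H of the polygon whose fold points (c_1, d_1), ...,
        (c_w, d_w) are given in boundary order as a (w, 2) array."""
        c, d = points[:, 0], points[:, 1]
        # H = 0.5 * |sum_k (c_k * d_{k+1} - c_{k+1} * d_k)|, indices wrapping.
        return 0.5 * abs(np.dot(c, np.roll(d, -1)) - np.dot(np.roll(c, -1), d))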
Specifically, after the area of each region enclosed by the boundary of the clothing image has been calculated, the boundaries corresponding to regions whose area is smaller than a threshold are removed from the boundary of the clothing image, which makes the boundary of the clothing image smoother and removes the stray points formed along it.
On this basis, after the boundary corresponding to a region whose area is smaller than the threshold has been removed, the region enclosed by that boundary is filled: specifically, a pixel near the boundary is selected, and the region enclosed by the boundary is filled with the characteristics of that pixel.
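A minimal sketch of this clean-up on the binary garment mask, assuming the OpenCV 4 findContours signature; it only merges sub-threshold regions into the mask, while the pixel-characteristic fill described above would act on the color image and is not shown:

    import cv2
    import numpy as np

    def smooth_and_fill(mask: np.ndarray, area_threshold: float) -> np.ndarray:
        """Remove boundary loops that enclose less than area_threshold pixels by
        filling them, which smooths the garment boundary and drops stray points."""
        out = (mask > 0).astype(np.uint8)
        contours, _ = cv2.findContours(out, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
        for cnt in contours:
            if cv2.contourArea(cnt) < area_threshold:
                cv2.drawContours(out, [cnt], -1, color=1, thickness=cv2.FILLED)
        return out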
S150, identifying the mannequin parts except the clothes in the clothes image, and removing the area occupied by the mannequin parts to form an accurate clothes image;
Specifically, the mannequin parts in the clothing image are identified by a convex hull technique. For example, for the hand of the mannequin, the image inside the convex hull is removed, and the removed area is filled according to the background color to form the final clothing image.
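A minimal sketch of this removal step, assuming the mannequin part (for example a hand) has already been detected as a set of pixel coordinates; how that detection is done is outside this sketch:

    import cv2
    import numpy as np

    def remove_mannequin_part(image: np.ndarray, part_points: np.ndarray,
                              background_color: tuple) -> np.ndarray:
        """Fill the convex hull of the detected mannequin part (an (N, 2) array
        of pixel coordinates) with the background color of the image."""
        hull = cv2.convexHull(part_points.astype(np.int32))
        out = image.copy()
        cv2.fillConvexPoly(out, hull, background_color)
        return out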
Example two
Referring to fig. 5, fig. 5 is a schematic diagram of an edge fine matting system for a garment according to an embodiment of the present application;
the application also provides a system for fine edge matting for garments, comprising: a recognition classification module 510, a modification module 520, a segmentation module 530, a processing module 540, and a mannequin removal module 550.
The recognition classification module 510 performs feature recognition on the taken clothing picture or clothing video to classify clothing in the clothing picture or clothing video into a predetermined category.
On the basis, the system for fine edge matting for clothing further comprises a training module 560, and before the method is used, the training module 560 trains the parameters of different types of clothing collected in advance to form a clothing classification model. Specifically, the training module 560 includes: a feature vector set forming module 561, a DBN classification model 562, and a fusion module 563.
The feature vector set forming module 561 forms a feature vector set by pre-collected parameters of different kinds of clothing;
Specifically, a feature vector set P = {z_1, z_2, ..., z_i, ..., z_n} is formed from the parameters of different kinds of garments collected in advance, where z_1, z_2, ..., z_n are the parameter sets of the individual garments and n is the number of parameter sets; z_i = {a_1, a_2, a_3, a_4, a_5, ..., a_g}, where a_1, a_2, a_3, a_4, a_5, ..., a_g each represent one parameter of a garment and g is the number of parameters of a garment.
For example, the feature vector set P includes a parameter set for each garment among 1000 half-sleeve shirts, 1000 long-sleeve shirts and 1000 pairs of trousers, and the parameter set of each garment includes parameters such as garment length, garment width, sleeve length, sleeve width, trouser length and trouser width. This is of course only an example; in practice, clothing is divided into more detailed categories, for example the half-sleeves are further divided into men's half-sleeves, women's half-sleeves, seven-tenths-length sleeves, nine-tenths-length sleeves and the like.
The DBN classification model 562 trains the formed feature vector set to obtain different sub-classification models;
The DBN model is a deep learning algorithm with a good ability to evolve over time. The feature vector set P = {z_1, z_2, ..., z_i, ..., z_n} is input into the DBN classification model, and the DBN classification model is trained with the feature vector set P to obtain different sub-classification models D_t, where t = 1, 2, 3, ..., T; that is, T sub-classification models are obtained.
The fusion module 563 fuses different sub-classification models to obtain a clothing classification model;
Specifically, each sub-classification model D_t is given a corresponding weight β_t, and the classification model Y is obtained by the following formula:

Y = Σ_{t=1}^{T} β_t · D_t
In use, the garment is put on a mannequin, the garment and the mannequin are placed in the shooting device, and the garment is photographed from different angles to obtain a clothing picture or a clothing video.
And then, carrying out feature recognition on the shot clothing picture or clothing video, and classifying the clothing in the clothing picture or clothing video into a preset category according to the recognized features.
Specifically, the recognition classification module 510 includes: a grayscale conversion module 511, a stretching module 512, an extraction module 513, and a feature recognition input module 514.
The gray level conversion module 511 converts each frame image in the taken clothing picture or clothing video into a gray level image;
The values of the three color components R, G and B of each pixel in each frame of the captured clothing picture or clothing video are set to the same value, so as to gray each frame. Specifically, each frame is grayed by the formula R = G = B = wr·R + wg·G + wb·B, where wr, wg and wb are the weights of R, G and B respectively.
The stretching module 512 expands the low gray value part in the gray image and compresses the high gray value part in the gray image to stretch the gray image;
Specifically, each pixel in the gray image is transformed by the formula

s = 1 / (1 + (m / r)^E)

where s is the value of the transformed pixel, r is the original value of the pixel in the gray image (with gray values normalized to [0, 1]), m is a threshold (preferably m = 0.5), and E is a stretching parameter (preferably E = 5); the gray image is stretched by this formula.
The extraction module 513 extracts the boundary of the garment in the stretched image;
Specifically, the absolute gradient value M of each pixel in the stretched image is calculated as

M(x, y) = |∂f/∂x| + |∂f/∂y|

where f(x, y) is the two-dimensional function corresponding to the stretched image and x and y are the coordinates of the pixel. If the absolute gradient value M(x, y) of a pixel is greater than a preset threshold, the pixel is a boundary point of the garment. All boundary points of the garment in the stretched image are calculated in this way, and the set formed by all the boundary points of the garment is taken as the extracted garment boundary.
The feature recognition input module 514 obtains features of the clothing from the extracted clothing boundaries and inputs the features of the clothing into the clothing classification model to classify the clothing into a predetermined classification;
Specifically, characteristic parameters such as garment length, garment width, sleeve length, sleeve width, trouser length and trouser width are obtained from the extracted clothing boundary, the obtained parameters are input into the clothing classification model Y, and the clothing classification model Y classifies the clothing into one of the preset categories, such as long-sleeve, short-sleeve, long skirt, short skirt, long trousers or short trousers.
The correction module 520 corrects the preliminary clothing boundary extracted from the clothing picture or the clothing video according to the boundary parameters of the category into which the clothing is classified, so as to obtain an accurate clothing boundary;
For example, after the garment is classified as a men's long-sleeve shirt according to the clothing classification model Y, the preset boundary parameters of men's long-sleeve shirts are obtained (for example, the proportional relations among parameters such as garment length, garment width, sleeve length and sleeve width). After the preliminary garment boundary has been extracted from the garment picture or garment video, it is corrected according to these preset boundary parameters to obtain an accurate garment boundary.
The segmentation module 530 segments the clothing image from the clothing picture or the clothing video according to the accurate clothing boundary;
and according to the obtained accurate clothing boundary, performing image segmentation in each frame of image in the clothing picture or the clothing video to obtain a clothing image.
The processing module 540 calculates the area of each region surrounded by the boundary of the segmented clothing image so as to perform smoothing processing and filling processing on the boundary of the segmented clothing image;
specifically, the processing module 540 includes: a polygon formation module 541 and an area calculation module 542.
The polygon forming module 541 determines a certain number of break points according to the curvature of each segment of the boundary, and forms a polygon image through the break points;
The boundary of the clothing image encloses closed regions of different sizes, and an area calculation is carried out for each closed region. Because the boundary enclosing each region is usually a curve, when calculating the area of each region a certain number of fold points are determined according to the curvature of each segment of the curve; specifically, a vertex at which the curvature is greater than a threshold is taken as a fold point, and adjacent fold points are then connected so that each region is converted into a polygonal image. For example, w fold points are formed, whose coordinates are, in order, (c_1, d_1), (c_2, d_2), ..., (c_w, d_w).
The area calculation module 542 performs area calculation on the polygonal image to obtain the area of each region surrounded by the boundary of the clothing image;
Specifically, the area H of the polygonal image is

H = (1/2) · |Σ_{k=1}^{w-1} (c_k · d_{k+1} − c_{k+1} · d_k)|

where w is the number of fold points of the polygonal image, (c_k, d_k) and (c_{k+1}, d_{k+1}) are the coordinates of adjacent fold points, and k and k+1 are the subscripts of the fold-point coordinates c and d, with k taking values from 1 to w−1.
Specifically, after the area of each region enclosed by the boundary of the clothing image has been calculated, the boundaries corresponding to regions whose area is smaller than a threshold are removed from the boundary of the clothing image, which makes the boundary of the clothing image smoother and removes the stray points formed along it.
On this basis, after the boundary corresponding to a region whose area is smaller than the threshold has been removed, the region enclosed by that boundary is filled: specifically, a pixel near the boundary is selected, and the region enclosed by the boundary is filled with the characteristics of that pixel.
The mannequin removal module 550 identifies the part of the mannequin other than the garment in the garment image and removes the area occupied by the mannequin part to form an accurate garment image;
Specifically, the mannequin parts in the clothing image are identified by a convex hull technique. For example, for the hand of the mannequin, the image inside the convex hull is removed, and the removed area is filled according to the background color to form the final clothing image.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description is organized by embodiments, not every embodiment contains only a single technical solution; the description is written this way merely for clarity, and those skilled in the art should take the description as a whole, since the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (10)

1. A method for fine edge matting of a garment is characterized by comprising the following steps:
step S110, performing characteristic identification on the shot clothing picture or clothing video to classify clothing in the clothing picture or clothing video into a preset category;
step S120, correcting the preliminary clothing boundary extracted from the clothing picture or the clothing video according to the boundary parameters of the category into which the clothing is classified, so as to obtain an accurate clothing boundary;
and S130, dividing the clothing image from the clothing picture or the clothing video according to the accurate clothing boundary.
2. The method of edge fine matting for a garment according to claim 1, further comprising the steps of:
step S140, calculating the area of each region surrounded by the boundaries of the divided clothing images so as to carry out smoothing processing and filling processing on the boundaries of the divided clothing images;
and S150, identifying the mannequin parts other than the clothing in the clothing image, and removing the area occupied by the mannequin parts to form an accurate clothing image.
3. The method for fine edge matting of clothing according to claim 1 or 2, wherein the step of performing feature recognition on the taken clothing picture or clothing video to classify the clothing in the clothing picture or clothing video into a predetermined category comprises the following sub-steps:
step S111, converting each frame image in the shot clothing picture or clothing video into a gray level image;
step S112, expanding the low gray value part in the gray image, and compressing the high gray value part in the gray image to stretch the gray image;
step S113, extracting clothing boundaries from the stretched image;
and step S114, obtaining the characteristics of the clothes from the extracted clothes boundaries, and inputting the characteristics of the clothes into a clothes classification model so as to classify the clothes into a preset category.
4. The method for fine edge matting of clothing according to claim 3, wherein the clothing classification model is formed by training, comprising the following sub-steps:
step S101, collecting parameters of different types of clothes in advance to form a feature vector set;
step S102, inputting the formed feature vector set into a DBN classification model, and training the DBN classification model to obtain different sub-classification models;
and S103, fusing different sub-classification models to obtain a clothing classification model.
5. The method for fine edge matting for clothing according to claim 2, wherein the area of each region enclosed by the border of the clothing image is calculated, comprising the following sub-steps:
step S141, determining a certain number of break points according to the curvature of each segment of the boundary, and forming a polygonal image through the break points;
and S142, carrying out area calculation on the polygonal image to obtain the area of each area surrounded by the boundary of the clothing image.
6. An edge fine matting system for a garment, comprising: the device comprises an identification classification module, a correction module and a segmentation module;
the identification and classification module is used for carrying out characteristic identification on the shot clothing pictures or clothing videos so as to classify clothing in the clothing pictures or clothing videos into a preset category;
the correction module corrects the preliminary clothing boundary extracted from the clothing picture or the clothing video according to the boundary parameters of the category into which the clothing is classified, so as to obtain an accurate clothing boundary;
the segmentation module segments the clothing image from the clothing picture or the clothing video according to the accurate clothing boundary.
7. The edge fine matting system for a garment according to claim 6, further comprising: the system comprises a processing module and a mannequin removing module;
the processing module calculates the area of each region surrounded by the boundaries of the segmented clothing images so as to carry out smoothing processing and filling processing on the boundaries of the segmented clothing images;
the mannequin removing module identifies the mannequin parts other than the clothing in the clothing image and removes the area occupied by the mannequin parts to form an accurate clothing image.
8. The edge fine matting system for garments according to claim 6 or 7, wherein the identification categorization module comprises: the device comprises a gray level conversion module, a stretching module, an extraction module and a feature recognition input module;
the gray level conversion module converts each frame image in the shot clothing picture or clothing video into a gray level image;
the stretching module expands the low gray value part in the gray image and compresses the high gray value part in the gray image so as to stretch the gray image;
the extraction module extracts the boundary of the clothing from the stretched image;
the characteristic recognition input module obtains the characteristics of the clothing from the extracted clothing boundaries and inputs the characteristics of the clothing into the clothing classification model so as to classify the clothing into a preset classification.
9. The edge fine matting system for a garment according to claim 8, wherein the training module comprises: the system comprises a feature vector set forming module, a DBN classification model and a fusion module;
the feature vector set forming module collects parameters of different types of clothes in advance to form a feature vector set;
the DBN classification model trains the formed feature vector set to obtain different sub-classification models;
and the fusion module fuses different sub-classification models to obtain a clothing classification model.
10. The edge fine matting system for a garment according to claim 7, wherein the processing module comprises: a polygon forming module and an area calculating module;
the polygon forming module determines a certain number of break points according to the curvature of each section of the boundary, and forms a polygon image through the break points;
and the area calculation module is used for calculating the area of the polygonal image to obtain the area of each area surrounded by the boundary of the clothing image.
CN202011224064.1A 2020-11-05 2020-11-05 Method and system for fine edge matting of clothing Pending CN112330634A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011224064.1A CN112330634A (en) 2020-11-05 2020-11-05 Method and system for fine edge matting of clothing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011224064.1A CN112330634A (en) 2020-11-05 2020-11-05 Method and system for fine edge matting of clothing

Publications (1)

Publication Number Publication Date
CN112330634A true CN112330634A (en) 2021-02-05

Family

ID=74315936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011224064.1A Pending CN112330634A (en) 2020-11-05 2020-11-05 Method and system for fine edge matting of clothing

Country Status (1)

Country Link
CN (1) CN112330634A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220180551A1 (en) * 2020-12-04 2022-06-09 Shopify Inc. System and method for generating recommendations during image capture of a product
CN116486116A (en) * 2023-06-16 2023-07-25 济宁大爱服装有限公司 Machine vision-based method for detecting abnormality of hanging machine for clothing processing

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0865493A (en) * 1994-08-22 1996-03-08 Kyocera Corp Image processor
US20110279475A1 (en) * 2008-12-24 2011-11-17 Sony Computer Entertainment Inc. Image processing device and image processing method
CN103679181A (en) * 2013-11-25 2014-03-26 浙江大学 Machine vision based in-pigsty pig mark recognition method
CN106384126A (en) * 2016-09-07 2017-02-08 东华大学 Clothes pattern identification method based on contour curvature feature points and support vector machine
CN106570856A (en) * 2016-08-31 2017-04-19 天津大学 Common carotid artery intima-media thickness measuring device and method combining level set segmentation and dynamic programming
CN106911904A (en) * 2015-12-17 2017-06-30 通用电气公司 Image processing method, image processing system and imaging system
US20180268549A1 (en) * 2014-06-12 2018-09-20 Koninklijke Philips N.V. Optimization of parameters for segmenting an image
CN108711161A (en) * 2018-06-08 2018-10-26 Oppo广东移动通信有限公司 A kind of image partition method, image segmentation device and electronic equipment
CN109145875A (en) * 2018-09-28 2019-01-04 上海阅面网络科技有限公司 Black surround glasses minimizing technology and device in a kind of facial image
US20190206052A1 (en) * 2017-12-29 2019-07-04 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Carpal segmentation and recognition method and system, terminal and readable storage medium
CN110113510A (en) * 2019-05-27 2019-08-09 杭州国翌科技有限公司 A kind of real time video image Enhancement Method and high speed camera system
CN110838131A (en) * 2019-11-04 2020-02-25 网易(杭州)网络有限公司 Method and device for realizing automatic cutout, electronic equipment and medium
CN111696063A (en) * 2020-06-15 2020-09-22 恒信东方文化股份有限公司 Repairing method and system for clothes multi-angle shot pictures

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0865493A (en) * 1994-08-22 1996-03-08 Kyocera Corp Image processor
US20110279475A1 (en) * 2008-12-24 2011-11-17 Sony Computer Entertainment Inc. Image processing device and image processing method
CN103679181A (en) * 2013-11-25 2014-03-26 浙江大学 Machine vision based in-pigsty pig mark recognition method
US20180268549A1 (en) * 2014-06-12 2018-09-20 Koninklijke Philips N.V. Optimization of parameters for segmenting an image
CN106911904A (en) * 2015-12-17 2017-06-30 通用电气公司 Image processing method, image processing system and imaging system
CN106570856A (en) * 2016-08-31 2017-04-19 天津大学 Common carotid artery intima-media thickness measuring device and method combining level set segmentation and dynamic programming
CN106384126A (en) * 2016-09-07 2017-02-08 东华大学 Clothes pattern identification method based on contour curvature feature points and support vector machine
US20190206052A1 (en) * 2017-12-29 2019-07-04 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Carpal segmentation and recognition method and system, terminal and readable storage medium
CN108711161A (en) * 2018-06-08 2018-10-26 Oppo广东移动通信有限公司 A kind of image partition method, image segmentation device and electronic equipment
CN109145875A (en) * 2018-09-28 2019-01-04 上海阅面网络科技有限公司 Black surround glasses minimizing technology and device in a kind of facial image
CN110113510A (en) * 2019-05-27 2019-08-09 杭州国翌科技有限公司 A kind of real time video image Enhancement Method and high speed camera system
CN110838131A (en) * 2019-11-04 2020-02-25 网易(杭州)网络有限公司 Method and device for realizing automatic cutout, electronic equipment and medium
CN111696063A (en) * 2020-06-15 2020-09-22 恒信东方文化股份有限公司 Repairing method and system for clothes multi-angle shot pictures

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HOU A-LIN等: "Garment Image Retrieval Based on Multi-features", 2010 INTERNATIONAL CONFERENCE ON COMPUTER, MECHATRONICS, CONTROL AND ELECTRONIC ENGINEERING, vol. 6, pages 194 - 197, XP031780960 *
AN LIXIN: "Research on Extraction and Pattern Recognition of Garment Style Drawings", China Doctoral Dissertations Full-text Database (Electronic Journal), Information Science and Technology, no. 07, pages 138-25 *
RUAN QIUQI, RUAN YUZHI (trans.); RAFAEL C. GONZALEZ, RICHARD E. WOODS: "Digital Image Processing, 4th Edition" (Foreign Electronics and Communication Textbook Series), Publishing House of Electronics Industry, 31 May 2020, pages 75-136 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220180551A1 (en) * 2020-12-04 2022-06-09 Shopify Inc. System and method for generating recommendations during image capture of a product
US11645776B2 (en) * 2020-12-04 2023-05-09 Shopify Inc. System and method for generating recommendations during image capture of a product
US11967105B2 (en) 2020-12-04 2024-04-23 Shopify Inc. System and method for generating recommendations during image capture of a product
CN116486116A (en) * 2023-06-16 2023-07-25 济宁大爱服装有限公司 Machine vision-based method for detecting abnormality of hanging machine for clothing processing
CN116486116B (en) * 2023-06-16 2023-08-29 济宁大爱服装有限公司 Machine vision-based method for detecting abnormality of hanging machine for clothing processing

Similar Documents

Publication Publication Date Title
CN108932493B (en) Facial skin quality evaluation method
Ghimire et al. A robust face detection method based on skin color and edges
CN109871750B (en) Gait recognition method based on skeleton diagram sequence abnormal joint repair
CN109670430A (en) A kind of face vivo identification method of the multiple Classifiers Combination based on deep learning
CN113191216B (en) Multi-user real-time action recognition method and system based on posture recognition and C3D network
CN112330634A (en) Method and system for fine edge matting of clothing
CN113469144A (en) Video-based pedestrian gender and age identification method and model
Weerasekera et al. Robust asl fingerspelling recognition using local binary patterns and geometric features
CN113724273A (en) Edge light and shadow fusion method based on neural network regional target segmentation
Vasconcelos et al. Methods to automatically build point distribution models for objects like hand palms and faces represented in images
CN112487926A (en) Scenic spot feeding behavior identification method based on space-time diagram convolutional network
JP2003044853A (en) Face detection device, face pose detection device, partial image extraction device and methods for the devices
Vasconcelos et al. Methodologies to build automatic point distribution models for faces represented in images
CN109165551B (en) Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics
Gowda Face verification across age progression using facial feature extraction
Bourbakis et al. Skin-based face detection-extraction and recognition of facial expressions
Saranya et al. An approach towards ear feature extraction for human identification
CN114973384A (en) Electronic face photo collection method based on key point and visual salient target detection
CN106815848A (en) Portrait background separation and contour extraction method based on grubcut and artificial intelligence
CN113378799A (en) Behavior recognition method and system based on target detection and attitude detection framework
CN111696063B (en) Repairing method and system for clothing multi-angle shot pictures
CN110309554B (en) Video human body three-dimensional reconstruction method and device based on garment modeling and simulation
JP7253967B2 (en) Object matching device, object matching system, object matching method, and computer program
Liu A deep learning method for suit detection in images
Ghimire et al. A lighting insensitive face detection method on color images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination