CN112116000A - Image identification method for clothing type - Google Patents
- Publication number
- CN112116000A (application CN202010971943.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- neural network
- network model
- clothing
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/24 — Classification techniques (G — Physics; G06 — Computing; Calculating or Counting; G06F — Electric digital data processing; G06F18/00 — Pattern recognition; G06F18/20 — Analysing)
- G06N3/045 — Combinations of networks (G06N — Computing arrangements based on specific computational models; G06N3/00 — Computing arrangements based on biological models; G06N3/02 — Neural networks; G06N3/04 — Architecture, e.g. interconnection topology)
- G06T7/13 — Edge detection (G06T — Image data processing or generation, in general; G06T7/00 — Image analysis; G06T7/10 — Segmentation; Edge detection)
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI] (G06V — Image or video recognition or understanding; G06V10/00 — Arrangements for image or video recognition or understanding; G06V10/20 — Image preprocessing)
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components (G06V10/40 — Extraction of image or video features)
- G06V10/56 — Extraction of image or video features relating to colour (G06V10/40 — Extraction of image or video features)
- G06V2201/07 — Target detection (G06V2201/00 — Indexing scheme relating to image or video recognition or understanding)
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Biomedical Technology (AREA)
- Evolutionary Biology (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
An image identification method for clothing types comprises the following steps: cropping the clothing region from the picture; image preprocessing, in which the edges of the image are detected and the edge information and the color pixels of the image are normalized; computing with an artificial neural network model, in which the image information and the edge information are fused and classified in different neural network models; performing semi-automatic iterative optimization of the artificial neural network model until the network model is optimal; and applying the optimal neural network model to actual images to obtain the clothing type. The invention provides a method for identifying clothing categories in which the clothing region is cropped, edge information of the picture is added as an auxiliary input alongside the color information, the edge information is exploited at multiple stages of the artificial neural network model, and the model is continuously optimized through a semi-automatic iteration process, improving identification accuracy.
Description
Technical Field
The invention relates to the technical field of image recognition, and in particular to an image recognition method for clothing types.
Background
With the development of artificial intelligence technology, shopping malls have begun their digital transformation. They want to know more about the customers in their stores in order to provide more precise services.
At present, image processing is performed on general data sets to infer some of people's consumption trends, but there is no image recognition technology dedicated to computing visual features of the styles and types of people's clothing.
In addition, current image identification of clothing types suffers from problems such as low accuracy and slow model convergence.
Summary of the Invention
The invention aims to provide an image identification method for clothing types, solving the technical problems in the prior art that shopping centers wish to learn more about their customers' clothing, and that current image identification of clothing types has low accuracy and slow model convergence.
To this end, the invention discloses an image identification method for clothing types.
The image identification method for the clothing type comprises the following steps:
step S1: cropping the clothing region from the picture;
step S2: image preprocessing: detecting the edges of the image and normalizing the edge information and the color pixels of the image;
step S3: computing the artificial neural network model: fusing and classifying the image information and the edge information in different neural network models;
step S4: performing semi-automatic iterative optimization of the artificial neural network model until the network model is optimal;
step S5: applying the optimal neural network model to actual images to obtain the clothing type.
The invention provides a complete solution for clothing-type identification: the clothing region is efficiently cropped using different strategies in different scenes; edge information extracted for clothing types is added as an auxiliary input alongside the color information; the edge information is exploited at multiple stages of the artificial neural network model; and the model is continuously optimized through a semi-automatic iteration workflow, improving the accuracy of image identification of clothing styles.
The invention thus meets shopping centers' need to learn more about their customers' clothing.
Drawings
FIG. 1 is a schematic diagram of the overall flow of the present invention;
FIG. 2 is a flowchart of step S1;
FIG. 3 is a flowchart of step S2;
FIG. 4 is a flowchart of step S3;
FIG. 5 is a flowchart of step S4.
Detailed Description
The invention will now be further elucidated and described with reference to the embodiments and the accompanying drawings.
Referring to FIG. 1, the invention discloses an image recognition method for clothing types, comprising the following steps:
step S1: and (6) intercepting the clothing area of the picture. The method for intercepting the clothing region comprises two methods of obtaining the clothing position according to the position calculation of the face and obtaining the clothing position by carrying out target detection on the image.
The intercepting process is as shown in fig. 2, when the user logs in, the face recognition is carried out on the user, and under the use scene, the system knows the face position information of the user, and then the clothing position information is calculated according to the face position information; when the pictures come from the internet collected data or other scenes and the system does not have face position information of the user, the target detection model aiming at the clothing is used to obtain the clothing region.
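The face-based branch above can be sketched as a simple geometric heuristic: widen the face box to roughly shoulder width and extend it downward over the torso. The offsets and multipliers below are illustrative assumptions, not values given in the patent.

```python
def clothing_region_from_face(face_box, img_w, img_h):
    """Estimate a clothing bounding box (x, y, w, h) below a detected face.

    face_box: (x, y, w, h) of the face in pixel coordinates.
    The factors used here (1 face-width on each side, 4 face-heights
    downward) are illustrative assumptions.
    """
    x, y, w, h = face_box
    top = min(y + h, img_h)           # clothing starts below the chin
    left = max(x - w, 0)              # widen to roughly shoulder width
    right = min(x + 2 * w, img_w)
    bottom = min(top + 4 * h, img_h)  # extend downward over the torso
    return (left, top, right - left, bottom - top)
```

The returned box is clamped to the image bounds, so it stays valid even when the face sits near an image edge.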
Step S2: image preprocessing: detecting the edges of the image and normalizing the edge information and the color pixels. Edge detection comprises extracting the edge information of the image with the Scharr operator and normalizing the edge information and the color information separately.
Specifically, the method adds edge detection to the image preprocessing: on top of the color picture input, the edge information of the picture is added as a second input, which helps the model extract the style-related features contained in the clothing image. As shown in FIG. 3, the preprocessing flow extracts the edge information with the Scharr operator, normalizes the edge information and the color information, and feeds both into the clothing-style identification model. Compared with other edge operators such as Canny, the edge information extracted with the Scharr operator works better here, giving the model higher accuracy and faster convergence.
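As a sketch of this preprocessing step, the NumPy code below computes a Scharr gradient magnitude and normalizes it to [0, 1]. The 3×3 Scharr kernels are standard; the valid-mode convolution and the max-normalization are illustrative assumptions about details the patent does not specify.

```python
import numpy as np

# Standard 3x3 Scharr derivative kernels
SCHARR_X = np.array([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]], dtype=np.float64)
SCHARR_Y = SCHARR_X.T

def conv2d(img, kernel):
    """Naive valid-mode 2D correlation; enough for an edge-magnitude sketch."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def scharr_edges(gray):
    """Gradient magnitude of a grayscale image, normalized to [0, 1]."""
    gx = conv2d(gray, SCHARR_X)
    gy = conv2d(gray, SCHARR_Y)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return mag / mag.max() if mag.max() > 0 else mag
```

In practice one would use `cv2.Scharr` from OpenCV instead of a hand-rolled convolution; the loop above only makes the arithmetic explicit.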
Referring to FIG. 4, step S3: computing the artificial neural network model: the image information and the edge information are fused and classified in different neural network models.
Step S3 comprises:
step S31: extracting features of the image color information with a convolutional network;
step S32: fusing the edge information with the image color information;
step S33: extracting bottom-level features from the edge information and the image color information through bottom-level convolutional networks, and fusing the bottom-level features;
step S34: further extracting the color features and the edge features through an inverted-residual bottleneck convolution and a convolutional network, respectively, to obtain high-level features;
step S35: fusing the high-level features, performing a final feature extraction through a top-level convolution, applying global average pooling for dimensionality reduction, and classifying through a fully connected layer to obtain the clothing category.
The edge information can be fused with the color information at three stages. At the input stage, the edge information can be concatenated directly onto the color information to form 4-channel input data. Adding edge information at the input stage leaves the model's performance on the training set unchanged, but brings higher accuracy on the verification set.
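The input-stage variant — stacking the edge map onto the RGB image as a fourth channel — is a one-line concatenation in NumPy. Channel order and the [0, 1] value range are assumptions for illustration:

```python
import numpy as np

def make_four_channel(rgb, edge):
    """Concatenate a normalized edge map onto an RGB image as a 4th channel.

    rgb:  (H, W, 3) array in [0, 1]
    edge: (H, W) array in [0, 1]
    Returns an (H, W, 4) array suitable as 4-channel network input.
    """
    return np.concatenate([rgb, edge[..., None]], axis=-1)
```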
The color image and the edge information pass through bottom-level convolutional layers to extract bottom-level features of the same size; the bottom-level features are then fused by element-wise addition, which works better than channel concatenation.
After the bottom-level features are added, the color features and the edge features are further extracted through an inverted-residual bottleneck convolution module and a plain convolution module, respectively, yielding high-level features of the same size and higher abstraction. The module used for the edge features is deliberately different from the inverted-residual bottleneck module used for the color features: experiments show that the edge information no longer needs a complex convolution module for feature extraction, so a relatively simple module better exploits the attention-like role of the edge information.
The high-level color and edge features are fused one last time by addition, a final feature extraction is performed by the top-level convolutional layer, feature dimensionality is reduced by a global average pooling layer, and classification is performed by a fully connected layer.
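The final stage — additive fusion, global average pooling, and a fully connected classifier — can be shown in miniature with the convolutions replaced by ready-made feature maps. Shapes, weights, and the softmax output are illustrative assumptions, not the patent's actual layer sizes:

```python
import numpy as np

def classify_fused(color_feat, edge_feat, weights, bias):
    """Miniature of the fusion head: add the two (H, W, C) feature maps,
    global-average-pool to a (C,) vector, then apply a fully connected
    layer with a numerically stable softmax."""
    fused = color_feat + edge_feat        # additive feature fusion
    pooled = fused.mean(axis=(0, 1))      # global average pooling -> (C,)
    logits = pooled @ weights + bias      # fully connected layer
    exp = np.exp(logits - logits.max())   # stable softmax
    return exp / exp.sum()
```

With identical logits for two classes, the softmax output is uniform, which makes the arithmetic easy to check by hand.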
Step S4: and carrying out semi-automatic iterative optimization on the artificial neural network model, and finally optimizing the network model.
Step S41: calculating a new image through a neural network model, obtaining the confidence coefficient of the clothing type, comparing the obtained confidence coefficient with a set threshold value, judging the reliability of the judgment result of the neural network model, and adding the new image into data according to the corresponding type;
step S42: manually performing spot check on the calculated image data with high confidence level according to a proportion, and adjusting a set confidence level threshold value according to a spot check result;
step S43: and when the confidence coefficient of the type of the newly-input image calculated by the neural network model is smaller than a preset confidence coefficient threshold value, considering that the judgment result of the neural network model is not credible, manually determining the correct type corresponding to the image, storing the manually-determined image as newly-added data, updating the neural network model, and continuously iterating until the accuracy rate of the neural network model for identifying the garment type in the image is continuously improved.
The number and variety of pictures in the data set largely determine the accuracy and generalization ability of the artificial neural network model, so the model is continuously optimized with a semi-automatic iteration workflow, shown in FIG. 5. A new image is fed into the model, which computes a confidence for the clothing category. When the confidence is above the set threshold, the model's decision is considered reliable and the new image is added to the data of the corresponding category. To guarantee the quality of the newly added data, high-confidence errors must be kept to a minimum; high-confidence images are therefore spot-checked at a certain proportion to verify that the rate of high-confidence misclassification stays within an acceptable range, and otherwise the confidence threshold is adjusted. When the confidence given by the model is below the preset threshold, the decision is considered unreliable; the correct category is determined through manual verification, and the verified image is used as new data to update the model. Through continuous iteration, the accuracy and generalization ability of the clothing-style identification model keep improving.
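The routing decision at the heart of this loop is a simple threshold test. A minimal sketch follows; the 0.9 default threshold is an illustrative assumption, since the patent only says the threshold is tuned via manual spot checks:

```python
def triage(confidence, label, threshold=0.9):
    """Route one model prediction in the semi-automatic loop (steps S41-S43).

    Returns a (route, label) pair: high-confidence predictions are added
    to the dataset under their predicted label, low-confidence ones are
    sent to a human annotator for relabeling.
    """
    if confidence >= threshold:
        return ("auto_accept", label)   # added to the dataset directly
    return ("manual_review", label)     # queued for manual verification
```

If spot checks later reveal too many high-confidence errors, raising `threshold` shifts more images into the manual-review queue.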
The invention provides a complete solution for clothing-type identification: the clothing region is efficiently cropped using different strategies in different scenes; edge information extracted for clothing types is added as an auxiliary input alongside the color information; the edge information is exploited at multiple stages of the artificial neural network model; and the model is continuously optimized through a semi-automatic iteration workflow, improving the accuracy of image identification of clothing styles.
The invention thus meets shopping centers' need to learn more about their customers' clothing.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit its scope of protection. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions of the invention without departing from their spirit and scope.
Claims (5)
1. An image identification method for clothing types, characterized by comprising the following steps:
step S1: cropping the clothing region from the picture;
step S2: image preprocessing: detecting the edges of the image and normalizing the edge information and the color pixels of the image;
step S3: computing the artificial neural network model: fusing and classifying the image information and the edge information in different neural network models;
step S4: performing semi-automatic iterative optimization of the artificial neural network model until the network model is optimal;
step S5: applying the optimal neural network model to actual images to obtain the clothing type.
2. The image recognition method for the clothing type according to claim 1, wherein cropping the picture clothing region in step S1 comprises: calculating the clothing position from the position of the face, and obtaining the clothing position by performing target detection on the image.
3. The image recognition method for the clothing type according to claim 1, wherein the edge detection of the image comprises: extracting the edge information of the image with the Scharr operator, and normalizing the edge information and the color information of the image separately.
4. The image recognition method for the clothing type according to claim 1, wherein step S3 comprises:
step S31: extracting features of the image color information with a convolutional network;
step S32: fusing the edge information with the image color information;
step S33: extracting bottom-level features from the edge information and the image color information through bottom-level convolutional networks, and fusing the bottom-level features;
step S34: further extracting the color features and the edge features through an inverted-residual bottleneck convolution and a convolutional network, respectively, to obtain high-level features;
step S35: fusing the high-level features, performing a final feature extraction through a top-level convolution, applying global average pooling for dimensionality reduction, and classifying through a fully connected layer to obtain the clothing category.
5. The image recognition method for the clothing type according to claim 4, wherein step S4 comprises:
step S41: running a new image through the neural network model to obtain a confidence for the clothing type, comparing the obtained confidence with a set threshold to judge the reliability of the model's decision, and adding the new image to the data of the corresponding category;
step S42: manually spot-checking a proportion of the high-confidence image data, and adjusting the set confidence threshold according to the spot-check results;
step S43: when the confidence computed by the neural network model for a newly input image is below the preset confidence threshold, considering the model's decision unreliable, manually determining the correct category of the image, storing the manually labeled image as new data, updating the neural network model, and continuing to iterate so that the accuracy of the neural network model in identifying clothing types in images keeps improving.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010971943.4A CN112116000A (en) | 2020-09-16 | 2020-09-16 | Image identification method for clothing type |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010971943.4A CN112116000A (en) | 2020-09-16 | 2020-09-16 | Image identification method for clothing type |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112116000A true CN112116000A (en) | 2020-12-22 |
Family
ID=73803489
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010971943.4A Pending CN112116000A (en) | 2020-09-16 | 2020-09-16 | Image identification method for clothing type |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112116000A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110211754A1 (en) * | 2010-03-01 | 2011-09-01 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
CN102521565A (en) * | 2011-11-23 | 2012-06-27 | 浙江晨鹰科技有限公司 | Garment identification method and system for low-resolution video |
JP2015149008A (en) * | 2014-02-07 | 2015-08-20 | 伊藤 庸一郎 | authentication system and authentication method |
CN107220949A (en) * | 2017-05-27 | 2017-09-29 | 安徽大学 | The self adaptive elimination method of moving vehicle shade in highway monitoring video |
CN107330451A (en) * | 2017-06-16 | 2017-11-07 | 西交利物浦大学 | Clothes attribute retrieval method based on depth convolutional neural networks |
CN108764062A (en) * | 2018-05-07 | 2018-11-06 | 西安工程大学 | A kind of clothing cutting plate recognition methods of view-based access control model |
CN110414411A (en) * | 2019-07-24 | 2019-11-05 | 中国人民解放军战略支援部队航天工程大学 | The sea ship candidate region detection method of view-based access control model conspicuousness |
CN110674884A (en) * | 2019-09-30 | 2020-01-10 | 山东浪潮人工智能研究院有限公司 | Image identification method based on feature fusion |
CN110825899A (en) * | 2019-09-18 | 2020-02-21 | 武汉纺织大学 | Clothing image retrieval method integrating color features and residual network depth features |
CN110880165A (en) * | 2019-10-15 | 2020-03-13 | 杭州电子科技大学 | Image defogging method based on contour and color feature fusion coding |
CN111199248A (en) * | 2019-12-26 | 2020-05-26 | 东北林业大学 | Clothing attribute detection method based on deep learning target detection algorithm |
- 2020-09-16: CN application CN202010971943.4A filed (patent CN112116000A/en, status active, Pending)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110211754A1 (en) * | 2010-03-01 | 2011-09-01 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
CN102521565A (en) * | 2011-11-23 | 2012-06-27 | 浙江晨鹰科技有限公司 | Garment identification method and system for low-resolution video |
JP2015149008A (en) * | 2014-02-07 | 2015-08-20 | 伊藤 庸一郎 | authentication system and authentication method |
CN107220949A (en) * | 2017-05-27 | 2017-09-29 | 安徽大学 | The self adaptive elimination method of moving vehicle shade in highway monitoring video |
CN107330451A (en) * | 2017-06-16 | 2017-11-07 | 西交利物浦大学 | Clothes attribute retrieval method based on depth convolutional neural networks |
CN108764062A (en) * | 2018-05-07 | 2018-11-06 | 西安工程大学 | A kind of clothing cutting plate recognition methods of view-based access control model |
CN110414411A (en) * | 2019-07-24 | 2019-11-05 | 中国人民解放军战略支援部队航天工程大学 | The sea ship candidate region detection method of view-based access control model conspicuousness |
CN110825899A (en) * | 2019-09-18 | 2020-02-21 | 武汉纺织大学 | Clothing image retrieval method integrating color features and residual network depth features |
CN110674884A (en) * | 2019-09-30 | 2020-01-10 | 山东浪潮人工智能研究院有限公司 | Image identification method based on feature fusion |
CN110880165A (en) * | 2019-10-15 | 2020-03-13 | 杭州电子科技大学 | Image defogging method based on contour and color feature fusion coding |
CN111199248A (en) * | 2019-12-26 | 2020-05-26 | 东北林业大学 | Clothing attribute detection method based on deep learning target detection algorithm |
Non-Patent Citations (1)
Title |
---|
ZHAO Weili, "Image retrieval of ethnic-minority clothing based on multi-feature fusion", Shandong Industrial Technology, no. 01, 1 January 2017 (2017-01-01), pages 293-294 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110533097B (en) | Image definition recognition method and device, electronic equipment and storage medium | |
US9129191B2 (en) | Semantic object selection | |
WO2019120115A1 (en) | Facial recognition method, apparatus, and computer apparatus | |
CN109165645B (en) | Image processing method and device and related equipment | |
CN108416902B (en) | Real-time object identification method and device based on difference identification | |
WO2021174819A1 (en) | Face occlusion detection method and system | |
CN110569878A (en) | Photograph background similarity clustering method based on convolutional neural network and computer | |
CN110532970B (en) | Age and gender attribute analysis method, system, equipment and medium for 2D images of human faces | |
CN113160192A (en) | Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background | |
WO2018121287A1 (en) | Target re-identification method and device | |
CN112529026A (en) | Method for providing AI model, AI platform, computing device and storage medium | |
CN112801008A (en) | Pedestrian re-identification method and device, electronic equipment and readable storage medium | |
CN113449704B (en) | Face recognition model training method and device, electronic equipment and storage medium | |
CN111178161A (en) | Vehicle tracking method and system based on FCOS | |
CN111191662B (en) | Image feature extraction method, device, equipment, medium and object matching method | |
CN112417947B (en) | Method and device for optimizing key point detection model and detecting face key points | |
CN114511452B (en) | Remote sensing image retrieval method integrating multi-scale cavity convolution and triplet attention | |
CN107992807A (en) | A kind of face identification method and device based on CNN models | |
CN111275694B (en) | Attention mechanism guided progressive human body division analysis system and method | |
CN112149601A (en) | Occlusion-compatible face attribute identification method and device and electronic equipment | |
CN111967399A (en) | Improved fast RCNN behavior identification method | |
CN117115614B (en) | Object identification method, device, equipment and storage medium for outdoor image | |
WO2017124336A1 (en) | Method and system for adapting deep model for object representation from source domain to target domain | |
CN113706550A (en) | Image scene recognition and model training method and device and computer equipment | |
US20220292587A1 (en) | Method and apparatus for displaying product review information, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||