CN108109055B - Cross-scene clothing retrieval method based on image rendering - Google Patents


Info

Publication number
CN108109055B
Authority
CN
China
Prior art keywords
clothing
retrieval
image
scene
style
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810015943.XA
Other languages
Chinese (zh)
Other versions
CN108109055A (en)
Inventor
李宗民
李冠林
李妍特
刘玉杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China
Priority to CN201810015943.XA
Publication of CN108109055A
Application granted
Publication of CN108109055B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0623Item investigation
    • G06Q30/0625Directed, with specific intent or strategy
    • G06Q30/0627Directed, with specific intent or strategy using item specifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Computer Graphics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a cross-scene clothing retrieval method based on image rendering, which comprises the following steps: a, eliminating the difference between clothing commodity images and daily clothing images by rendering the clothing commodity images; b, classifying fine-grained clothing styles: local clothing regions are determined by pose estimation, classification learning is then performed with sparse coding for the different local regions, and corresponding clothing style recognition models are obtained; c, clothing retrieval: depth features are extracted from the clothing torso region for preliminary retrieval, attribute recognition is performed with the trained style classifiers, and the retrieval result is reordered and optimized. The invention solves the problem of cross-domain clothing retrieval with high retrieval and style recognition precision.

Description

Cross-scene clothing retrieval method based on image rendering
Technical Field
The invention belongs to the field of computer vision, relates to an important application technology in clothing retrieval, and particularly relates to a cross-scene clothing retrieval method based on machine learning.
Background
With the rapid development of internet information technology, online shopping has become a new trend in modern society, and the retail model has shifted from traditional shopping channels to online sales. Clothing is especially well suited to online display and marketing because of its wide variety, low price, and broad consumer base, so clothing e-commerce has developed at an unprecedented pace. The number of online clothing stores keeps growing, clothing data on the network is expanding rapidly, and the demand for targeted clothing retrieval within large amounts of clothing data is increasingly urgent. Clothing retrieval technology can help consumers find the clothing they have in mind on the internet.
Online clothing retrieval has great commercial value, but accurate clothing retrieval is also very difficult, mainly for the following reasons: 1. Clothing images have diverse attributes, such as varied materials, complex textures, and different colors. 2. The background of daily clothing images is complex and easily affected by occlusion, illumination, and the like. 3. Clothes are non-rigid objects that deform easily, and varying human poses strongly affect their appearance. 4. The number of online clothing items is huge, so retrieval must be both fast and efficient.
Clothing image retrieval has become a very important research direction in the image retrieval field, and many organizations at home and abroad have invested in clothing retrieval research with good results:
The machine learning and vision group led by Professor Yan Shuicheng at the National University of Singapore has long been at the forefront of clothing retrieval. The group first proposed the concept of cross-scenario clothing retrieval, together with a corresponding solution for cross-scenario retrieval and clothing style recognition. Cross-scenario clothing retrieval means that the query clothing image and the clothing images in the database belong to different scenes. The query clothing image is shot in daily life and has a complex background, whereas the clothing commodity images a user retrieves on the internet are generally shot in a controlled environment and usually have a plain background. In addition, the model in a clothing commodity image typically adopts one of several fixed professional poses, while the poses of people photographed in daily life are flexible and changeable. All of these factors make conventional content-based image retrieval no longer suitable for clothing images. The same team also built Magic Closet, a clothing recommendation system for different occasions, laying a foundation for clothing recommendation, and has since successfully applied deep learning to clothing retrieval with very good results. The University of California and eBay collaborated on fine-grained clothing style recognition and retrieval, making fine-grained retrieval of clothing images possible. Among domestic institutions, Professor Lu Hanqing's team at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, has also achieved notable results in clothing retrieval. In addition, Tsinghua University, Fudan University, Shanghai Jiao Tong University, and others have developed content-based image retrieval systems that, although not aimed at clothing images, have contributed to the development of clothing image retrieval techniques.
At present, clothing retrieval methods fall into two types: retrieval based on category keywords and retrieval based on image content. Because clothing images contain various complex and changeable attributes, clothing is difficult to describe accurately with text. Retrieval based on image content is therefore the key approach to accurate clothing retrieval.
In early content-based clothing retrieval, most work located the face by face recognition and approximated the clothing position and area from the face position and size. Almost all such clothing retrieval focused only on specific categories and generalized poorly. A universal clothing retrieval framework has been proposed only in recent years, mainly determining clothing regions through human pose estimation or skeleton point extraction. In that line of work, a human body region is obtained with pose estimation, features are extracted from the torso for retrieval, clothing attributes are then learned from local key regions, and the attributes are used to optimize the retrieval results; this preliminarily realized cross-scenario clothing retrieval, but the cross-domain problem still remains in the training of the style classifiers. In recent years, with the development of deep learning, it has been applied successfully to image classification, object recognition, image retrieval, and other fields. Deep neural networks extract abstract image features at different levels, from shallow to deep, through multilayer convolutional networks, expressing images vividly and solving the problem of nonlinear image representation. Many scholars have applied deep learning to clothing classification and retrieval with good results. However, depth features have high dimensionality, are not well suited to retrieval over large data, and cannot by themselves solve cross-scene clothing retrieval. Existing methods separate clothing from complex backgrounds by segmentation, but the background of daily clothing images is complex and changeable and human poses are highly flexible, so clothing regions are difficult to segment accurately.
With the continuous development of electronic commerce in China, the demand for clothing retrieval keeps rising, so an accurate cross-scene clothing image retrieval method is necessary.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a cross-scene clothing retrieval method based on image rendering. The method is based on depth features of clothing images; by rendering the clothing commodity images, which have plain backgrounds and uniform lighting, and adding daily backgrounds, it solves the cross-scene retrieval problem, substantially improves the precision of clothing retrieval and fine-grained style recognition, and has positive significance for perfecting clothing retrieval systems.
The technical solution is as follows:
a cross-scene clothing retrieval method based on image rendering comprises the following steps:
a, eliminating the difference between the clothing commodity images in the database and daily clothing images. Daily clothing images have complex scene illumination and random human poses, while online clothing commodity images have plain backgrounds and fixed model poses. Therefore, the clothing commodity images are rendered and daily scene backgrounds are superimposed;
b, clothing style learning and training. Human pose estimation is performed on the clothing commodity images to determine local clothing patches. Each local clothing patch is first normalized to 48 × 48, and then RGB color features and HOG features are extracted to jointly represent the patch, so that the color and shape features of the local clothing regions jointly represent the clothing. Classification learning is performed with sparse coding for the different local clothing regions according to the annotated clothing attributes, and corresponding clothing style recognition models are obtained;
c, clothing retrieval. First, the human torso region of each clothing commodity image is determined by pose estimation, and depth features representing the clothing torso are extracted with a VGG model trained on ImageNet. When a daily clothing image is input to search for similar clothing commodity images, pose estimation is performed on it, its torso region is determined, and depth features are extracted. The distances between the features of the input image and the features of each image in the database are then compared and sorted;
d, optimizing the clothing retrieval result. The style of the input clothing image is recognized, and the retrieval result is reordered and optimized according to the style to obtain the final clothing retrieval result.
In step a, the clothing images in the commodity database are rendered with different lighting to simulate different illumination in daily life. Because the images in the commodity database generally have plain, uniform backgrounds, the human body regions can be segmented with a saliency method and superimposed onto different daily scene images, eliminating the difference from daily clothing images and expanding the database.
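As an illustration only, this rendering-and-compositing step can be sketched in Python as follows (assuming OpenCV-Python and NumPy; saliency_mask stands in for any saliency segmentation that yields a 0/1 foreground mask, and the illumination gains are illustrative choices, not values fixed by the invention):

    import cv2
    import numpy as np

    def render_variants(product_img, saliency_mask, backgrounds,
                        gains=(0.7, 1.0, 1.3)):
        """Simulate daily illumination with simple global gains and superimpose
        daily-scene backgrounds onto the saliency-segmented human region."""
        h, w = product_img.shape[:2]
        mask3 = np.repeat(saliency_mask[:, :, None].astype(np.float32), 3, axis=2)
        variants = []
        for g in gains:  # crude stand-in for rendering with different light
            lit = np.clip(product_img.astype(np.float32) * g, 0, 255)
            for bg in backgrounds:
                bg_resized = cv2.resize(bg, (w, h)).astype(np.float32)
                composite = mask3 * lit + (1.0 - mask3) * bg_resized
                variants.append(composite.astype(np.uint8))
        return variants  # rendered cross-scene images that expand the database

Each commodity image thus yields one expanded training image per (gain, background) combination.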
In step b, to train the clothing style recognizers, 11 attributes related to clothing style are manually annotated: color, texture, material, placket style, collar type, sleeve length, trouser length, skirt shape, age, gender, and whether a tie is worn. For each style attribute a corresponding classifier is trained, 11 in total.
In step c, the VGG16 model trained on the ImageNet data set is used to extract depth features representing the clothing, and the distance between the features of the input image and the features of each image in the database is the Euclidean distance.
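A minimal Python sketch of this step, assuming PyTorch/torchvision (≥ 0.13) and that torso crops are already available from pose estimation; the choice of the 4096-dimensional fc7 activation as the depth feature is an assumption made for illustration:

    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    weights = models.VGG16_Weights.IMAGENET1K_V1
    vgg = models.vgg16(weights=weights).eval()
    # Drop the final class-score layer; keep activations up to fc7.
    fc7_head = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def torso_descriptor(torso_crop):
        """torso_crop: a PIL image of the torso region -> 4096-d depth feature."""
        x = preprocess(torso_crop).unsqueeze(0)
        with torch.no_grad():
            conv = vgg.avgpool(vgg.features(x)).flatten(1)
            return fc7_head(conv).squeeze(0)

    def rank_database(query_feat, db_feats):
        """Euclidean distances, sorted from small to large (most similar first)."""
        dists = torch.cdist(query_feat.unsqueeze(0), db_feats).squeeze(0)
        return torch.argsort(dists)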
In step d, on the basis of the initial retrieval, the results are reordered according to attribute correspondence: the 11 attributes are compared and clothing whose attributes do not match is removed.
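A sketch of this reordering step, assuming each retrieval result carries its 11 predicted attribute values as a Python dict (this data layout is an assumption made for illustration):

    def rerank_by_attributes(query_attrs, results, keep=20):
        """results: (distance, attrs) pairs from the initial torso-feature
        retrieval, sorted by ascending Euclidean distance; items whose
        attributes contradict the query's predicted attributes are removed
        and the top `keep` survivors are returned."""
        def matches(attrs):
            return all(attrs.get(k) == v for k, v in query_attrs.items())
        return [r for r in results if matches(r[1])][:keep]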
The invention has the following beneficial technical effects: the method eliminates the difference between daily clothing images and clothing commodity images, solves cross-scene clothing retrieval and fine-grained style recognition, is not affected by human pose, and ensures the accuracy of cross-scene clothing retrieval.
Drawings
The invention will be further described with reference to the following detailed description and accompanying drawings:
FIG. 1 is a block diagram illustrating the overall flow of an embodiment of the present invention.
Fig. 2 is a schematic view of a sample garment style identification process according to the present invention.
FIG. 3 is a schematic diagram of a VGG model in the present invention.
Detailed Description
With reference to figures 1, 2 and 3, the basic idea of the invention is to divide the whole clothing retrieval task into three parts according to the actual situation of daily clothing retrieval. First, before retrieval, the clothing commodity images are rendered with different illumination and daily-life backgrounds are superimposed, eliminating the difference between the clothing commodity images in the database and daily clothing images while also expanding the database. Next, pose estimation is used to determine local clothing regions and extract their features, and learning and training through sparse coding yields the corresponding clothing style recognizers, used to optimize results in the subsequent retrieval stage. In the retrieval stage, depth features are extracted from the clothing torso for preliminary retrieval, the trained style classifiers are then used for attribute recognition, and the retrieval result is reordered and optimized according to the attributes to obtain the final result. The method adapts to cross-scene retrieval and style recognition and improves retrieval precision to a considerable extent.
For a better understanding of the present invention, some of the abbreviations involved are defined as:
HOG: Histogram of Oriented Gradients
RGB: RGB color histogram
ImageNet: containing about 120 million training images, 5 million verification images, and 10 million test images, into 1000 different categories of data sets.
VGG model: the VGG model is a depth model framework and comprises 5 combined convolution layers, 2 layers of full convolution image features and one layer of full convolution class features.
The method specifically comprises the following three aspects:
(1) Eliminating the difference between the clothing commodity images in the database and daily clothing images, using the method of rendering and superimposing backgrounds; the flow is shown in figure 2.
(2) A sparse-coding clothing style learning algorithm: a recognition model is learned from features describing the image, and during recognition it outputs, for the local clothing region to be recognized, the probability that the clothing belongs to each style (a minimal sketch follows this list).
(3) Clothing retrieval. In the retrieval stage, depth features of the clothing torso are extracted for retrieval, the style of the clothing image is then recognized, and the retrieval result is reordered and optimized according to the style to obtain the final result.
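A minimal Python sketch of the sparse-coding style learner referenced in part (2), assuming scikit-learn; the dictionary size, sparsity penalty, and the use of logistic regression on top of the sparse codes are illustrative assumptions, since the text specifies only sparse coding with per-region classifiers:

    from sklearn.decomposition import DictionaryLearning
    from sklearn.linear_model import LogisticRegression

    def train_style_classifier(patch_feats, style_labels, n_atoms=256, alpha=0.1):
        """patch_feats: (n_patches, d) joint RGB+HOG patch features;
        style_labels: per-patch annotations for one style attribute."""
        coder = DictionaryLearning(n_components=n_atoms, transform_alpha=alpha,
                                   transform_algorithm='lasso_lars')
        codes = coder.fit_transform(patch_feats)       # sparse codes
        clf = LogisticRegression(max_iter=1000).fit(codes, style_labels)
        return coder, clf

    def style_probabilities(coder, clf, patch_feat):
        """Probability that the clothing patch belongs to each style."""
        code = coder.transform(patch_feat.reshape(1, -1))
        return clf.predict_proba(code)[0]

One (coder, classifier) pair would be trained per clothing region and style attribute, 11 in total.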
The invention is mainly realized by the following steps:
the method comprises the steps of rendering the clothing commodity image, namely segmenting a human body from the clothing commodity image in the database, rendering by using different illumination, superimposing daily life backgrounds, generating a series of clothing images with different illumination and different backgrounds, eliminating the difference between the clothing commodity image in the database and the daily clothing image, and expanding the database.
In the above step, because clothing commodity images have simple backgrounds, the human body area can be segmented with a saliency method based on image color contrast and a center prior; after simple rendering with different lighting and superimposition of daily-life backgrounds, the difference between the clothing commodity images in the database and daily clothing images is eliminated, making accurate cross-scene clothing retrieval and style recognition possible.
Clothing style learning and training: first, local clothing regions are determined with human pose estimation, and the color and shape features of each local region are extracted. For the different local clothing regions, learning and training through sparse coding yields corresponding clothing style recognition models, which are used for clothing style classification.
In the above step, to determine each local region of the human body with pose estimation, models of the upper body and the lower body are learned: 19 parts for the upper body and 11 for the lower body. For better recognition, all body part patches are normalized to 48 × 48. Since local style emphasizes shape and color attributes, 96-dimensional RGB color histogram features and 279-dimensional HOG features are extracted from the 30 local body patches to jointly represent a clothing patch (a sketch of this descriptor follows). Fine-grained attribute annotation is performed on the clothing images in the commodity image database, and training according to the annotated attributes yields the corresponding style recognition models. One model is trained per clothing region: the upper body covers the neck region (collar type), left and right shoulders (sling or vest), left and right elbows (half sleeves), and left and right wrists (long sleeves); the lower body covers the left and right knees (shorts) and the left and right ankles (long trousers).
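A sketch of this joint patch descriptor, assuming scikit-image and OpenCV-Python; the exact HOG parameterization that yields 279 dimensions is not specified here, so the HOG settings below are illustrative (they give a different dimensionality), while 32 bins per channel reproduces the stated 96 color dimensions:

    import cv2
    import numpy as np
    from skimage.feature import hog

    def patch_descriptor(patch_bgr):
        patch = cv2.resize(patch_bgr, (48, 48))        # normalize patch size
        # RGB color histogram: 32 bins per channel -> 96 dimensions.
        rgb_hist = np.concatenate([
            np.histogram(patch[:, :, c], bins=32, range=(0, 256))[0]
            for c in range(3)
        ]).astype(np.float32)
        rgb_hist /= rgb_hist.sum() + 1e-8
        gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
        hog_feat = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                       cells_per_block=(2, 2), feature_vector=True)
        return np.concatenate([rgb_hist, hog_feat])    # joint RGB+HOG descriptor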
In the above step, pose estimation trains two human body pose models, one for the upper body and one for the lower body. The skeleton point objective is of the standard mixtures-of-parts form

    S(I, p, t) = Σ_i w_i^{t_i} · φ(I, p_i) + Σ_{(i,j)∈E} w_{ij}^{t_i t_j} · ψ(p_i − p_j)

where t represents the type of a skeleton point (i.e., to which skeleton point it belongs), w is a parameter of the model, p represents the coordinate location of a skeleton point, φ(I, p_i) is the feature function evaluated at p_i, and ψ(p_i − p_j) encodes the relative placement of connected skeleton points (i, j) ∈ E.
In the clothing retrieval step, human pose estimation is used to determine the human torso region of each clothing commodity image, and features are extracted for similarity retrieval.
Because the clothing torso region is large and highly variable, the torso part is represented by depth features extracted with a VGG model trained on ImageNet.
When a daily clothing image is input to search for similar clothing commodity images, pose estimation is performed on it, the torso region is determined, and depth features are extracted. The Euclidean distances between the features of the input image and the features of each image in the database are then computed, and the results are sorted by distance from small to large.
In the clothing style recognition step, pose estimation is first performed on the input daily clothing image with the trained human body models, each local clothing region is normalized to a 48 × 48 patch, and the 96-dimensional RGB color histogram features and 279-dimensional HOG features are jointly extracted. The features of the different regions are input into the corresponding 11 classifiers to judge each aspect of the clothing style.
Finally, the retrieval result is reordered using the clothing style. Because clothing shape is strongly affected by human pose, only stable torso features are extracted for the preliminary retrieval. The result of the torso feature retrieval is then optimized using the clothing attributes obtained from local style recognition: clothing whose attributes do not match is removed, and the top-ranked 20 clothing images whose attributes correspond are kept.
Based on image rendering and background superimposition, the method eliminates the difference between online commodity clothing images and daily-life clothing images, determines the human body region with pose estimation, retrieves with torso clothing features, recognizes local clothing styles with the widely used sparse-coding machine learning algorithm, optimizes the retrieval result, and accomplishes accurate cross-scene clothing retrieval.
Technical content not described above can be implemented by adopting or referring to the prior art.
It is noted that those skilled in the art, having the benefit of the teachings of this specification, may make changes similar to those described above or other obvious modifications. All such variations are intended to be within the scope of the present invention.

Claims (4)

1. A cross-scene clothing retrieval method based on image rendering is characterized by comprising the following steps:
a, because daily clothing images have complex scene illumination and random human poses, while online clothing commodity images have plain backgrounds and fixed model poses, the clothing commodity images and daily clothing images differ greatly and belong to different scenes; therefore, the clothing commodity images are first rendered and daily scene backgrounds are superimposed;
b, classifying fine-grained clothing styles, namely determining local clothing regions with pose estimation, then extracting the color and shape features of the local clothing regions to jointly represent the clothing features, performing classification learning with sparse coding for the different local clothing regions, and correspondingly obtaining corresponding clothing style recognition models;
c, a clothing retrieval step: on the clothing torso part determined by pose estimation, depth features are extracted with a VGG model trained on ImageNet for preliminary clothing retrieval; attribute recognition is performed with the trained style classifiers, and the retrieval result is reordered and optimized according to the attributes to obtain the final clothing retrieval result.
2. The cross-scene clothing retrieval method based on image rendering as claimed in claim 1, wherein: in the step a, the human body is segmented from each clothing commodity image using saliency, rendered with different illumination, and superimposed onto daily-life backgrounds, generating a series of clothing images with different illumination and backgrounds, eliminating the difference between the clothing commodity images and daily clothing images in the database, and expanding the database.
3. The cross-scene clothing retrieval method based on image rendering as claimed in claim 2, wherein: in the step b, to determine each local area of the human body, models of the upper body and the lower body are learned, with 19 upper body parts and 11 lower body parts; because local style emphasizes shape and color attributes, 96-dimensional RGB color features and 279-dimensional HOG features are extracted from the 30 local body patches to jointly represent clothing patches; training and learning according to the annotated clothing attributes yields the corresponding clothing style recognition models.
4. The cross-scene clothing retrieval method based on image rendering as claimed in claim 3, wherein: in the step c, on the clothing torso part determined by pose estimation, depth features are extracted with a VGG model trained on ImageNet for preliminary clothing retrieval; style recognition is performed with the trained style classifiers, the retrieval result is reordered and optimized according to the attributes, and clothing images whose attributes do not correspond are excluded.
CN201810015943.XA 2018-01-08 2018-01-08 Cross-scene clothing retrieval method based on image rendering Expired - Fee Related CN108109055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810015943.XA CN108109055B (en) 2018-01-08 2018-01-08 Cross-scene clothing retrieval method based on image rendering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810015943.XA CN108109055B (en) 2018-01-08 2018-01-08 Cross-scene clothing retrieval method based on image rendering

Publications (2)

Publication Number Publication Date
CN108109055A CN108109055A (en) 2018-06-01
CN108109055B true CN108109055B (en) 2021-04-30

Family

ID=62218563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810015943.XA Expired - Fee Related CN108109055B (en) 2018-01-08 2018-01-08 Cross-scene clothing retrieval method based on image rendering

Country Status (1)

Country Link
CN (1) CN108109055B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583481B (en) * 2018-11-13 2021-08-10 杭州电子科技大学 Fine-grained clothing attribute identification method based on convolutional neural network
US11321769B2 (en) * 2018-11-14 2022-05-03 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for automatically generating three-dimensional virtual garment model using product description
KR20200092265A (en) * 2019-01-24 2020-08-03 삼성전자주식회사 Electronic device and operating methods for the same
CN110147733B (en) * 2019-04-16 2020-04-14 北京航空航天大学 Cross-domain large-range scene generation method
CN111951080A (en) * 2020-09-22 2020-11-17 肆嘉(上海)商务咨询有限公司 Method and system for integrating artificial intelligence into platform
CN113869435A (en) * 2021-09-30 2021-12-31 北京爱奇艺科技有限公司 Image processing method, image processing device, clothing identification method, clothing identification device, equipment and storage medium
CN115222862B (en) * 2022-06-29 2024-03-01 支付宝(杭州)信息技术有限公司 Virtual human clothing generation method, device, equipment, medium and program product


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104992179A (en) * 2015-06-23 2015-10-21 浙江大学 Fine-grained convolutional neural network-based clothes recommendation method
CN107463762A (en) * 2016-06-03 2017-12-12 阿里巴巴集团控股有限公司 A kind of man-machine interaction method, device and electronic equipment
CN106126579A (en) * 2016-06-17 2016-11-16 北京市商汤科技开发有限公司 Object identification method and device, data processing equipment and terminal unit
CN106156297A (en) * 2016-06-29 2016-11-23 北京小米移动软件有限公司 Method and device recommended by dress ornament
CN107832773A (en) * 2017-09-22 2018-03-23 华南师范大学 A kind of scene matching method and device
JP2020098441A (en) * 2018-12-18 2020-06-25 ヒロテック株式会社 Arrival order determination method, arrival order determination device and program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Skeleton pruning as trade-off between skeleton simplicity and reconstruction error; Wei Shen et al; Science China Information Sciences; 20160108; pp. 2-10 *
Same-style clothing retrieval *** based on deep neural networks; Luo Xiao; China Master's Theses Full-text Database; 20160115; full text *

Also Published As

Publication number Publication date
CN108109055A (en) 2018-06-01

Similar Documents

Publication Publication Date Title
CN108109055B (en) Cross-scene clothing retrieval method based on image rendering
Kang et al. Complete the look: Scene-based complementary product recommendation
CN109670591B (en) Neural network training method and image matching method and device
Liu et al. Fashion parsing with weak color-category labels
Liu et al. Hi, magic closet, tell me what to wear!
Di et al. Style finder: Fine-grained clothing style detection and retrieval
Hara et al. Fashion apparel detection: the role of deep convolutional neural network and pose-dependent priors
Lin et al. Rapid clothing retrieval via deep learning of binary codes and hierarchical search
CN109614508B (en) Garment image searching method based on deep learning
CN101853295B (en) Image search method
CN106021603A (en) Garment image retrieval method based on segmentation and feature matching
CN106126581A (en) Cartographical sketching image search method based on degree of depth study
CN105260747B (en) Clothing recognition methods based on clothing co-occurrence information and multi-task learning
CN110334687A (en) A kind of pedestrian retrieval Enhancement Method based on pedestrian detection, attribute study and pedestrian's identification
Cychnerski et al. Clothes detection and classification using convolutional neural networks
CN104281572B (en) A kind of target matching method and its system based on mutual information
CN110647906A (en) Clothing target detection method based on fast R-CNN method
CN109145947B (en) Fashion women's dress image fine-grained classification method based on part detection and visual features
Li et al. Cross-scenario clothing retrieval and fine-grained style recognition
Usmani et al. Enhanced deep learning framework for fine-grained segmentation of fashion and apparel
Zhang et al. Warpclothingout: A stepwise framework for clothes translation from the human body to tiled images
Tena et al. Content-based image retrieval for fabric images: A survey
Wang Classification and identification of garment images based on deep learning
CN111159456B (en) Multi-scale clothing retrieval method and system based on deep learning and traditional features
Lasserre et al. Studio2shop: from studio photo shoots to fashion articles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210430