CN107506793B - Garment identification method and system based on weakly labeled image - Google Patents


Info

Publication number
CN107506793B
CN107506793B (application CN201710719635.0A)
Authority
CN
China
Prior art keywords
sample data
weakly
labeled
training
initial model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710719635.0A
Other languages
Chinese (zh)
Other versions
CN107506793A (en)
Inventor
徐卉
程诚
周祥东
石宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Institute of Green and Intelligent Technology of CAS
Original Assignee
Chongqing Institute of Green and Intelligent Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Institute of Green and Intelligent Technology of CAS filed Critical Chongqing Institute of Green and Intelligent Technology of CAS
Priority to CN201710719635.0A priority Critical patent/CN107506793B/en
Publication of CN107506793A publication Critical patent/CN107506793A/en
Application granted granted Critical
Publication of CN107506793B publication Critical patent/CN107506793B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/043Architecture, e.g. interconnection topology based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Fuzzy Systems (AREA)
  • Computational Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a garment identification method and system based on weakly labeled images. The method comprises: performing algorithm training on fully labeled samples to obtain an initial model; obtaining effective information from the fully labeled samples and the weakly labeled samples at the same time, and further training the initial model to obtain a final model; and performing clothing image recognition through the final model. In the method, a deep architecture is constructed from multi-scale Inception modules under a deep learning framework, so that multi-scale information of an image is input more completely; a two-layer sample selection strategy obtains information from fully labeled samples and weakly labeled samples simultaneously and mines hard-to-distinguish negative samples. This improves the accuracy of image recognition and, in particular when the method is applied to the field of clothing recognition, the accuracy of same-style clothing retrieval.

Description

Garment identification method and system based on weakly labeled image
Technical Field
The invention relates to the field of computer application, in particular to a garment identification method and system based on a weakly labeled image.
Background
Deep learning is a family of machine-learning methods that perform representation learning on data; its advantage is that unsupervised or semi-supervised feature learning and efficient hierarchical feature-extraction algorithms replace hand-crafted features. Deep learning is a comparatively new field of machine-learning research whose motivation is to build and simulate neural networks resembling the human brain for analytical learning, interpreting data such as images, sounds and text by imitating the brain's mechanisms.
At present, training deep learning models (for face recognition, clothing recognition and the like) requires a large amount of training data; for example, the DeepFace model (Facebook), the DeepID model (CUHK) and the FaceNet model (Google) each need on the order of hundreds of thousands to millions of images, all of which must be fully labeled, and labeling data at that scale costs a great deal of time and labor. Existing deep-learning algorithms therefore depend on large quantities of manually labeled data. Taking garment image recognition as an example, the training process requires a large amount of garment image data with label information (such as color, season and sleeve length), and both the acquisition and the labeling of the data require much labor and time. Although the Internet has developed rapidly and a large amount of data can be collected from it, such data has no complete label information: the keyword tags found online are incomplete, some label information is even wrong, and the data cannot directly meet the requirements of model training, yet the information in the images can still be exploited. A new method is therefore needed that makes full use of weakly labeled data, improves working efficiency and recognition accuracy, and reduces the waste of resources.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention provides a method and a system for garment identification based on weakly labeled images to solve the above technical problems.
The invention provides a garment identification method based on weakly labeled images, which comprises the following steps:
collecting fully labeled sample data and weakly labeled sample data;
performing algorithm training on the fully labeled sample data to obtain an initial model;
obtaining, at the same time, effective information from the fully labeled sample data and the weakly labeled sample data, and further training the initial model according to the effective information to obtain a final model;
and performing garment image recognition through the final model.
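The steps above can be sketched as a minimal pipeline. This is an illustrative outline only, not the patent's implementation; the function names (`train_initial`, `select_valid_info`, `fine_tune`) and the toy data are our own:

```python
# Illustrative sketch of the three-stage training pipeline described above.
def train_initial(fully_labeled):
    # Stage 1: train an initial model on fully labeled samples only.
    return {"classes": sorted({y for _, y in fully_labeled})}

def select_valid_info(model, fully_labeled, weakly_labeled):
    # Stage 2: draw effective information from both sample sets; the patent
    # screens weak samples by weight, here we only keep known classes.
    return fully_labeled + [s for s in weakly_labeled if s[1] in model["classes"]]

def fine_tune(model, samples):
    # Stage 3: further train the initial model into the final model.
    return {"classes": model["classes"], "n_train": len(samples)}

full = [("imgA", "shirt"), ("imgB", "skirt")]
weak = [("imgC", "shirt"), ("imgD", "unknown")]
initial = train_initial(full)
model = fine_tune(initial, select_valid_info(initial, full, weak))
```

The noisy weak sample ("imgD") is dropped, so the final model trains on three samples.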
Further, the initial model has a multilayer structure comprising at least a plurality of convolution layers, inception layers, pooling layers and fully connected layers; the outputs of the inception layers are merged and added to the last fully connected layer to form a multi-scale convolutional neural network structure.
Further, the initial model also comprises a softmax loss layer for feature classification and a triplet loss layer for similarity ranking, and the fully labeled samples are trained through a mixed loss function of feature classification and similarity ranking to establish the initial model.
Further, the fully labeled sample data is trained through a mixed loss function of feature classification and similarity ranking, where the mixed loss function is:

L(d_1, d_2, d_3) = L_cls(d_1, d_2, d_3) + λ·L_rank(d_1, d_2, d_3)

where λ is a weight parameter between the feature classification signal and the similarity ranking signal, L_cls(d_1, d_2, d_3) is the feature classification loss function, L_rank(d_1, d_2, d_3) is the similarity ranking loss function, and d_1, d_2, d_3 are the training samples.
Further, obtaining effective information from the fully labeled samples and the weakly labeled samples at the same time includes:
inputting a plurality of fully labeled sample data sets and weakly labeled sample data sets as training sets into the initial model for training;
obtaining, respectively, the weights of the fully labeled sample data and of the weakly labeled sample data in the training set, together with a weight threshold;
and comparing the weight of each weakly labeled sample with the weight threshold so as to screen the weakly labeled samples in the weakly labeled sample data set.
Further, the weight threshold is obtained by the following formula:

d_c = (1 / (M_c · N)) · Σ_{i=1}^{M_c} Σ_{p∈S_i} w(p)

where d_c is the weight threshold, M_c is the number of classes of the fully labeled dataset, S_i is the dataset in which category i is located, w(p) is the weight of training sample p in the training set, and N is the number of images contained in each category.
Further, obtaining effective information from the fully labeled samples and the weakly labeled samples at the same time further includes: determining confusable class pairs in the training set of the initial model, obtaining their ambiguity through the initial model, and performing a second round of training of the initial model according to the ambiguity.
Further, the ambiguity (an entry of the fuzzy matrix) is obtained by the following formula:

f_ij = N_{i→j} / N_i

where N_i is the number of samples of class i, N_{i→j} is the number of samples of class i misjudged as class j by the initial model, and f_ij is the ambiguity between categories i and j.
The invention also provides a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the above-described method.
The invention also provides a clothing identification system based on weakly labeled images, which comprises:
an acquisition unit for acquiring fully labeled sample data and weakly labeled sample data;
a memory for storing a computer program;
a processor for executing a computer program stored in the memory to cause the garment identification system based on the weakly labeled image to perform the method.
The beneficial effects of the invention are as follows: in the image identification method, a deep architecture is constructed from multi-scale Inception modules under a deep learning framework, so that multi-scale information of an image is input more completely; information is obtained simultaneously from fully labeled samples and weakly labeled samples through a two-layer sample selection strategy, and hard-to-distinguish negative samples are mined. This improves the accuracy of image identification and, in particular when the method is applied to the field of clothing identification, the accuracy of same-style clothing retrieval.
Drawings
Fig. 1 is a schematic diagram of an image recognition method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a sample selection framework in the image recognition method according to the embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an image recognition system in an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
As shown in fig. 1, the image recognition method in this embodiment includes:
collecting completely labeled sample data A and weakly labeled sample data B;
carrying out algorithm training on the completely labeled sample data A to obtain an initial model;
meanwhile, effective information is obtained from the completely labeled sample data A and the weakly labeled sample data B, the initial model is further trained according to the effective information, and a final model is obtained;
and performing garment image recognition through the further-trained final model.
In the present embodiment, because image recognition is performed by deep learning, training the model is the most important step. Taking garment identification as an example, a large amount of garment image data with label information (such as color, season and sleeve length) is needed in the training process, and acquiring and labeling such data takes a great deal of labor and time. The Internet now makes a large amount of data available, but that data has no complete label information: the keyword tags found online are incomplete, some label information is even wrong, and the data cannot directly meet the requirements of model training, although the information in the images can still be used. In order to make full use of the weakly labeled data, this embodiment first performs algorithm training on a limited amount of fully labeled sample data A to obtain an initial Multi-scale Inception CNN (MICNN) model that captures a large amount of multi-scale information for image recognition; it then adopts a simple and effective two-layer sample selection strategy that makes full use of both sample A and sample B to fine-tune (further train) the MICNN model for image recognition, and in particular for clothing recognition.
Taking garment identification as an example, the deep Multi-scale Inception Convolutional Neural Network (MICNN) in this embodiment acquires multi-scale information by integrating the features of each layer. The initial MICNN model is first trained with the fully labeled garment pictures; a two-layer sample selection strategy is then applied on the basis of this initial model, and the weakly labeled samples are used to fine-tune it (i.e., to train further on the basis of the initial model). The two-layer sample selection strategy operates at a sample level and a category level: at the sample level, the correlation between fully labeled and weakly labeled samples is modeled for anomaly detection; at the category level, a fuzzy matrix is estimated through the current MICNN model, and sample pairs from easily misjudged classes are selected for a second round of training.
The initial model in this embodiment is a multi-layer MICNN network whose structure comprises at least a plurality of convolution layers, inception layers, pooling layers and fully connected layers; the outputs of the inception layers are merged and added to the last fully connected layer to form a multi-scale convolutional neural network structure. Through this multi-scale Inception CNN framework, the multi-scale information relevant to clothing recognition is exploited more effectively. Specifically, the MICNN model is trained by minimizing both a softmax loss function (for feature classification) and a triplet loss function (for similarity ranking). The overall MICNN network comprises 2 convolution layers, 9 inception layers, 5 pooling layers, 3 fully connected layers, 1 softmax loss layer and 1 triplet loss layer. The outputs of the 9 inception layers are merged and added to the last fully connected layer. Following this network structure, the initial MICNN model is trained with the limited sample A.
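The merge pattern described above (outputs of the 9 inception stages pooled and concatenated before the last fully connected layer) can be sketched with toy tensors. The shapes, channel counts and class count below are illustrative assumptions, not the patent's actual layer sizes; only the merge pattern follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake feature maps from 9 inception stages: (channels, height, width).
stage_outputs = [rng.standard_normal((64, 7, 7)) for _ in range(9)]

# Pool each stage's feature map down to one 64-d vector (global average pool).
pooled = [fm.mean(axis=(1, 2)) for fm in stage_outputs]

# Merge: concatenate the multi-scale vectors (9 * 64 = 576 dims) ...
merged = np.concatenate(pooled)

# ... and feed them to the final fully connected layer (c classes, here c = 50).
W = rng.standard_normal((50, merged.size)) * 0.01
logits = W @ merged
```

The last fully connected layer thus sees features from every scale at once, which is what gives the network its multi-scale character.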
In this embodiment, assume a set of training samples

D = {(x_i, y_i)}_{i=1}^{n}

where each sample x_i ∈ R^m is associated with a category y_i ∈ {1, 2, ..., c}; c is the number of classes, x_i is the feature vector of a picture sample, y_i is the category corresponding to x_i, and m is the dimension of the features, i.e., each picture is described by m features. The mixed loss function of feature classification and similarity ranking is as follows:
L(d_1, d_2, d_3) = L_cls(d_1, d_2, d_3) + λ·L_rank(d_1, d_2, d_3),

where λ is a weight parameter between the feature classification signal L_cls and the similarity ranking signal L_rank, and L_cls and L_rank are defined as follows:

L_cls(d_1, d_2, d_3) = cls(x_1, y_1) + cls(x_2, y_2) + cls(x_3, y_3),

L_rank(d_1, d_2, d_3) = max{0, ‖f(x_1) − f(x_2)‖² − ‖f(x_1) − f(x_3)‖² + Δ}.

The feature classification loss function L_cls(d_1, d_2, d_3) is defined over a triplet of samples, and the classification signal cls(x_i, y_i) = −log p_i is the standard cross-entropy (logarithmic) loss, where p_i is the target probability distribution: p_i = 1 when y_i is the target class and p_i = 0 otherwise. The similarity ranking loss function L_rank(d_1, d_2, d_3) is computed over a large number of triplets; each triplet contains three image samples x_1, x_2 and x_3, where x_1 and x_2 are two images of the same garment, x_3 is an image of another garment, and Δ is a parameter that adjusts the distance separation between the two groups of samples (set to 0.2 in this embodiment). This mixed loss function of feature classification and similarity ranking makes the spacing between samples of the same class smaller than the spacing between samples of different classes.
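A minimal numpy sketch of this mixed loss follows. The cross-entropy term and the margin Δ = 0.2 come from the text above; the helper names, the toy probabilities and the toy embeddings are our own assumptions:

```python
import numpy as np

def cls_loss(probs, target):
    # Standard log loss: -log p_i of the target class.
    return -np.log(probs[target])

def rank_loss(f1, f2, f3, delta=0.2):
    # Triplet term: pull x1 and x2 (same garment) together, push x3 away.
    d_pos = np.sum((f1 - f2) ** 2)
    d_neg = np.sum((f1 - f3) ** 2)
    return max(0.0, d_pos - d_neg + delta)

def mixed_loss(probs_triplet, targets, feats, lam=1.0):
    # L = L_cls + lambda * L_rank over one triplet of samples.
    l_cls = sum(cls_loss(p, y) for p, y in zip(probs_triplet, targets))
    l_rank = rank_loss(*feats)
    return l_cls + lam * l_rank

# Toy triplet: confident classifier, well-separated embeddings.
probs = [np.array([0.9, 0.1])] * 3
f1, f2 = np.array([0.0, 0.0]), np.array([0.1, 0.0])   # same garment
f3 = np.array([5.0, 5.0])                              # different garment
loss = mixed_loss(probs, [0, 0, 1], (f1, f2, f3))
```

With the negative sample far away the ranking term vanishes and only the classification term contributes.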
In this embodiment, the training set of weakly labeled data contains much noise, so sample selection is critical to whether the deep CNN model can be trained effectively with these samples. Sample selection is also an effective means of improving the convergence rate of deep learning. This embodiment adopts a two-layer training-sample selection strategy and uses sample A and sample B to fine-tune (further train) the MICNN model. The overall structure of the sample selection strategy is shown in fig. 2: high-quality samples are selected at both the sample level and the category level.
In this embodiment, a dataset of M categories, each containing N images, is selected as the training set and input to the initial model, where M = M_c + M_w; M_c denotes the number of classes of the fully labeled dataset and M_w the number of classes of the weakly labeled dataset; S_i denotes the dataset in which category i is located; p and q denote two clothing images; f(p) denotes the 128-dimensional feature obtained from the current MICNN model; d_t(p, q) denotes the Euclidean distance between f(p) and f(q); and c_p denotes the category to which image p belongs, p being contained in sample A.
The weight of a training sample p in the training set is:

w(p) = (1 / (N − 1)) · Σ_{q∈S_{c_p}, q≠p} d_t(p, q)
after weight screening in sample A, a threshold d is calculatedc
Figure GDA0002767904660000053
In the training stage, after the weight of a sample in B is calculated, the sample is ignored if its weight exceeds 2·d_c. After this screening, N′ images remain per class, giving N′ × M × (M − 1) × N′ × (N′ − 1) candidate triplets, of which preferably the top 20% are picked for training.
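The sample-level screening can be sketched as follows, under our assumption that a sample's weight w(p) is its average Euclidean distance to the other samples of its class (the original weight formula is given only as an image). Noisy weak samples then receive large weights and are dropped when w(p) > 2·d_c:

```python
import numpy as np

def sample_weight(p, same_class_feats):
    # Average distance from p to the other samples of its class.
    others = [q for q in same_class_feats if q is not p]
    return float(np.mean([np.linalg.norm(p - q) for q in others]))

def weight_threshold(classes):
    # d_c: mean weight over every sample of every fully labeled class.
    weights = [sample_weight(p, feats) for feats in classes for p in feats]
    return float(np.mean(weights))

# Two tight, fully labeled classes -> small threshold d_c.
cls_a = [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([0.0, 0.1])]
cls_b = [np.array([5.0, 5.0]), np.array([5.1, 5.0]), np.array([5.0, 5.1])]
d_c = weight_threshold([cls_a, cls_b])

# Weakly labeled candidates for class A: one inlier, one noisy outlier.
inlier, outlier = np.array([0.05, 0.05]), np.array([9.0, 9.0])
keep = [x for x in (inlier, outlier)
        if sample_weight(x, cls_a + [x]) <= 2 * d_c]
```

The outlier's average distance far exceeds 2·d_c, so only the inlier survives the screening.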
In this embodiment, in order to improve the convergence rate of model training, confusable class pairs in the training set of the initial model are identified: a fuzzy matrix (with entries f_ij ∈ R^{c×c}) is estimated through the current MICNN model and used to select samples for the second round of training. The ambiguity between categories i and j is:

f_ij = N_{i→j} / N_i

where N_i is the number of samples of class i and N_{i→j} is the number of samples of class i misjudged as class j by the MICNN model.
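The class-level statistic f_ij = N_{i→j}/N_i can be computed directly from the current model's predictions; a small sketch with toy labels (our own) follows:

```python
import numpy as np

def ambiguity_matrix(y_true, y_pred, c):
    # counts[i, j] = N_{i -> j}: samples of class i predicted as class j.
    counts = np.zeros((c, c))
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    n_i = counts.sum(axis=1, keepdims=True)   # N_i per true class
    return counts / np.maximum(n_i, 1)        # f_ij = N_{i->j} / N_i

y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 1, 0]
F = ambiguity_matrix(y_true, y_pred, 2)
```

Class 0 is misjudged as class 1 half the time, so the pair (0, 1) would be flagged as confusable and emphasized in the second training round.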
The present embodiment also provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements any of the methods in the present embodiments.
The present embodiment further provides an electronic terminal, including: a processor and a memory;
the memory is used for storing computer programs, and the processor is used for executing the computer programs stored by the memory so as to enable the terminal to execute the method in the embodiment.
The computer-readable storage medium in the present embodiment can be understood by those skilled in the art as follows: all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The present embodiment also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements any one of the methods for clothing identification based on weakly labeled images in the present embodiment.
The embodiment further provides a garment identification system based on the weakly labeled image, which includes:
an acquisition unit for acquiring fully labeled sample data and weakly labeled sample data;
a memory for storing a computer program;
a processor for executing the computer program stored in the memory to make the garment identification system based on the weak labeled image execute any one of the garment identification methods based on the weak labeled image as in the embodiment.
In this embodiment, the memory may include a random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Those skilled in the art will understand that this embodiment merely takes clothing identification as an example: the method can make full use of pictures of people's clothing taken in the various scenes and environments found on the Internet, greatly improving the clothing recognition rate, and can be applied commercially to the analysis of clothing shoppers, clothing recommendation and outfit matching. In practice, the method of this embodiment may also be applied to other image recognition fields, for example to suspect detection in security settings such as train stations and airports; these examples should not be taken as limiting, and further description is omitted.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (9)

1. A clothing identification method based on a weakly labeled image is characterized by comprising the following steps:
collecting fully labeled sample data and weakly labeled sample data;
performing algorithm training on the fully labeled sample data to obtain an initial model;
obtaining, at the same time, effective information from the fully labeled sample data and the weakly labeled sample data, and further training the initial model according to the effective information to obtain a final model;
and performing clothing image recognition through the final model;
wherein obtaining effective information from the fully labeled sample data and the weakly labeled sample data at the same time includes:
inputting a plurality of fully labeled sample data sets and weakly labeled sample data sets as training sets into the initial model for training;
obtaining, respectively, the weights of the fully labeled sample data and of the weakly labeled sample data in the training set, together with a weight threshold;
and comparing the weight of the weakly labeled sample data with the weight threshold, and screening the weakly labeled sample data in the weakly labeled sample data set.
2. The garment identification method based on the weakly labeled image as claimed in claim 1, wherein: the initial model has a multilayer structure comprising at least a plurality of convolution layers, inception layers, pooling layers and fully connected layers, and the outputs of the inception layers are merged and added to the last fully connected layer to form a multi-scale convolutional neural network structure.
3. The garment identification method based on the weakly labeled image as claimed in claim 2, wherein: the initial model further comprises a softmax loss layer used for feature classification and a triplet loss layer used for similarity ranking, and the fully labeled sample data is trained through a mixed loss function of feature classification and similarity ranking to build the initial model.
4. The garment identification method based on the weakly labeled image as claimed in claim 3, wherein: training fully labeled sample data through a mixed loss function of feature classification and similarity ranking, wherein the mixed loss function of feature classification and similarity ranking is as follows:
L(d_1, d_2, d_3) = L_cls(d_1, d_2, d_3) + λ·L_rank(d_1, d_2, d_3)

where λ is a weight parameter between the feature classification signal and the similarity ranking signal, L_cls(d_1, d_2, d_3) is the feature classification loss function, L_rank(d_1, d_2, d_3) is the similarity ranking loss function, d_1, d_2, d_3 are the training samples, and L(d_1, d_2, d_3) is the mixed loss function of feature classification and similarity ranking.
5. The method for identifying clothing based on weakly labeled images as claimed in claim 1, wherein the weight threshold is obtained by the following formula:
d_c = (1 / (M_c · N)) · Σ_{i=1}^{M_c} Σ_{p∈S_i} w(p)

where d_c is the weight threshold, M_c is the number of classes of the fully labeled dataset, S_i is the dataset in which category i is located, w(p) is the weight of training sample p in the training set, and N is the number of images contained in each category.
6. The method for recognizing clothing based on weakly labeled images according to claim 1, wherein obtaining effective information from the fully labeled sample data and the weakly labeled sample data at the same time further comprises: determining confusable class pairs in the training set of the initial model, obtaining their ambiguity through the initial model, and performing a second round of training of the initial model according to the ambiguity.
7. The method for recognizing clothing based on weakly labeled image as claimed in claim 6, wherein the ambiguity is obtained by the following formula:
f_ij = N_{i→j} / N_i

where N_i is the number of samples of class i, N_{i→j} is the number of samples of class i misjudged as class j by the initial model, and f_ij is the ambiguity between categories i and j.
8. A computer-readable storage medium having stored thereon a computer program, characterized in that: the program when executed by a processor implements the method of any one of claims 1 to 7.
9. A garment identification system based on weakly labeled images, comprising:
an acquisition unit for acquiring fully labeled sample data and weakly labeled sample data;
a memory for storing a computer program;
a processor for executing a memory-stored computer program for causing the weakly labeled image based garment identification system to perform the method of any one of claims 1 to 7.
CN201710719635.0A 2017-08-21 2017-08-21 Garment identification method and system based on weakly labeled image Active CN107506793B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710719635.0A CN107506793B (en) 2017-08-21 2017-08-21 Garment identification method and system based on weakly labeled image


Publications (2)

Publication Number Publication Date
CN107506793A CN107506793A (en) 2017-12-22
CN107506793B true CN107506793B (en) 2020-12-18

Family

ID=60691318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710719635.0A Active CN107506793B (en) 2017-08-21 2017-08-21 Garment identification method and system based on weakly labeled image

Country Status (1)

Country Link
CN (1) CN107506793B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10860888B2 (en) * 2018-01-05 2020-12-08 Whirlpool Corporation Detecting objects in images
CN108230526A (en) * 2018-04-17 2018-06-29 济南浪潮高新科技投资发展有限公司 A kind of intelligent entrance guard method based on deep learning
CN111263946A (en) * 2018-05-15 2020-06-09 合刃科技(武汉)有限公司 Object recognition method and computer-readable storage medium
CN109035558B (en) * 2018-06-12 2020-08-25 武汉市哈哈便利科技有限公司 Commodity recognition algorithm online learning system for unmanned sales counter
CN109034205B (en) * 2018-06-29 2021-02-02 西安交通大学 Image classification method based on direct-push type semi-supervised deep learning
CN109145828B (en) * 2018-08-24 2020-12-25 北京字节跳动网络技术有限公司 Method and apparatus for generating video category detection model
CN109558821B (en) * 2018-11-21 2021-10-22 哈尔滨工业大学(深圳) Method for calculating number of clothes of specific character in video
CN109977262B (en) * 2019-03-25 2021-11-16 北京旷视科技有限公司 Method and device for acquiring candidate segments from video and processing equipment
CN110110080A (en) * 2019-03-29 2019-08-09 平安科技(深圳)有限公司 Textual classification model training method, device, computer equipment and storage medium
CN110313894A (en) * 2019-04-15 2019-10-11 四川大学 Arrhythmia cordis sorting algorithm based on convolutional neural networks
CN110674688B (en) * 2019-08-19 2023-10-31 深圳力维智联技术有限公司 Face recognition model acquisition method, system and medium for video monitoring scene
CN111291802B (en) * 2020-01-21 2023-12-12 华为技术有限公司 Data labeling method and device
CN113033636B (en) * 2021-03-17 2022-11-29 济南国科医工科技发展有限公司 Automatic ovarian tumor identification system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN104573669A (en) * 2015-01-27 2015-04-29 中国科学院自动化研究所 Image object detection method
CN105868774A (en) * 2016-03-24 2016-08-17 西安电子科技大学 Selective search and convolutional neural network based vehicle logo recognition method
CN106951867A (en) * 2017-03-22 2017-07-14 成都擎天树科技有限公司 Face identification method, device, system and equipment based on convolutional neural networks

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10691744B2 (en) * 2014-06-26 2020-06-23 Amazon Technologies, Inc. Determining affiliated colors from keyword searches of color palettes


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant