CN110516718A - Zero sample learning method based on deep embedding space - Google Patents


Info

Publication number
CN110516718A
CN110516718A (application CN201910740748.8A)
Authority
CN
China
Prior art keywords
sample
label
depth
classification
embedded space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910740748.8A
Other languages
Chinese (zh)
Other versions
CN110516718B (en)
Inventor
魏巍
张磊
聂江涛
王聪
张艳宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN201910740748.8A
Publication of CN110516718A
Application granted
Publication of CN110516718B
Active legal status
Anticipated expiration legal status

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a zero-shot learning method based on a deep embedding space, which addresses the poor generalization ability of existing zero-shot learning methods. The technical solution is to learn an effective deep intermediate embedding space using deep learning: a trained deep network maps both the semantic class descriptions of seen and unseen classes and the image features into this embedding space, and a classifier then assigns the features in the embedding space to their predicted labels. During prediction, a self-learning algorithm for the mapping network is applied, which effectively improves generalization, i.e., the classification accuracy on samples from unseen classes.

Description

Zero sample learning method based on deep embedding space
Technical field
The present invention relates to zero-shot learning methods, and in particular to a zero-shot learning method based on a deep embedding space.
Background technique
In recent years, deep neural networks have achieved remarkable results in numerous computer vision applications such as object recognition and detection. The key to this success is supervised learning from large numbers of labeled training samples, which fully exploits the strong nonlinear fitting ability of deep neural networks to mine the complex structural relations between a task's inputs and outputs. In practice, however, hand-labeling training samples is costly, especially for relatively complex tasks such as semantic segmentation, so it is often difficult to obtain sufficient labeled training data. In many applications (for example, for newly emerging objects or unknown environments), no labeled training samples can be obtained at all, which severely limits the generalization ability of deep neural networks.
The zero-shot learning method proposed in the document "Y. Annadani and S. Biswas. Preserving semantic relations for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7603-7612, 2018." can effectively alleviate this problem. Unlike traditional supervised learning, zero-shot learning associates each class with a specific semantic description; the goal of learning is to exploit the relation between the samples of a class and its semantic description so that samples of unseen classes (classes with no labeled training samples) can be accurately classified and recognized. The key to zero-shot learning is to learn an effective embedding space that accurately captures the structural relations between classes and their semantic descriptions, and that generalizes to unseen classes and their associated descriptions. Existing zero-shot learning models, however, fail to fully account for the structural characteristics of the embedding space, and are therefore commonly affected by the hubness problem and by bias towards seen classes, which limits their generalization ability.
Summary of the invention
To overcome the poor generalization ability of existing zero-shot learning methods, the present invention provides a zero-shot learning method based on a deep embedding space. The method learns an effective deep intermediate embedding space using deep learning: a trained deep network maps both the semantic class descriptions of seen and unseen classes and the image features into this embedding space, and a classifier then assigns the features in the embedding space to their predicted labels. During prediction, a self-learning algorithm for the mapping network is applied, which effectively improves generalization, i.e., the classification accuracy on samples from unseen classes.
The technical solution adopted by the present invention to solve the technical problem is a zero-shot learning method based on a deep embedding space, characterized by comprising the following steps:
Step 1: The training set with N samples is written as $D = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^b$ denotes the i-th image feature of length b and its class label satisfies $y_i \in \mathcal{Y}^s$, with $\mathcal{Y}^s$ the set of all seen (known) class labels. At test time, zero-shot learning aims to predict, for a new sample $x_j$, its class label $\hat{y}_j \in \mathcal{Y}^u$, where $\mathcal{Y}^u$ denotes the set of all unseen class labels and $\mathcal{Y}^s \cap \mathcal{Y}^u = \emptyset$. Each seen class $y \in \mathcal{Y}^s$ and each unseen class $y \in \mathcal{Y}^u$ has a corresponding semantic description $z^s$ or $z^u$.
Step 2: Establish a two-branch deep embedding network. One branch maps images: a pretrained deep convolutional network extracts the image feature $x_i$, which is then passed through a multi-layer perceptron $f_v(x_i; \theta_v)$ that learns the mapping of $x_i$ into a latent space. The other branch maps semantic classes: another multi-layer perceptron $f_s(z; \theta_s)$ maps the semantic description $z$ into the same latent embedding space. The loss function of the two-branch network is defined as

$$\min_{\theta_v, \theta_s, W} \frac{1}{N} \sum_{i=1}^{N} \left[ \mathcal{L}\big(W^{\top} f_v(x_i; \theta_v),\, y_i\big) + \lambda \left\| f_v(x_i; \theta_v) - f_s(z_{y_i}; \theta_s) \right\|_2^2 \right] + \eta\, \Omega(\theta_v, \theta_s, W) \tag{1}$$

where $\theta_v$ and $\theta_s$ denote the parameters of the multi-layer perceptrons in the two branches, and $W$ denotes the parameters of the linear classifier to be learned. $\mathcal{L}(\cdot, \cdot)$ denotes the classification loss, for which the cross-entropy function is chosen here. To avoid overfitting, the $l_2$ norm $\Omega(\cdot)$ is used to constrain all parameters, weighted by $\eta$. The loss function is optimized by backpropagation to obtain the network parameters $\theta_v$ and $\theta_s$. Given $\theta_v$ and $\theta_s$, the predicted label of a test sample $x_j$ is expressed as

$$\hat{y}_j = \arg\min_{y \in \mathcal{Y}^u} \left\| f_v(x_j; \theta_v) - f_s(z_y; \theta_s) \right\|_2 \tag{2}$$

where $z_y$ denotes the semantic description of label y.
Step 3: Given the test samples, first predict pseudo labels for the test set using the embedding space learned in Step 2. Then, according to the generated pseudo labels and the image-semantic difference $\| f_v(x_j; \theta_v) - f_s(z_y; \theta_s) \|_2$, select for each pseudo label the M closest test samples, where M = 40, and merge the selected samples, together with their assigned pseudo labels, into the training set as new training data, yielding the expanded training set $\tilde{D}$.
Step 4: After the trained mapping network and classifier are obtained, an adaptive adjustment model is used to prevent the learned deep embedding space from biasing the predicted labels of unseen samples towards seen class labels. The new optimization objective is expressed as

$$\min_{\theta_v, \theta_s} \frac{1}{MC} \sum_{i=1}^{MC} \left[ \mathcal{L}\big(W^{\top} f_v(\tilde{x}_i; \theta_v),\, \tilde{y}_i\big) + \lambda \left\| f_v(\tilde{x}_i; \theta_v) - f_s(\tilde{z}_i; \theta_s) \right\|_2^2 \right] + \eta\, \Omega(\theta_v, \theta_s) \tag{3}$$

where C denotes the number of unseen classes, $\tilde{x}_i$ denotes the i-th selected test sample, and $\tilde{y}_i$ and $\tilde{z}_i$ denote the corresponding pseudo label and its class semantic description in the expanded training set $\tilde{D}$.
The beneficial effects of the present invention are as follows. The method learns an effective deep intermediate embedding space using deep learning: a trained deep network maps both the semantic class descriptions of seen and unseen classes and the image features into this embedding space, and a classifier then assigns the features in the embedding space to their predicted labels. During prediction, a self-learning algorithm for the mapping network is applied, which effectively improves generalization and thus the classification accuracy on samples from unseen classes.
The present invention is described in detail below with reference to a specific embodiment.
Specific embodiment
The zero-shot learning method based on a deep embedding space of the present invention proceeds as follows.
1. Data preprocessing.
The training set with N samples is written as $D = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^b$ denotes the i-th image feature vector of length b and $y_i \in \mathcal{Y}^s$ its class label, with $\mathcal{Y}^s$ the set of all seen class labels. At test time, zero-shot learning aims to predict, for a new sample $x_j$, its class label $\hat{y}_j \in \mathcal{Y}^u$, where $\mathcal{Y}^u$ denotes the set of all unseen class labels and $\mathcal{Y}^s \cap \mathcal{Y}^u = \emptyset$. Each seen class and each unseen class has a corresponding semantic feature vector z describing the characteristics of that class; $z^s$ denotes a semantic feature vector of a training (seen) class and $z^u$ one of a test (unseen) class. Taking the AwA dataset as an example, it contains 30,475 pictures of 50 different animal species; the semantic feature vector $z \in \mathbb{R}^{85}$ of each class records that class's values on 85 different attributes. Each image sample $x_i$ in the training set of this dataset is the feature vector obtained by passing the corresponding picture through ResNet101, of length b = 2048; the samples $x_j$ in the test set have the same shape.
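As a concrete illustration of this data layout, the following numpy sketch builds a synthetic stand-in for the AwA-style training set. The feature lengths (2048 for ResNet101 image features, 85 for attribute vectors) come from the text; the class and sample counts are toy values chosen for illustration, not the real dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

b = 2048          # ResNet101 image-feature length, as in the text
d_s = 85          # AwA attribute (semantic) vector length, as in the text
seen_classes = [0, 1, 2]      # toy stand-ins for the seen label set Y^s
unseen_classes = [3, 4]       # toy stand-ins for the unseen label set Y^u

N = 30                                    # toy number of training samples
X_train = rng.standard_normal((N, b))     # image features x_i (seen classes only)
y_train = rng.choice(seen_classes, N)     # labels drawn from the seen label set
Z = rng.standard_normal((5, d_s))         # one semantic vector per class, seen + unseen

# Zero-shot constraint from the text: seen and unseen label sets are disjoint.
assert set(seen_classes).isdisjoint(unseen_classes)
print(X_train.shape, Z.shape)
```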
2. Deep embedding network training.
After data preprocessing, a deep network maps the image features and the class semantic features into the same latent deep embedding space, in which the embedded image features and class semantic features exhibit intra-class compactness and inter-class separability. This mapping is realized by a two-branch deep embedding network: one branch maps images and the other maps class semantic features, each mapping being learned by a multi-layer perceptron. The image mapping branch is written $f_v(x_i; \theta_v)$; it maps the image feature $x_i$ into the latent space, where $\theta_v$ are the parameters of the image mapping branch and $x_i$ is the i-th image feature vector. The multi-layer perceptron of this branch consists of one fully connected layer (FC) followed by one rectified linear unit (ReLU), with input and output channel sizes of 2048 and 1024 respectively.
The class semantic feature mapping branch is written $f_s(z; \theta_s)$; it maps the semantic description z into the same latent embedding space, where $\theta_s$ are the parameters of the semantic mapping branch and z is the class semantic feature vector corresponding to a training sample. The multi-layer perceptron of this branch consists of two fully connected layers in series, each followed by a rectified linear unit. The input channel size of the first fully connected layer equals the dimensionality of the semantic feature, which is 85 for the AwA dataset; the output channel size of the second fully connected layer is 1024, and its input channel size equals the output channel size of the first layer.
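The two mapping branches described above can be sketched as plain numpy forward passes. The layer sizes for the image branch (2048 → 1024) and the endpoints of the semantic branch (85 in, 1024 out) are taken from the text; the hidden width of the semantic branch (554 here, roughly the midpoint of 85 and 1024) and the random weight initialization are assumptions made for illustration, and training is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

# Image branch f_v: one FC layer (2048 -> 1024) followed by ReLU, as in the text.
W_v = rng.standard_normal((2048, 1024)) * 0.01

def f_v(x):
    return relu(x @ W_v)

# Semantic branch f_s: two FC layers, each followed by ReLU.  The text fixes the
# input (85) and final output (1024) sizes; the hidden width is an assumption.
hidden = 554
W_s1 = rng.standard_normal((85, hidden)) * 0.01
W_s2 = rng.standard_normal((hidden, 1024)) * 0.01

def f_s(z):
    return relu(relu(z @ W_s1) @ W_s2)

x = rng.standard_normal(2048)   # one image feature vector
z = rng.standard_normal(85)     # one class semantic vector
print(f_v(x).shape, f_s(z).shape)   # both land in the same 1024-d embedding space
```

Biases are omitted for brevity; a trained model would of course learn all of these weights via the loss below.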
The error function of the deep embedding network is defined as

$$\min_{\theta_v, \theta_s, W} \frac{1}{N} \sum_{i=1}^{N} \left[ \mathcal{L}\big(W^{\top} f_v(x_i; \theta_v),\, y_i\big) + \lambda \left\| f_v(x_i; \theta_v) - f_s(z_{y_i}; \theta_s) \right\|_2^2 \right] + \eta\, \Omega(\theta_v, \theta_s, W) \tag{1}$$

where W denotes the parameters of the linear classifier learned by the proposed network during training and $W^{\top}$ its transpose. $\mathcal{L}(\cdot,\cdot)$ denotes the classification loss function, which measures the difference between the linear classifier's results on the training samples and the correct results; the cross-entropy function is selected as the method for computing the classification loss. λ is a balance coefficient with values in the interval (0.1, 0.3). To avoid overfitting, the $l_2$ norm is used to constrain all learnable parameters, weighted by η. Formula (1) is optimized by standard backpropagation to obtain the network parameters $\theta_v$ and $\theta_s$. During training, the learning rate is set to 1e-4 and the number of epochs to T = 50 in the present invention.
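One evaluation of the training objective in formula (1) can be sketched as follows. The cross-entropy term, the λ-weighted image/semantic alignment term, and the η-weighted l2 penalty mirror the description above; the embeddings and classifier weights are random placeholders rather than trained values, and only the classifier is included in the penalty for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

n_classes, d = 3, 1024
lam, eta = 0.2, 1e-4          # lambda in (0.1, 0.3) per the text; eta is illustrative

phi_x = rng.standard_normal(d)          # placeholder for f_v(x_i)
phi_z = rng.standard_normal(d)          # placeholder for f_s(z_{y_i})
W = rng.standard_normal((d, n_classes)) * 0.01
y = 1                                    # ground-truth label index

# Cross-entropy of the linear classifier on the embedded image (stable log-softmax).
logits = phi_x @ W
m = logits.max()
log_probs = logits - m - np.log(np.sum(np.exp(logits - m)))
ce = -log_probs[y]

align = np.sum((phi_x - phi_z) ** 2)     # squared image/semantic embedding distance
reg = np.sum(W ** 2)                     # l2 penalty (classifier weights shown)

loss = ce + lam * align + eta * reg
print(float(loss))   # a positive scalar
```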
After the network is trained, test samples are classified by

$$\hat{y}_j = \arg\min_{y \in \mathcal{Y}^u} \left\| f_v(x_j; \theta_v) - f_s(z_y; \theta_s) \right\|_2 \tag{2}$$

and the resulting $\hat{y}_j$ is the predicted label of the test sample $x_j$.
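The prediction rule of formula (2) reduces to a nearest-neighbour search over the unseen classes' semantic embeddings, which the following sketch illustrates with synthetic embeddings (a real run would use the trained f_v and f_s outputs):

```python
import numpy as np

rng = np.random.default_rng(2)

d = 1024
unseen_labels = [3, 4]
# Synthetic class embeddings standing in for f_s(z_y) of each unseen class.
class_emb = {y: rng.standard_normal(d) for y in unseen_labels}

# A test embedding constructed to lie near class 4's semantic embedding.
test_emb = class_emb[4] + 0.01 * rng.standard_normal(d)

def predict(e):
    # Pick the unseen label whose semantic embedding is closest (formula 2).
    dists = {y: np.linalg.norm(e - class_emb[y]) for y in unseen_labels}
    return min(dists, key=dists.get)

print(predict(test_emb))   # → 4
```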
3. Dataset expansion.
The distances between the image feature vectors and the class semantic feature vectors in the learned deep embedding space are computed as in formula (2), and the M image feature vectors closest to each class semantic feature vector are assigned to that class with a pseudo label, expanding the training dataset. The expanded training set is represented by

$$\tilde{D} = D \cup \{(\tilde{x}_i, \tilde{y}_i)\}_{i=1}^{M \times C} \tag{3}$$

where M denotes the number of test samples assigned a pseudo label per class and C the number of unseen classes; in the present invention M = 40, while C varies with the dataset and is 10 for the AwA dataset. $\tilde{x}_i$ denotes the i-th sample assigned a pseudo label; the pseudo label $\tilde{y}_i$ corresponding to each of these samples is the prediction obtained for that test sample via formula (2), and $\tilde{z}_i$ denotes the semantic description of the pseudo label's class.
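The expansion step can be sketched as follows: for each unseen class, take the M test embeddings closest to that class's semantic embedding and record them under the corresponding pseudo label. The dimensions and M are shrunk from the text's values (1024-d embeddings, M = 40) purely so the toy output stays small:

```python
import numpy as np

rng = np.random.default_rng(3)

d, M = 16, 2                   # toy embedding size and per-class selection count
unseen_labels = [3, 4]
class_emb = {y: rng.standard_normal(d) for y in unseen_labels}   # f_s(z_y) stand-ins
test_embs = rng.standard_normal((10, d))                         # f_v(x_j) stand-ins

expansion = []
for y, c in class_emb.items():
    dists = np.linalg.norm(test_embs - c, axis=1)   # distance of every test sample to class y
    for i in np.argsort(dists)[:M]:                 # M closest test samples
        expansion.append((int(i), y))               # (test-sample index, pseudo label)

print(len(expansion))   # → 4  (M samples per unseen class)
```

Note that a test sample can in principle be selected by more than one class; the text does not specify a tie-breaking rule, so none is imposed here.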
4. Adaptive learning of the mapping network.
In most zero-shot learning methods, only samples of seen classes are available as training samples for learning the embedding space; as a result, the learned embedding space biases the predicted labels of unseen samples towards seen class labels. To better resolve this problem, an adaptive adjustment model of the deep embedding space is used, which incorporates the unlabeled test data into model training to improve classification accuracy.
After the expanded training set $\tilde{D}$ is obtained, the objective function of the adaptive adjustment model is expressed as

$$\min_{\theta_v, \theta_s} \frac{1}{MC} \sum_{i=1}^{MC} \left[ \mathcal{L}\big(W^{\top} f_v(\tilde{x}_i; \theta_v),\, \tilde{y}_i\big) + \lambda \left\| f_v(\tilde{x}_i; \theta_v) - f_s(\tilde{z}_i; \theta_s) \right\|_2^2 \right] + \eta\, \Omega(\theta_v, \theta_s) \tag{4}$$

where C denotes the number of unseen classes. Step 4 uses the data in the expanded training set $\tilde{D}$ to adaptively adjust the learned mapping networks. After each adjustment round, the dataset $\tilde{D}$ is updated according to Step 3; the total number of update rounds is R = 10, and the learning rate of the mapping networks during adaptation is 1e-4. Finally, the labels of the test samples $x_j$ are predicted by applying the prediction rule of formula (2) with the updated parameters $\theta_v$ and $\theta_s$.
The method of the present invention was compared on the AwA dataset with the PSR method proposed in the paper "Y. Annadani and S. Biswas. Preserving semantic relations for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7603-7612, 2018." and with the RN method of the background art. The experimental results show that the proposed method performs better: under the traditional zero-shot learning protocol, its overall classification accuracy on unseen samples on AwA is 2.7% higher than that of the best existing method, PSR; under the generalized zero-shot learning protocol, its classification accuracy on unseen samples on the same AwA dataset is likewise 5.2% higher than that of the background-art method RN.

Claims (1)

1. A zero-shot learning method based on a deep embedding space, characterized by comprising the following steps:
Step 1: writing the training set with N samples as $D = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^b$ denotes the i-th image feature of length b and its class label satisfies $y_i \in \mathcal{Y}^s$, with $\mathcal{Y}^s$ the set of all seen class labels; at test time, zero-shot learning aims to predict, for a new sample $x_j$, its class label $\hat{y}_j \in \mathcal{Y}^u$, where $\mathcal{Y}^u$ denotes the set of all unseen class labels and $\mathcal{Y}^s \cap \mathcal{Y}^u = \emptyset$; each seen class $y \in \mathcal{Y}^s$ and each unseen class $y \in \mathcal{Y}^u$ has a corresponding semantic description $z^s$ or $z^u$;
Step 2: establishing a two-branch deep embedding network, wherein one branch maps images: a pretrained deep convolutional network extracts the image feature $x_i$, which is passed through a multi-layer perceptron $f_v(x_i; \theta_v)$ that learns the mapping of $x_i$ into a latent space; the other branch maps semantic classes: another multi-layer perceptron $f_s(z; \theta_s)$ maps the semantic description $z$ into the same latent embedding space; the loss function of the two-branch network is defined as

$$\min_{\theta_v, \theta_s, W} \frac{1}{N} \sum_{i=1}^{N} \left[ \mathcal{L}\big(W^{\top} f_v(x_i; \theta_v),\, y_i\big) + \lambda \left\| f_v(x_i; \theta_v) - f_s(z_{y_i}; \theta_s) \right\|_2^2 \right] + \eta\, \Omega(\theta_v, \theta_s, W)$$

where $\theta_v$ and $\theta_s$ denote the parameters of the multi-layer perceptrons in the two branches, $W$ denotes the parameters of the linear classifier to be learned, and $\mathcal{L}(\cdot,\cdot)$ denotes the classification loss, for which the cross-entropy function is chosen; to avoid overfitting, the $l_2$ norm is used to constrain all parameters, weighted by $\eta$; the loss function is optimized by backpropagation to obtain the network parameters $\theta_v$ and $\theta_s$; given $\theta_v$ and $\theta_s$, the predicted label of a test sample $x_j$ is expressed as

$$\hat{y}_j = \arg\min_{y \in \mathcal{Y}^u} \left\| f_v(x_j; \theta_v) - f_s(z_y; \theta_s) \right\|_2$$

where $z_y$ denotes the semantic description of label y;
Step 3: given the test samples, first predicting pseudo labels for the test set using the embedding space learned in Step 2; then, according to the generated pseudo labels and the image-semantic difference $\| f_v(x_j; \theta_v) - f_s(z_y; \theta_s) \|_2$, selecting for each pseudo label the M closest test samples, where M = 40, and merging the selected samples together with their assigned pseudo labels into the training set as new training data, yielding the expanded training set $\tilde{D}$;
Step 4: after the trained mapping network and classifier are obtained, using an adaptive adjustment model to prevent the learned deep embedding space from biasing the predicted labels of unseen samples towards seen class labels; the new optimization objective is expressed as

$$\min_{\theta_v, \theta_s} \frac{1}{MC} \sum_{i=1}^{MC} \left[ \mathcal{L}\big(W^{\top} f_v(\tilde{x}_i; \theta_v),\, \tilde{y}_i\big) + \lambda \left\| f_v(\tilde{x}_i; \theta_v) - f_s(\tilde{z}_i; \theta_s) \right\|_2^2 \right] + \eta\, \Omega(\theta_v, \theta_s)$$

where C denotes the number of unseen classes, $\tilde{x}_i$ denotes the i-th selected test sample, and $\tilde{y}_i$ and $\tilde{z}_i$ denote the corresponding pseudo label and its class semantic description in the expanded training set $\tilde{D}$.
CN201910740748.8A 2019-08-12 2019-08-12 Zero sample learning method based on deep embedding space Active CN110516718B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910740748.8A CN110516718B (en) 2019-08-12 2019-08-12 Zero sample learning method based on deep embedding space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910740748.8A CN110516718B (en) 2019-08-12 2019-08-12 Zero sample learning method based on deep embedding space

Publications (2)

Publication Number Publication Date
CN110516718A true CN110516718A (en) 2019-11-29
CN110516718B CN110516718B (en) 2023-03-24

Family

ID=68625047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910740748.8A Active CN110516718B (en) 2019-08-12 2019-08-12 Zero sample learning method based on deep embedding space

Country Status (1)

Country Link
CN (1) CN110516718B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126576A (en) * 2020-03-26 2020-05-08 北京精诊医疗科技有限公司 Novel training strategy for deep learning
CN111461025A (en) * 2020-04-02 2020-07-28 同济大学 Signal identification method for self-evolving zero-sample learning
CN111553378A (en) * 2020-03-16 2020-08-18 北京达佳互联信息技术有限公司 Image classification model training method and device, electronic equipment and computer readable storage medium
CN111797910A (en) * 2020-06-22 2020-10-20 浙江大学 Multi-dimensional label prediction method based on average partial Hamming loss
CN112380374A (en) * 2020-10-23 2021-02-19 华南理工大学 Zero sample image classification method based on semantic expansion
CN112651403A (en) * 2020-12-02 2021-04-13 浙江大学 Zero-sample visual question-answering method based on semantic embedding
CN112686318A (en) * 2020-12-31 2021-04-20 广东石油化工学院 Zero sample learning mechanism based on spherical embedding, spherical alignment and spherical calibration
CN113283514A (en) * 2021-05-31 2021-08-20 高新兴科技集团股份有限公司 Unknown class classification method, device and medium based on deep learning
CN114092747A (en) * 2021-11-30 2022-02-25 南通大学 Small sample image classification method based on depth element metric model mutual learning
CN114241260A (en) * 2021-12-14 2022-03-25 四川大学 Open set target detection and identification method based on deep neural network
CN114861670A (en) * 2022-07-07 2022-08-05 浙江一山智慧医疗研究有限公司 Entity identification method, device and application for learning unknown label based on known label
CN114998613A (en) * 2022-06-24 2022-09-02 安徽工业大学 Multi-label zero sample learning method based on deep mutual learning
CN116433977A (en) * 2023-04-18 2023-07-14 国网智能电网研究院有限公司 Unknown class image classification method, unknown class image classification device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018188240A1 (en) * 2017-04-10 2018-10-18 北京大学深圳研究生院 Cross-media retrieval method based on deep semantic space
CN108846412A (en) * 2018-05-08 2018-11-20 复旦大学 A kind of method of extensive zero sample learning
CN108875818A (en) * 2018-06-06 2018-11-23 西安交通大学 Based on variation from code machine and confrontation network integration zero sample image classification method
WO2019136946A1 (en) * 2018-01-15 2019-07-18 中山大学 Deep learning-based weakly supervised salient object detection method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018188240A1 (en) * 2017-04-10 2018-10-18 北京大学深圳研究生院 Cross-media retrieval method based on deep semantic space
WO2019136946A1 (en) * 2018-01-15 2019-07-18 中山大学 Deep learning-based weakly supervised salient object detection method and system
CN108846412A (en) * 2018-05-08 2018-11-20 复旦大学 A kind of method of extensive zero sample learning
CN108875818A (en) * 2018-06-06 2018-11-23 西安交通大学 Based on variation from code machine and confrontation network integration zero sample image classification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIN Muxuan et al.: "End-to-end deep zero-shot learning based on common-space embedding", Computer Technology and Development *
CHEN Xiangfeng et al.: "Zero-shot classification algorithm with semantic autoencoder improved by metric learning", Journal of Beijing University of Posts and Telecommunications *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553378A (en) * 2020-03-16 2020-08-18 北京达佳互联信息技术有限公司 Image classification model training method and device, electronic equipment and computer readable storage medium
CN111553378B (en) * 2020-03-16 2024-02-20 北京达佳互联信息技术有限公司 Image classification model training method, device, electronic equipment and computer readable storage medium
CN111126576B (en) * 2020-03-26 2020-09-01 北京精诊医疗科技有限公司 Deep learning training method
CN111126576A (en) * 2020-03-26 2020-05-08 北京精诊医疗科技有限公司 Novel training strategy for deep learning
CN111461025B (en) * 2020-04-02 2022-07-05 同济大学 Signal identification method for self-evolving zero-sample learning
CN111461025A (en) * 2020-04-02 2020-07-28 同济大学 Signal identification method for self-evolving zero-sample learning
CN111797910A (en) * 2020-06-22 2020-10-20 浙江大学 Multi-dimensional label prediction method based on average partial Hamming loss
CN111797910B (en) * 2020-06-22 2023-04-07 浙江大学 Multi-dimensional label prediction method based on average partial Hamming loss
CN112380374B (en) * 2020-10-23 2022-11-18 华南理工大学 Zero sample image classification method based on semantic expansion
CN112380374A (en) * 2020-10-23 2021-02-19 华南理工大学 Zero sample image classification method based on semantic expansion
CN112651403B (en) * 2020-12-02 2022-09-06 浙江大学 Zero-sample visual question-answering method based on semantic embedding
CN112651403A (en) * 2020-12-02 2021-04-13 浙江大学 Zero-sample visual question-answering method based on semantic embedding
CN112686318B (en) * 2020-12-31 2023-08-29 广东石油化工学院 Zero sample learning mechanism based on sphere embedding, sphere alignment and sphere calibration
CN112686318A (en) * 2020-12-31 2021-04-20 广东石油化工学院 Zero sample learning mechanism based on spherical embedding, spherical alignment and spherical calibration
CN113283514A (en) * 2021-05-31 2021-08-20 高新兴科技集团股份有限公司 Unknown class classification method, device and medium based on deep learning
CN113283514B (en) * 2021-05-31 2024-05-21 高新兴科技集团股份有限公司 Unknown class classification method, device and medium based on deep learning
CN114092747A (en) * 2021-11-30 2022-02-25 南通大学 Small sample image classification method based on depth element metric model mutual learning
CN114241260A (en) * 2021-12-14 2022-03-25 四川大学 Open set target detection and identification method based on deep neural network
CN114998613A (en) * 2022-06-24 2022-09-02 安徽工业大学 Multi-label zero sample learning method based on deep mutual learning
CN114998613B (en) * 2022-06-24 2024-04-26 安徽工业大学 Multi-mark zero sample learning method based on deep mutual learning
CN114861670A (en) * 2022-07-07 2022-08-05 浙江一山智慧医疗研究有限公司 Entity identification method, device and application for learning unknown label based on known label
CN116433977A (en) * 2023-04-18 2023-07-14 国网智能电网研究院有限公司 Unknown class image classification method, unknown class image classification device, computer equipment and storage medium
CN116433977B (en) * 2023-04-18 2023-12-05 国网智能电网研究院有限公司 Unknown class image classification method, unknown class image classification device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN110516718B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN110516718A (en) The zero sample learning method based on depth embedded space
CN114241282B (en) Knowledge distillation-based edge equipment scene recognition method and device
CN108416394B (en) Multi-target detection model building method based on convolutional neural networks
Liu et al. Ssd: Single shot multibox detector
CN106874840B (en) Vehicle information recognition method and device
CN107945204B (en) Pixel-level image matting method based on generation countermeasure network
CN111507469B (en) Method and device for optimizing super parameters of automatic labeling device
CN107885853A (en) A kind of combined type file classification method based on deep learning
CN110147777B (en) Insulator category detection method based on deep migration learning
JP6932395B2 (en) A method for automatically evaluating the labeling reliability of a training image for use in a deep learning network for analyzing an image, and a reliability evaluation device using this method.
JP2020123330A (en) Method for acquiring sample image for label acceptance inspection from among auto-labeled images utilized for neural network learning, and sample image acquisition device utilizing the same
EP3349152A1 (en) Classifying data
JP2020038667A (en) Method and device for generating image data set for cnn learning for detection of obstacle in autonomous driving circumstances and test method and test device using the same
CN109190646B (en) A kind of data predication method neural network based, device and nerve network system
CN114419323B (en) Cross-modal learning and domain self-adaptive RGBD image semantic segmentation method
CN111144364A (en) Twin network target tracking method based on channel attention updating mechanism
CN111079847A (en) Remote sensing image automatic labeling method based on deep learning
CN110309875A (en) A kind of zero sample object classification method based on the synthesis of pseudo- sample characteristics
CN114724021B (en) Data identification method and device, storage medium and electronic device
CN115564801A (en) Attention-based single target tracking method
CN109697236A (en) A kind of multi-medium data match information processing method
CN116011507A (en) Rare fault diagnosis method for fusion element learning and graph neural network
CN110674845B (en) Dish identification method combining multi-receptive-field attention and characteristic recalibration
CN117516937A (en) Rolling bearing unknown fault detection method based on multi-mode feature fusion enhancement
CN115294176A (en) Double-light multi-model long-time target tracking method and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant