CN112465016A - Partial multi-label learning method based on best-worst distance - Google Patents

Partial multi-label learning method based on best-worst distance

Info

Publication number
CN112465016A
CN112465016A (application CN202011346614.7A)
Authority
CN
China
Prior art keywords
confidence
matrix
mark
similarity
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011346614.7A
Other languages
Chinese (zh)
Inventor
Wang Yangyang (王洋洋)
Yin Jun (殷俊)
Current Assignee
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date
Filing date
Publication date
Application filed by Shanghai Maritime University
Priority to CN202011346614.7A
Publication of CN112465016A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods


Abstract

The invention provides a partial multi-label learning method based on best-worst distance. Specifically, the data are assumed to be smooth: examples whose feature-space representations are similar should also have similar true labels. First, a similarity matrix based on the similarity of the example feature space is defined to measure how alike examples are; a confidence matrix is initialized from the information in the candidate label sets and continuously updated by iterative propagation. A credible label set is then extracted with the TOPSIS ranking method (technique for order preference by similarity to an ideal solution) according to the best-worst fit, removing the influence of the noise labels in the candidate label set and thereby disambiguating it. Finally, a mature multi-label learning algorithm is trained on, and predicts from, the disambiguated partial multi-label data. This addresses the problems that the training stage is easily influenced by noise labels and that multi-label learning methods cannot be applied directly to partial multi-label learning.

Description

Partial multi-label learning method based on best-worst distance
Technical Field
The invention relates to a disambiguation method for partial multi-label learning, in particular to the situation where each example corresponds to a candidate label set that contains its real labels as well as noise labels.
Background
Traditional strongly supervised learning assumes that the samples in the training set are sufficiently labeled and that the labeling of each object is single and unambiguous. However, with the development of the internet, the amount of collected data keeps growing, and the task of classifying and organizing it becomes ever more important. In modern classification problems, traditional strong supervision information cannot meet the demands of classification: one object is often related not to a single class but to several, which gave rise to Multi-label Learning (MLL). Multi-label learning, however, usually requires that all relevant labels be annotated accurately for each training example. In practice, owing to limited data resources, real labels are often hard to obtain, accurate labeling is costly, and clearly labeled objects are difficult to acquire, which leads to more and more weakly supervised learning scenarios. As described in "A brief introduction to weakly supervised learning" by Zhi-Hua Zhou, current weakly supervised learning is roughly divided into three main categories: incomplete supervision, inexact supervision, and inaccurate supervision. Partial Multi-label Learning (PML) belongs to the inaccurate-supervision class of weakly supervised learning. PML is a recently proposed framework applied to scenes in which accurate supervision information is difficult to obtain from the collected data, such as crowdsourced image tagging, music emotion recognition, and text classification. According to incomplete statistics, over the last three years (2018-2020) more than 10 papers with the keyword "Partial Multi-label" in the title have appeared in first-class machine learning conferences such as KDD, AAAI, IJCAI, and ICDM.
PML is closely linked to the currently popular MLL and to the Partial Label Learning (PLL) framework. MLL handles the case where an instance is simultaneously associated with multiple labels (classes) with correct semantics; the goal of the learning system is to generalize a prediction model from the training set, the resulting model predicting the relevant label set for unseen instances. PLL is also a weakly supervised learning framework; it handles the case where an example is associated with a set of candidate labels among which only one is the true label of the example. Under the PML framework, each example is assigned a rough candidate label set with the following characteristics: a) at least one label in the candidate label set is a true (relevant) label of the example, and the remaining labels are noise (false) labels; b) labels not in the candidate label set are all irrelevant labels; c) the number of true labels in the candidate label set is unknown.
From the above, PML can be seen as a combination of MLL and PLL: when all the labels in the candidate label set of every example are regarded as true labels, PML degenerates to MLL; if exactly one label in each candidate label set is a true label and the others are noise, PML degenerates to PLL.
Problems in the prior art
Although acquiring partial multi-label data reduces the cost of labeling in PML learning, the candidate label set contains not only the real labels but also some noise labels. If an MLL method is used to handle the PML problem directly, all labels in the candidate label set are regarded as true labels, so the noise labels in the candidate label set will affect the training stage. Since the PML task requires multiple labels per training example rather than a single one, and the number of true labels in the candidate label set is unknown, PLL approaches cannot be used directly on the PML problem either. How to identify the real labels in the candidate label set and reduce the influence of noise-label pollution has therefore become a problem to be solved.
Advantages of the invention over the prior art
In current partial multi-label learning, similarity is computed by distance, which reflects the similarity of examples mainly through differences in their feature values but ignores the similarity in the directions of the two examples' feature vectors. The invention introduces, for the first time in this setting, a modified (adjusted) cosine similarity to estimate the degree of similarity between examples. The method inherits the advantage of the original cosine similarity of measuring similarity through differences in feature direction; at the same time, the insensitivity of cosine similarity to feature values is remedied by centering the example's feature values, i.e. subtracting the mean from each feature value of the example.
The invention sets a confidence level for each label of each training example in the training set, i.e. the likelihood that the label is a true label of that example. Although the concept of confidence is not original to the invention, existing confidence-based methods do not further analyze the resulting confidence matrix well. The invention applies confidence to PML learning and further analyzes the information in the confidence matrix with the TOPSIS method. An optimal confidence vector and a worst confidence vector are first constructed from the confidence matrix. A fitness function is then defined, from which the fitness of each label of an example is calculated, i.e. how close the label is to the best label and how far it is from the worst label. Finally, a credible label set is extracted from the candidate label set using the fitness function, denoising the candidate label set.
Disclosure of Invention
Since, as mentioned in the background, the training phase is susceptible to noise labels, MLL methods cannot be applied directly to PML learning. The invention assumes that the data are smooth, i.e. examples with similar feature spaces should also have similar true labels. A similarity matrix based on the similarity of the example feature space is defined to measure how alike examples are; traditional similarity measures such as the Euclidean distance and Cosine Similarity are insufficient here. The Euclidean distance mainly reflects the similarity of examples through differences in their feature values but does not consider the similarity in the directions of the two examples' feature vectors, while cosine similarity measures similarity through differences in feature direction but is insensitive to the feature values. To address these problems, in partial multi-label learning the invention adopts the modified cosine of the angle between feature vectors as the similarity and constructs a similarity matrix from it. The confidence matrix is initialized from the information in the candidate label sets and continuously updated by iterative propagation. The credible label set is extracted according to the best-worst fit with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). The influence of the noise labels in the candidate label set is removed, solving the problem that MLL methods cannot be applied directly to PML.
Referring to fig. 1, the partial multi-label learning method that extracts credible labels based on best-worst distance comprises the following main steps:
Step S1: calculate the similarity of each example to its k nearest neighbors using the modified cosine similarity of formula (1), thereby initializing the weighted similarity graph G = (V, E, W).
Step S2: initialize a confidence matrix F^(0) from the example candidate label information, normalize the similarity matrix obtained in step S1 into H, and update iteratively with formula (5) to obtain the final confidence matrix F^*.
Step S3: analyze the final confidence matrix F^* obtained in step S2 further. Define formulas (7) and (8) according to the TOPSIS idea, extract the optimal confidence vector F^+ and the worst confidence vector F^- from the confidence matrix by formulas (7) and (8), compare each example with F^+ and F^-, and calculate the optimal distance matrix D^+ and the worst distance matrix D^- by formulas (9) and (10).
Step S4: using the optimal and worst distance matrices D^+ and D^- from step S3, define a fitness function as formula (11) according to the TOPSIS idea, calculate a fitness for each label of each example, and extract a credible label set for each example according to the fitness.
Step S5: build a new training set D^* from the credible label sets obtained in step S4 and predict labels for unseen examples with the ML-kNN algorithm.
Drawings
FIG. 1 is the main flow chart of the partial multi-label learning method of the present invention based on best-worst distance
FIG. 2 is the flow chart of iteratively propagating and updating the label confidence matrix in step S2
Detailed Description
The main task of the method is as follows: based on a graph-based iterative label propagation algorithm, determine a confidence for each label to obtain a confidence matrix; define a fitness function from the information in the confidence matrix; calculate the fitness of each candidate label of every example; and use the fitness to extract a credible label set from the candidate label set, eliminating the influence of noise labels so that MLL methods can be applied to partial multi-label learning.
Step S1: calculate the similarity of each example to its k nearest neighbors using the modified cosine similarity of formula (1), thereby initializing the weighted similarity graph G = (V, E, W). The specific implementation steps are as follows:
Step S1.1: initialize a weighted directed graph from the given training set.
Given a PML training set D = {(x_i, Y_i) | 1 ≤ i ≤ m}, where x_i = (x_{i,1}, x_{i,2}, …, x_{i,d})^T ∈ X is an example described by a d-dimensional feature vector and Y_i ⊆ Y = {y_1, …, y_q} is the candidate label set of example x_i, instantiate a weighted directed graph G = (V, E, W) based on the kNN algorithm, where V = {x_i | 1 ≤ i ≤ m} corresponds to the examples in the training set, E = {(x_i, x_j) | x_j ∈ kNN(x_i), 1 ≤ i ≤ m} is the set of edges, kNN(x_i) denotes the k nearest neighbor examples of x_i, and W_{m×m} is the weight matrix with every element initialized to 0.
Step S1.2: assign values to the weight matrix W_{m×m}.
Considering only x_i and its neighbor examples x_j ∈ kNN(x_i), the modified cosine of the angle between them is taken as the similarity and assigned to the elements of the weight matrix, as in formula (1):

    W_{i,j} = Sim<x_i, x_j> if x_j ∈ kNN(x_i), and W_{i,j} = 0 otherwise.    (1)

As can be seen from formula (1), an example x_j ∉ kNN(x_i) is given similarity 0 with example x_i. Sim<x_i, x_j> in formula (1) is defined as formula (2):

    Sim<x_i, x_j> = Σ_{l=1}^d (x_{i,l} − x̄_i)(x_{j,l} − x̄_j) / ( √(Σ_{l=1}^d (x_{i,l} − x̄_i)²) · √(Σ_{l=1}^d (x_{j,l} − x̄_j)²) ),    (2)

where x̄_i denotes the mean of the d feature values of x_i.
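To make steps S1.1 and S1.2 concrete, the following NumPy sketch builds the weight matrix with the modified cosine similarity of formulas (1) and (2). It is an illustrative implementation only: the Euclidean kNN search and all function names are assumptions of this sketch, not prescribed by the patent.

```python
import numpy as np

def adjusted_cosine_similarity(xi: np.ndarray, xj: np.ndarray) -> float:
    """Formula (2): cosine similarity after centering each example's features."""
    ci, cj = xi - xi.mean(), xj - xj.mean()
    denom = np.linalg.norm(ci) * np.linalg.norm(cj)
    return float(ci @ cj / denom) if denom > 0 else 0.0

def build_weight_matrix(X: np.ndarray, k: int) -> np.ndarray:
    """Formula (1): W[i, j] = Sim<x_i, x_j> if x_j is among the k nearest
    neighbors of x_i, else 0. Neighbors are found by Euclidean distance,
    as in a standard kNN graph (an assumption of this sketch)."""
    m = X.shape[0]
    W = np.zeros((m, m))
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    for i in range(m):
        order = np.argsort(dist[i])
        neighbors = order[order != i][:k]  # exclude the example itself
        for j in neighbors:
            W[i, j] = adjusted_cosine_similarity(X[i], X[j])
    return W
```

Note that W is in general asymmetric (the graph is directed), since x_j may be a neighbor of x_i without the converse holding.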
step S2: initializing a confidence matrix F from example candidate token information(0)Normalizing the similarity matrix obtained in the step S1 to obtain H, and performing iterative update by using an iterative update formula (5) to obtain a final confidence matrix F*. Referring to fig. 2, the step S2 is implemented as follows:
step S2.1: for similarity matrix Wm×mAnd (6) normalizing.
In order to facilitate the subsequent iterative label propagation process, the similarity matrix W is firstly used before iterationm×mNormalized by columns to a matrix H, each element of which is defined by the following formula (3):
Figure BDA0002800013970000062
step S2.2: a confidence matrix is initialized.
Defining confidence matrix F ═ Fi,c]m×qEach term f thereofi,c≧ 0 represents the label ycAs example xiThe confidence of the true label. Initializing a confidence matrix before iterative propagation, and recording the initial confidence matrix as; f(0). In order to make the initial mark confidence distributed on the candidate mark set, making the confidence of each mark in the candidate mark setIs measured as
Figure BDA0002800013970000063
The confidence of the labels out of the candidate label set is set as 0, and each item is specifically defined as the following formula (4):
Figure BDA0002800013970000064
step S2.3: and (5) iteratively propagating and updating the confidence coefficient matrix until iteration is terminated.
Using the current confidence matrix F(t)And an initial confidence matrix F(0)Continuously iteratively updating through the following formula (5) to obtain the next confidence matrix
Figure BDA0002800013970000065
Figure BDA0002800013970000066
Information F of confidence matrix of current mark in each mark propagation process(t)And confidence information F of tag initialization(0)Matrix to updated confidence
Figure BDA0002800013970000067
Is influenced by alpha ∈ [0, 1 ]]Controlling, wherein if more initialization confidence information of the example is considered in each iteration propagation process, the smaller alpha is; if more label information of its neighboring examples is considered in each iteration, the larger the value of α is. The tokens of each instance influence neighboring instances by their similarity to neighbors, and each instance updates its token confidence level according to the token confidence levels of its k neighbor instances.
Step S2.4: and normalizing the new confidence matrix obtained in the step S2.3 after each iteration.
After each iteration propagation, obtaining a confidence coefficient matrix
Figure BDA0002800013970000071
Normalized to F(t+1)
Figure BDA0002800013970000072
Step S2.5: and judging whether an iteration termination condition is met, namely when a preset iteration number or no obvious difference exists between two continuous marking confidence matrixes, finishing the iterative marking propagation process.
Step S3: for the final confidence matrix F obtained in step S2*Further analyzing, defining two formulas (7) and (8) according to TOPSIS thought, and extracting optimal confidence coefficient vector F and worst confidence coefficient vector F from confidence coefficient matrix according to the two formulas (7) and (8)+、F-Each example is given with F+、F-Comparing, calculating the optimal distance and the worst distance matrix by the two formulas (9) and (10)
Figure BDA0002800013970000073
The specific implementation steps are as follows:
step S3.1: according to a final confidence coefficient matrix F obtained after iteration is finished*Defining an optimal confidence vector F+And the worst confidence vector F-Specifically, the following two formulae (7) and (8) are defined:
Figure BDA0002800013970000074
Figure BDA0002800013970000075
wherein F+Component (b) of
Figure BDA00028000139700000710
Representing example xiOptimal confidence in row i of the confidence matrix, i.e., example xiOptimal confidence in the candidate label set, F-Component (b) of
Figure BDA00028000139700000711
Representing example xiWorst confidence in row i of confidence matrix, i.e., example xiThe worst confidence in the candidate label set.
Step S3.2: defining an optimal distance matrix based on the optimal and worst confidence vectors and the token confidence vectors of the instances themselves
Figure BDA0002800013970000076
And worst distance matrix
Figure BDA0002800013970000077
Its ith row and jth column element
Figure BDA0002800013970000078
Are respectively defined as follows:
Figure BDA0002800013970000079
Figure BDA0002800013970000081
elements of the optimal distance matrix
Figure BDA0002800013970000082
Representing example xiMark y ofcConfidence of (2) with example xiThe difference of the optimal confidence degrees of the marks in the candidate mark set is better when the value is smaller, and the mark y is showncThe smaller the difference from the ideal confidence, the greater the likelihood of becoming a true mark. Elements of the worst distance matrix in the same way
Figure BDA0002800013970000083
Representing example xiMark y ofcConfidence of (2) with example xiThe greater the difference in the worst confidence of the tokens in the candidate token set, the better the value, indicating token ycThe greater the difference from the least desirable confidence, the likelihood of being a true markThe greater the sex.
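Under the element-wise reading of formulas (7) to (10) above, the best-worst vectors and distance matrices can be sketched in a few lines (`F_star` is the final confidence matrix, `Y_cand` the 0/1 candidate mask; names and the masking via NaN are assumptions of the sketch):

```python
import numpy as np

def best_worst_distances(F_star, Y_cand):
    """Formulas (7)-(10): per-example best and worst confidence over the
    candidate label set, and element-wise distances to them."""
    masked = np.where(Y_cand > 0, F_star, np.nan)       # only candidate labels compete
    f_plus = np.nanmax(masked, axis=1, keepdims=True)   # formula (7)
    f_minus = np.nanmin(masked, axis=1, keepdims=True)  # formula (8)
    D_plus = np.abs(F_star - f_plus)                    # formula (9)
    D_minus = np.abs(F_star - f_minus)                  # formula (10)
    return f_plus, f_minus, D_plus, D_minus
```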
Step S4: using the optimal worst distance matrix from step S3.2
Figure BDA0002800013970000084
Defining a fitness function as a formula (11) according to the TOPSIS thought, calculating a fitness value for each mark of each example, and extracting a credible mark set for each example according to the fitness value, wherein the method comprises the following specific implementation steps:
step S4.1: example corresponding fit vectors are calculated.
Defining a fit degree function by using the optimal distance matrix and the worst distance matrix obtained in the step S3.2
Figure BDA0002800013970000085
Using this function as example x in the training setiIs marked with a label yc∈YiCalculating a fit value
Figure BDA0002800013970000086
Figure BDA0002800013970000087
Adhesion value
Figure BDA0002800013970000088
Has a value range of [0, 1 ]],
Figure BDA0002800013970000089
The larger the value the better, the closer the value to 1, illustrating example xiMark y ofcMore conforming example xiTrue mark of (2), then xiThe corresponding mark paste vector can be recorded as:
Figure BDA00028000139700000810
step S4.2: and extracting a credible mark set according to the fitting vector.
xiSet of trusted tokens
Figure BDA00028000139700000811
The fitting vector lambda of step S4.1iAnd a threshold Θ, as determined by the following equation (12):
Figure BDA00028000139700000812
xitrusted token set
Figure BDA00028000139700000813
The mark with the attaching degree larger than the threshold theta is formed, and in order to avoid the situation that the credible mark set is empty, the mark with the largest attaching degree is taken as the latter item of the formula and is also added into the credible mark set.
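Formulas (11) and (12) then reduce to a few lines; the default threshold Θ = 0.5 used here is an illustrative assumption, as the patent treats Θ as a tunable parameter:

```python
import numpy as np

def credible_labels(D_plus, D_minus, Y_cand, theta=0.5):
    """Formulas (11)-(12): TOPSIS-style fitness lambda = D^- / (D^+ + D^-),
    then keep the candidate labels whose fitness exceeds theta; the label with
    the largest fitness is always kept, so the credible set is never empty."""
    denom = D_plus + D_minus
    lam = np.divide(D_minus, denom, out=np.zeros_like(denom), where=denom > 0)  # (11)
    lam = np.where(Y_cand > 0, lam, -np.inf)   # non-candidates can never be chosen
    credible = lam > theta
    best = lam.argmax(axis=1)                  # the latter term of formula (12)
    credible[np.arange(lam.shape[0]), best] = True
    return credible
```

The returned boolean matrix marks, per example, which candidate labels enter the credible set Ŷ_i.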
Step S5: by completing the step S4, we implement denoising of the candidate mark set, so that we can use the existing multi-mark algorithm to predict marks for the unseen examples. The method comprises the following steps of establishing a new training set S by using the credible label set obtained in the step S4, and predicting and labeling unseen examples by using an ML-kNN algorithm:
step S5.1: a new training set is constructed.
Constructing a new training set by using the credible mark set obtained in the step S4.2
Figure BDA0002800013970000091
Step S5.2: in the new training set, k neighbors and statistics are determined for each example using ML-kNN
Figure BDA0002800013970000092
After the step S4.2 is completed, the work of denoising the candidate label set is basically realized, at the moment, the MLL algorithm is feasible to be applied to the new training set, and a mature ML-kNN algorithm is selected to complete the prediction work of the unseen examples by utilizing the new training set. Suppose that
Figure BDA0002800013970000093
Represents the new training set D*Example (ii) xiK nearest neighbor examples. Defining statistics:
Figure BDA0002800013970000094
wherein
Figure BDA00028000139700000912
Is an indication function, which takes a value of 1 when pi is true, and 0 otherwise.
Figure BDA0002800013970000095
Representing example x in the New training setiThe k neighbor examples of (a) include a label y in the label setjNumber of neighbor instances.
Step S5.3: ML-KNN Algorithm such as equation (14) using maximum a posteriori probability vs. unseen example xiAnd (3) predicting:
Figure BDA0002800013970000096
wherein
Figure BDA0002800013970000097
Whether x is an expression exampleiWith the mark yjThe event of (2). When b is 1, the expression has, and otherwise does not.
Figure BDA0002800013970000098
Representing example xiAmong the k neighbors of
Figure BDA0002800013970000099
One example has a label yjThe event of (2).
Wherein the prior probability in equation (14) above
Figure BDA00028000139700000910
And conditional probability
Figure BDA00028000139700000911
The prediction can be estimated in advance in the new training set.
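Finally, the ML-kNN prediction of formulas (13) and (14) can be sketched end to end. This is a generic, compact re-implementation of Zhang and Zhou's ML-kNN with Laplace smoothing (the smoothing constant s and the value of k are assumptions of the sketch); in practice an off-the-shelf implementation such as scikit-multilearn's `MLkNN` could be used instead.

```python
import numpy as np

def mlknn_fit_predict(X_train, Y_train, X_test, k=3, s=1.0):
    """Estimate the priors P(H_1^j) and conditionals P(E_C^j | H_b^j) with
    Laplace smoothing s, then predict each test example by MAP (formula (14))."""
    m, q = Y_train.shape
    prior1 = (s + Y_train.sum(axis=0)) / (s * 2 + m)   # P(H_1^j)
    prior0 = 1.0 - prior1

    def knn(x, X, exclude=None):
        d = np.linalg.norm(X - x, axis=1)
        order = np.argsort(d)
        if exclude is not None:
            order = order[order != exclude]
        return order[:k]

    # c1[j][C] / c0[j][C]: number of training examples with / without label j
    # whose k neighbors contain label j exactly C times (formula (13) statistic)
    c1 = np.zeros((q, k + 1)); c0 = np.zeros((q, k + 1))
    for i in range(m):
        C = Y_train[knn(X_train[i], X_train, exclude=i)].sum(axis=0).astype(int)
        for j in range(q):
            (c1 if Y_train[i, j] else c0)[j, C[j]] += 1
    cond1 = (s + c1) / (s * (k + 1) + c1.sum(axis=1, keepdims=True))
    cond0 = (s + c0) / (s * (k + 1) + c0.sum(axis=1, keepdims=True))

    preds = np.zeros((X_test.shape[0], q), dtype=int)
    for t, x in enumerate(X_test):
        C = Y_train[knn(x, X_train)].sum(axis=0).astype(int)   # formula (13)
        for j in range(q):
            preds[t, j] = int(prior1[j] * cond1[j, C[j]] > prior0[j] * cond0[j, C[j]])  # (14)
    return preds
```

Here `Y_train` would be the 0/1 matrix of the credible label sets Ŷ_i from step S4, so the classifier is trained on the denoised data.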

Claims (1)

1. A partial multi-label learning method based on best-worst distance, characterized by comprising the following steps:
s1, constructing a weighted similarity graph G = (V, E, W) for label propagation;
step S1 includes the following steps:
s1.1: for a given PML training set, enabling a weighted directed graph G to be instantiated based on a kNN algorithm, wherein G is (V, E, W);
s1.2: calculating the similarity between the example and the adjacent examples thereof by using the corrected cosine value of the included angle as a similarity matrix Wm×mA value assigned to each element of (1);
s2, obtaining a final confidence matrix through iterative propagation using the graph G = (V, E, W) obtained in step S1 and the candidate label information;
step S2 includes the following steps:
s2.1: the similarity matrix W obtained in step S1 is subjected to the following equationm×mNormalization:
Figure FDA0002800013960000011
s2.2: initializing a confidence matrix according to example candidate token set information:
Figure FDA0002800013960000015
s2.3: iteratively propagating an updated confidence matrix using the current confidence and initial confidence matrix information:
Figure FDA0002800013960000013
s2.4: normalizing the confidence matrix obtained by each iteration update in the step S2.3:
Figure FDA0002800013960000016
s2.5: judge whether the iteration termination condition is met, namely whether the preset maximum number of iterations has been reached or there is no significant difference between the confidence matrices obtained in two consecutive iterations; if so, terminate the iteration, otherwise increase the iteration count by one and continue with step S2.3;
step S3, defining the optimal and worst confidence vectors from the final confidence matrix, and calculating the optimal and worst distance matrices;
step S3 includes the following steps:
step S3.1: from the final confidence matrix F^* obtained after the iteration ends, define the optimal confidence vector F^+ and the worst confidence vector F^-;
step S3.2: based on the optimal and worst confidence vectors and the label confidence vector of each example itself, define the optimal distance matrix D^+ and the worst distance matrix D^-;
Step S4, calculating a degree of fitting for each candidate mark of each example by using the information of the optimal-bad distance matrix, so as to extract a credible mark from the candidate mark set;
step S4 includes the following steps:
step S4.1: calculate a fitness for each candidate label of each example to form a fitness vector;
step S4.2: extract a credible label set from the candidate label set according to the fitness vector;
step S5, with step S4 completed, denoising of the candidate label set is achieved, and an existing multi-label algorithm is used to predict labels for unseen examples;
step S5 includes the following steps:
step S5.1: construct the new training set D^* = { (x_i, Ŷ_i) | 1 ≤ i ≤ m };
Step S5.2: in the new training set, k neighbors and example x in the new training set are determined for each example using ML-kNNiThe k neighbor examples of (a) include a label y in the label setjThe number of neighbor instances of (c);
step S5.3: the ML-kNN algorithm predicts the unseen example x_i by maximum a posteriori probability.
CN202011346614.7A 2020-11-25 Partial multi-label learning method based on best-worst distance; Withdrawn; CN112465016A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011346614.7A CN112465016A (en) 2020-11-25 2020-11-25 Partial multi-label learning method based on best-worst distance


Publications (1)

Publication Number Publication Date
CN112465016A (en) 2021-03-09

Family

ID=74809682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011346614.7A Withdrawn CN112465016A (en) 2020-11-25 2020-11-25 Partial multi-label learning method based on best-worst distance

Country Status (1)

Country Link
CN (1) CN112465016A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298774A (en) * 2021-05-20 2021-08-24 Fudan University (复旦大学) Image segmentation method and device based on dual-condition compatible neural network
CN113298774B (en) * 2021-05-20 2022-10-18 Fudan University (复旦大学) Image segmentation method and device based on dual-condition compatible neural network
CN113379037A (en) * 2021-06-28 2021-09-10 Southeast University (东南大学) Multi-label learning method based on complementary label collaborative training
CN113379037B (en) * 2021-06-28 2023-11-10 Southeast University (东南大学) Partial multi-label learning method based on complementary-label collaborative training

Similar Documents

Publication Publication Date Title
CN111967294B (en) Unsupervised domain self-adaptive pedestrian re-identification method
CN107515895B (en) Visual target retrieval method and system based on target detection
CN113378632A (en) Unsupervised domain pedestrian re-identification algorithm based on pseudo label optimization
Pham et al. The random cluster model for robust geometric fitting
CN111126482B (en) Remote sensing image automatic classification method based on multi-classifier cascade model
CN113326731A (en) Cross-domain pedestrian re-identification algorithm based on momentum network guidance
CN111127364B (en) Image data enhancement strategy selection method and face recognition image data enhancement method
Chen et al. Hard sample mining makes person re-identification more efficient and accurate
CN110728694A (en) Long-term visual target tracking method based on continuous learning
CN112465016A (en) Partial multi-label learning method based on best-worst distance
CN116910571B (en) Open-domain adaptation method and system based on prototype comparison learning
CN113920472A (en) Unsupervised target re-identification method and system based on attention mechanism
CN111815582B (en) Two-dimensional code region detection method for improving background priori and foreground priori
CN113065409A (en) Unsupervised pedestrian re-identification method based on camera distribution difference alignment constraint
CN115470834A (en) Multi-label learning algorithm for correcting inaccurate labels of label confidence degree based on label propagation
CN113409335B (en) Image segmentation method based on strong and weak joint semi-supervised intuitive fuzzy clustering
CN114266321A (en) Weak supervision fuzzy clustering algorithm based on unconstrained prior information mode
CN113128410A (en) Weak supervision pedestrian re-identification method based on track association learning
CN116630694A (en) Target classification method and system for partial multi-label images and electronic equipment
Raj et al. Deep manifold clustering based optimal pseudo pose representation (dmc-oppr) for unsupervised person re-identification
CN111797903B (en) Multi-mode remote sensing image registration method based on data-driven particle swarm optimization
CN118037738B (en) Asphalt pavement crack pouring adhesive bonding performance detection method and equipment
CN116310463B (en) Remote sensing target classification method for unsupervised learning
Xu et al. Dynamic Hybrid Graph Matching for Unsupervised Video-based Person Re-identification
Sheng et al. An approach to detecting abnormal vehicle events in complex factors over highway surveillance video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210309)