CN109543731A - A triple-preference semi-supervised regression algorithm under a self-training framework - Google Patents

A triple-preference semi-supervised regression algorithm under a self-training framework

Info

Publication number
CN109543731A
CN109543731A (application CN201811330781.5A)
Authority
CN
China
Prior art keywords
labeled sample
sample
pseudo-label
unlabeled sample
set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811330781.5A
Other languages
Chinese (zh)
Inventor
熊伟丽
程康明
马君霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University
Priority to CN201811330781.5A
Publication of CN109543731A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Complex Calculations (AREA)

Abstract

A triple-preference semi-supervised regression algorithm under a self-training framework, relating to the technical field of semi-supervised regression algorithms. The invention screens the unlabeled samples and the labeled samples, builds a Gaussian process regression model, and predicts the label values of the unlabeled sample set with this model to obtain a pseudo-label sample set. A confidence criterion then selects the pseudo-label samples that satisfy the condition, and the confidence judgment further selects the highly credible pseudo-label samples, which update the labeled sample set and the unlabeled sample set. The screening of both sample sets, the Gaussian process regression modeling, the threshold update, the prediction of the unlabeled sample set and the confidence judgment are then repeated in a self-training loop until the set number of cycles is reached. The invention judges the confidence of pseudo-label samples and introduces a self-training framework to raise the utilization rate of unlabeled samples, thereby improving the prediction performance of the model after the unlabeled samples are used.

Description

A triple-preference semi-supervised regression algorithm under a self-training framework
Technical field
The present invention relates to the technical field of semi-supervised regression algorithms, and in particular to a triple-preference semi-supervised regression algorithm under a self-training framework.
Background technique
Some important quality variables in industrial processes such as chemical engineering, metallurgy and fermentation often cannot be measured by online instruments, and offline laboratory analysis suffers from serious lag. These important quality variables therefore need to be predicted from sample data that can be measured directly. With the development of science and technology, and of industrial big-data technology in particular, unlabeled samples have become easy to obtain in large quantities, while the cost of acquiring labeled samples remains high. As a result, labeled samples are scarce in some industrial processes, and traditional modeling methods cannot guarantee the prediction performance of a model when labeled samples are few.
To solve these problems, semi-supervised learning, which uses a small number of labeled samples together with a large number of unlabeled samples to improve learning performance, has received close attention. At present there is much research on semi-supervised clustering and semi-supervised classification, but little on semi-supervised regression. Common semi-supervised regression methods include manifold-learning-based semi-supervised regression, co-training algorithms, semi-supervised support vector regression, and selective ensemble algorithms.
However, when labeled samples are few, these methods cannot guarantee the quality with which unlabeled samples are exploited. To use unlabeled samples more fully and accurately, it is considered that a substantial portion of the unlabeled samples cannot be predicted accurately from the few labeled samples, and that outliers among the few labeled samples degrade the prediction of the unlabeled samples. On the basis of screening both the unlabeled and the labeled samples, a confidence criterion is defined to judge the confidence of pseudo-label samples, and a self-training framework is introduced to raise the utilization rate of unlabeled samples, thereby improving the prediction performance of the model after the unlabeled samples are used.
Summary of the invention
Aiming at the problems that labeled samples in industrial processes are few and costly to obtain, and that traditional semi-supervised learning cannot guarantee sufficiently accurate prediction of unlabeled samples, the invention proposes a triple-preference semi-supervised regression algorithm under a self-training framework.
The purpose of the present invention is achieved through the following technical solution:
A triple-preference semi-supervised regression algorithm under a self-training framework comprises the following steps:
Step 1: screen the unlabeled samples and the labeled samples, build a Gaussian process regression model f1 from the labeled samples retained by the screening, and predict the label values of the unlabeled sample set M1 with the model to obtain the pseudo-label sample set S1.
Unlabeled-sample screening: given a threshold θ1, measure with the Mahalanobis distance the similarity di between an unlabeled sample x'i and the center C of the dense region of labeled samples; if the distance between x'i and C is less than θ1, then x'i satisfies the preference condition.
Labeled-sample screening: given a threshold θ2, measure with the Mahalanobis distance the similarity d(xi, xj) between samples, and count the number m of surrounding samples xj whose Mahalanobis distance from xi is less than θ2; if m is greater than or equal to 2, xi satisfies the preference condition.
Step 2: judge the confidence of the pseudo-label samples with the confidence criterion (a model is built from the labeled sample set and used to predict the label values of the unlabeled sample set; the resulting set is called the pseudo-label sample set), and select the credible pseudo-label sample set S2 that satisfies the condition.
Step 3: further select the highly credible samples by the confidence judgment.
The confidence criterion is as follows: given a threshold θ3, judge the influence on the model's prediction performance of adding each pseudo-label sample to the modeling process; if the model's prediction variance var on the test samples is less than the threshold θ3, the pseudo-label sample is credible and is used to update the labeled sample set.
Step 4: update the labeled sample set and the unlabeled sample set with the credible pseudo-label sample set S2, then screen the unlabeled and labeled samples again, rebuild the Gaussian process regression model, update the threshold θ3, predict the unlabeled sample set to obtain a pseudo-label sample set, and judge its confidence, thereby entering the self-training loop until the set number of cycles P is reached.
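Under stated assumptions, the four steps above can be sketched as the following self-training loop. The Gaussian process regression is implemented directly from the standard predictive equations (an isotropic Gaussian kernel with fixed hyperparameters is assumed for brevity), the confidence judgment is reduced to a simple predictive-variance threshold, and the Mahalanobis screening steps are omitted; this is an illustrative sketch, not the patent's exact procedure:

```python
import numpy as np

def rbf(A, B, v=1.0, w=1.0):
    # Gaussian covariance (cf. formula (17)), isotropic for simplicity
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return v * np.exp(-0.5 * w * d2)

def gpr_fit_predict(X, y, Xs, noise=1e-4):
    # GPR predictive mean and variance (cf. formulas (15)-(16))
    C = rbf(X, X) + noise * np.eye(len(X))
    Ci = np.linalg.inv(C)
    ks = rbf(Xs, X)                                   # c(x*) for each x*
    mean = ks @ Ci @ y
    var = rbf(Xs, Xs).diagonal() - np.einsum('ij,jk,ik->i', ks, Ci, ks)
    return mean, var

def self_train(LX, Ly, UX, P=3, theta3=0.05):
    # Steps 1-4: predict pseudo-labels, keep low-variance ones, update sets
    for _ in range(P):
        if len(UX) == 0:
            break
        mu, var = gpr_fit_predict(LX, Ly, UX)
        keep = var < theta3            # variance-based confidence judgment
        if not keep.any():
            break
        LX = np.vstack([LX, UX[keep]])
        Ly = np.concatenate([Ly, mu[keep]])
        UX = UX[~keep]
    return LX, Ly, UX
```

Note that `theta3` is held fixed across cycles here for simplicity, whereas the patent recomputes the threshold in every cycle.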
To use unlabeled samples more fully and accurately, the invention considers that a substantial portion of the unlabeled samples cannot be predicted accurately from the few labeled samples, and that outliers among the few labeled samples degrade the prediction of the unlabeled samples. On the basis of screening both the unlabeled and the labeled samples, a confidence criterion is defined that judges the confidence of pseudo-label samples, and a self-training framework is introduced to raise the utilization rate of unlabeled samples, thereby improving the prediction performance of the model after the unlabeled samples are used.
By screening the unlabeled and labeled samples, the invention guarantees the similarity of the two classes of data and improves the accuracy of the unlabeled-sample predictions. The prediction confidence of the unlabeled samples is then judged, and the highly credible samples are selected to update the initial labeled sample set. Finally, on the basis of the prediction accuracy guaranteed by the three preference steps above, and in order to further improve the adequacy with which unlabeled samples are used, a self-training framework is introduced; repeated cycles of screening raise the utilization rate of the unlabeled samples.
Brief description of the drawings
Fig. 1 is the overall algorithm flowchart.
Fig. 2 shows histograms of the labeled samples and the unlabeled samples.
Fig. 3 is a longitudinal comparison of the different methods.
Fig. 4 compares the prediction errors of the different methods.
Fig. 5 shows histograms of the predicted and true values for the various methods.
Specific embodiment
The present invention is further described below with reference to Fig. 1:
A triple-preference semi-supervised regression algorithm under a self-training framework comprises the following steps:
Step 1: screen the unlabeled samples and the labeled samples, build a Gaussian process regression model f1 from the labeled samples retained by the screening, and predict the label values of the unlabeled sample set M1 with the model to obtain the pseudo-label sample set S1.
Unlabeled-sample screening: given a threshold θ1, measure with the Mahalanobis distance the similarity di between an unlabeled sample x'i and the center C of the dense region of labeled samples; if the distance between x'i and C is less than θ1, then x'i satisfies the preference condition.
Labeled-sample screening: given a threshold θ2, measure with the Mahalanobis distance the similarity d(xi, xj) between samples, and count the number m of surrounding samples xj whose Mahalanobis distance from xi is less than θ2; if m is greater than or equal to 2, xi satisfies the preference condition.
Step 2: judge the confidence of the pseudo-label samples with the confidence criterion (a model is built from the labeled sample set and used to predict the label values of the unlabeled sample set; the resulting set is called the pseudo-label sample set), and select the credible pseudo-label sample set S2 that satisfies the condition.
Step 3: further select the highly credible samples by the confidence judgment.
The confidence criterion is as follows: given a threshold θ3, judge the influence on the model's prediction performance of adding each pseudo-label sample to the modeling process; if the model's prediction variance var on the test samples is less than the threshold θ3, the pseudo-label sample is credible and is used to update the labeled sample set.
Step 4: update the labeled sample set and the unlabeled sample set with the credible pseudo-label sample set S2, then screen the unlabeled and labeled samples again, rebuild the Gaussian process regression model, update the threshold θ3, predict the unlabeled sample set to obtain a pseudo-label sample set, and judge its confidence, thereby entering the self-training loop until the set number of cycles P is reached.
Take a common chemical process, the debutanizer, as an example. The experimental data were sampled from the actual process, and the butane concentration is predicted.
Step 1: screen the unlabeled samples and the labeled samples, build a Gaussian process regression (GPR) model f1 from the labeled samples retained by the screening, and predict the label values of the unlabeled sample set M1 with the model to obtain the pseudo-label sample set S1.
Unlabeled-sample screening: given a threshold θ1, measure with the Mahalanobis distance the similarity di between an unlabeled sample x'i and the center C of the dense region of labeled samples; if the distance between x'i and C is less than θ1, then x'i satisfies the preference condition. Here di is obtained from formulas (1)-(3):
di = sqrt[(x'i − C)′ S⁻¹ (x'i − C)]   (1)
where S is the covariance matrix of the unlabeled samples, n is the number of unlabeled samples, x̄ is the mean of the unlabeled samples (formulas (2) and (3) define S and x̄ in the usual way), and C is the average of the samples in the dense region of labeled samples.
Labeled-sample screening: given a threshold θ2, measure with the Mahalanobis distance the similarity d(xi, xj) between samples, and count the number m of surrounding samples xj whose Mahalanobis distance from xi is less than θ2; if m is not less than 2, xi satisfies the preference condition. Here d(xi, xj) is obtained from formulas (4)-(6):
d(xi, xj) = sqrt[(xi − xj)′ S⁻¹ (xi − xj)]   (4)
where S is the covariance matrix of the labeled samples, n is the number of labeled samples, and x̄ is the mean of the labeled samples (formulas (5) and (6) define S and x̄ in the usual way).
The Mahalanobis distance, proposed by the Indian statistician P. C. Mahalanobis, expresses the covariance-weighted distance between data; it is an effective method for computing the similarity of two unknown sample sets.
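A minimal sketch of the two Mahalanobis-distance screening rules above, assuming the covariance matrices, the dense-region center C, and the thresholds are supplied by the caller:

```python
import numpy as np

def mahalanobis(u, v, S_inv):
    # Eq. (1)/(4): sqrt[(u - v)' S^-1 (u - v)]
    d = u - v
    return float(np.sqrt(d @ S_inv @ d))

def screen_unlabeled(U, C, S_unlabeled, theta1):
    # keep x'_i whose Mahalanobis distance to the dense-region center C
    # is below theta1 (S is the unlabeled-sample covariance matrix)
    S_inv = np.linalg.inv(S_unlabeled)
    return [x for x in U if mahalanobis(x, C, S_inv) < theta1]

def screen_labeled(L, S_labeled, theta2):
    # keep x_i having at least m >= 2 neighbors within distance theta2
    S_inv = np.linalg.inv(S_labeled)
    kept = []
    for i, xi in enumerate(L):
        m = sum(1 for j, xj in enumerate(L)
                if j != i and mahalanobis(xi, xj, S_inv) < theta2)
        if m >= 2:
            kept.append(xi)
    return kept
```

With an identity covariance matrix the Mahalanobis distance reduces to the Euclidean distance, which makes the thresholds easy to sanity-check.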
Step 2: judge the confidence of the pseudo-label samples with the confidence criterion (a model is built from the labeled sample set and used to predict the label values of the unlabeled sample set; the resulting set is called the pseudo-label sample set), and select the credible pseudo-label sample set S2 that satisfies the condition.
Step 3: further select the highly credible samples by the confidence judgment.
The confidence criterion is as follows: given a threshold θ3, judge the influence on the model's prediction performance of adding each pseudo-label sample to the modeling process; if the model's prediction variance var on the test samples is less than the threshold θ3, the pseudo-label sample is credible and can be used to update the labeled sample set. Here θ3 and var are obtained from formulas (7)-(13).
f_gpr = gprtrain(B1, A1)   (7)
y_predict = f_gpr(x_test)   (8)
(B′1, A′1) = (B1, A1) + S1(j, :)   (10)
f′_gpr = gprtrain(B′1, A′1)   (11)
In the formulas, B1 and A1 are respectively the auxiliary variables and label values of the labeled sample set, x_test and y_test are respectively the auxiliary variables and label values of the test sample set, y_predict is the model's prediction for x_test, B′1 and A′1 are respectively the auxiliary variables and label values of the labeled sample set after the pseudo-label sample is added, S1(j, :) denotes the j-th pseudo-label sample of the pseudo-label sample set S1, and y′_predict is the prediction for x_test from the updated labeled sample set.
The pseudo-label-sample confidence filtering algorithm is as follows:
Input: pseudo-label sample set S1, threshold θ3, the selected labeled sample set [B1, A1];
1): initialization: set i to 1 and the credible pseudo-label sample set S2 to the empty set;
2): compute θ3 from formulas (7), (8), (9);
3): take sample xi from the pseudo-label sample set S1;
4): compute var from formulas (10), (11), (12), (13);
5): check whether var < θ3; if satisfied, go to step 6), otherwise go to step 7);
6): store xi in the credible pseudo-label sample set S2;
7): i = i + 1; if all samples of the pseudo-label sample set S1 have been taken, exit the loop; otherwise, return to step 2);
Output: the credible pseudo-label sample set S2.
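A sketch of this filtering algorithm under stated assumptions: an ordinary least-squares linear model stands in for the GPR trainer `gprtrain` purely to keep the sketch short, and, since formulas (9), (12) and (13) are not reproduced in the text, both θ3 and var are taken here to be the mean squared prediction error on the test set — an assumption, not the patent's exact definition:

```python
import numpy as np

def mse_after_training(train_X, train_y, test_X, test_y):
    # Stand-in for gprtrain + test prediction error (Eqs. (7)-(8)/(11)-(12));
    # a plain least-squares linear model replaces the GPR for brevity.
    w, *_ = np.linalg.lstsq(train_X, train_y, rcond=None)
    return float(np.mean((test_X @ w - test_y) ** 2))

def filter_pseudo_labels(B1, A1, S1_X, S1_y, test_X, test_y):
    """Confidence filtering (steps 1-7): keep pseudo-labeled samples whose
    addition keeps the test error below the baseline threshold theta3."""
    theta3 = mse_after_training(B1, A1, test_X, test_y)   # cf. Eqs. (7)-(9)
    S2_X, S2_y = [], []
    for xj, yj in zip(S1_X, S1_y):                        # step 3
        Bp = np.vstack([B1, xj])                          # cf. Eq. (10)
        Ap = np.append(A1, yj)
        var = mse_after_training(Bp, Ap, test_X, test_y)  # cf. Eqs. (11)-(13)
        if var < theta3:                                  # credible: step 6
            S2_X.append(xj)
            S2_y.append(yj)
    return np.array(S2_X), np.array(S2_y)
```

A pseudo-label consistent with the underlying trend leaves the test error at or below the baseline and is kept; a badly labeled sample inflates the test error and is rejected.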
Step 4: update the labeled sample set and the unlabeled sample set with the credible pseudo-label sample set S2, then screen the unlabeled and labeled samples again and model with Gaussian process regression, update the threshold θ3, predict the unlabeled sample set to obtain a pseudo-label sample set, and judge its confidence, thereby entering the self-training loop until the set number of cycles P is reached.
The Gaussian process regression model is a nonparametric probabilistic model based on statistical learning theory; modeling with Gaussian process regression proceeds as follows:
Given a training sample set X ∈ R^(D×N) and y ∈ R^N, where X = {xi ∈ R^D}, i = 1, …, N and y = {yi ∈ R}, i = 1, …, N represent the D-dimensional input data and the output data respectively, the relationship between input and output is generated by formula (14):
y = f(x) + ε   (14)
where f is an unknown functional form and ε is Gaussian noise with mean 0 and variance σn². For a new input x*, the corresponding probabilistic prediction output y* also follows a Gaussian distribution, with mean and variance given by formulas (15) and (16):
y*(x*) = c^T(x*) C⁻¹ y   (15)
σ²_y*(x*) = c(x*, x*) − c^T(x*) C⁻¹ c(x*)   (16)
where c(x*) = [c(x*, x1), …, c(x*, xn)]^T is the covariance vector between the test datum and the training data, C = Σ + σn² I is the covariance matrix of the training data, I is the N × N identity matrix, and c(x*, x*) is the auto-covariance of the test datum.
The Gaussian process regression model can choose different covariance functions c(xi, xj) to generate the covariance matrix Σ, as long as the chosen covariance function guarantees that the generated covariance matrix is non-negative definite. The Gaussian covariance function is selected here:
c(xi, xj) = v exp(−(1/2) Σ_{d=1..D} ωd (xi^d − xj^d)²)   (17)
where v controls the overall magnitude of the covariance and ωd represents the relative importance of each component x^d.
For the unknown parameters v, ω1, …, ωD and the Gaussian noise variance σn² in formula (17), the simplest common estimation approach is to obtain the parameter vector θ by maximum-likelihood estimation.
To obtain the value of the parameter θ, it is first initialized with random values over ranges of different magnitudes, 0.001, 0.01, 0.1, 1, 10 and so on, one random value per range, and the optimized parameters are then found by the conjugate gradient method. With the optimal parameter θ, the output of the GPR model for a test sample x* can be estimated from formulas (15) and (16).
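As a minimal sketch of this hyperparameter selection, the coarse magnitude grid described above can be searched by maximizing the log marginal likelihood under the Gaussian covariance of formula (17). A single isotropic ω is assumed here for brevity, a grid maximum replaces the random draw per range, and the conjugate-gradient refinement step is omitted:

```python
import numpy as np
from itertools import product

def log_marginal_likelihood(X, y, v, w, noise):
    # log p(y | X, theta) under the Gaussian covariance of Eq. (17),
    # with C = Sigma + noise * I as in Eqs. (15)-(16)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    C = v * np.exp(-0.5 * w * d2) + noise * np.eye(len(X))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (y @ np.linalg.solve(C, y) + logdet
                   + len(y) * np.log(2.0 * np.pi))

def pick_initial_theta(X, y, grid=(0.001, 0.01, 0.1, 1.0, 10.0)):
    # Coarse search over the magnitude grid named in the text; the patent
    # then refines the winner by conjugate gradients (omitted here).
    best = max(product(grid, repeat=3),
               key=lambda t: log_marginal_likelihood(X, y, *t))
    return dict(zip(("v", "w", "noise"), best))
```

The grid search is cheap (5³ likelihood evaluations) and only fixes the order of magnitude; a gradient-based optimizer is still needed for the final values.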
Fig. 2(a) is the histogram of the labeled samples and Fig. 2(b) the histogram of the unlabeled samples; they confirm theoretically that when the labeled samples are few, the amount of information they contain is insufficient to express the whole process. Figs. 3, 4 and 5 compare the tracking performance of the various methods from different angles and demonstrate experimentally the superiority of the proposed algorithm.

Claims (7)

1. A triple-preference semi-supervised regression algorithm under a self-training framework, characterized by comprising the following steps:
Step 1: screen the unlabeled samples and the labeled samples, build a Gaussian process regression model f1 from the labeled samples retained by the screening, and predict the label values of the unlabeled sample set M1 with the model to obtain the pseudo-label sample set S1;
unlabeled-sample screening: given a threshold θ1, measure with the Mahalanobis distance the similarity di between an unlabeled sample x'i and the center C of the dense region of labeled samples; if the distance between x'i and C is less than θ1, then x'i satisfies the preference condition;
labeled-sample screening: given a threshold θ2, measure with the Mahalanobis distance the similarity d(xi, xj) between samples, and count the number m of surrounding samples xj whose Mahalanobis distance from xi is less than θ2; if m is greater than or equal to 2, xi satisfies the preference condition;
Step 2: judge the confidence of the pseudo-label samples with the confidence criterion, and select the credible pseudo-label sample set S2 that satisfies the condition;
Step 3: further select the highly credible samples by the confidence judgment;
the confidence criterion being as follows: given a threshold θ3, judge the influence on the model's prediction performance of adding each pseudo-label sample to the modeling process; if the model's prediction variance var on the test samples is less than the threshold θ3, the pseudo-label sample is credible and is used to update the labeled sample set;
Step 4: update the labeled sample set and the unlabeled sample set with the credible pseudo-label sample set S2, then screen the unlabeled and labeled samples again and build the Gaussian process regression model, update the threshold θ3, predict the unlabeled sample set to obtain a pseudo-label sample set, and judge its confidence, thereby entering the self-training loop until the set number of cycles P is reached.
2. The triple-preference semi-supervised regression algorithm under a self-training framework according to claim 1, characterized in that the algorithm for screening the unlabeled samples in step 1 is: give a threshold θ1 and measure with the Mahalanobis distance the similarity di between an unlabeled sample x'i and the center C of the dense region of labeled samples; if the distance between x'i and C is less than θ1, then x'i satisfies the preference condition; where di is obtained from formulas (1)-(3):
di = sqrt[(x'i − C)′ S⁻¹ (x'i − C)]   (1)
where S is the covariance matrix of the unlabeled samples, n is the number of unlabeled samples, x̄ is the mean of the unlabeled samples, and C is the average of the samples in the dense region of labeled samples.
3. The triple-preference semi-supervised regression algorithm under a self-training framework according to claim 1, characterized in that the algorithm for screening the labeled samples in step 1 is as follows: give a threshold θ2, measure with the Mahalanobis distance the similarity d(xi, xj) between samples, and count the number m of surrounding samples xj whose Mahalanobis distance from xi is less than θ2; if m is greater than or equal to 2, xi satisfies the preference condition; where d(xi, xj) is obtained from formulas (4)-(6):
d(xi, xj) = sqrt[(xi − xj)′ S⁻¹ (xi − xj)]   (4)
where S is the covariance matrix of the labeled samples, n is the number of labeled samples, and x̄ is the mean of the labeled samples.
4. The triple-preference semi-supervised regression algorithm under a self-training framework according to claim 1, characterized in that the confidence-criterion algorithm in step 3 is as follows: give a threshold θ3 and judge the influence on the model's prediction performance of adding each pseudo-label sample to the modeling process; if the model's prediction variance var on the test samples is less than the threshold θ3, the pseudo-label sample is credible and can be used to update the labeled sample set; where θ3 and var are obtained from formulas (7)-(13):
f_gpr = gprtrain(B1, A1)   (7)
y_predict = f_gpr(x_test)   (8)
(B′1, A′1) = (B1, A1) + S1(j, :)   (10)
f′_gpr = gprtrain(B′1, A′1)   (11)
in the formulas, B1 and A1 are respectively the auxiliary variables and label values of the labeled sample set, x_test and y_test are respectively the auxiliary variables and label values of the test sample set, y_predict is the model's prediction for x_test, B′1 and A′1 are respectively the auxiliary variables and label values of the labeled sample set after the pseudo-label sample is added, S1(j, :) denotes the j-th pseudo-label sample of the pseudo-label sample set S1, and y′_predict is the prediction for x_test from the updated labeled sample set.
5. The triple-preference semi-supervised regression algorithm under a self-training framework according to claim 4, characterized in that the algorithm for obtaining the credible pseudo-label sample set S2 in step 4 is as follows:
Input: pseudo-label sample set S1, threshold θ3, the selected labeled sample set [B1, A1];
1): initialization: set i to 1 and the credible pseudo-label sample set S2 to the empty set;
2): compute θ3 from formulas (7), (8), (9);
3): take sample xi from the pseudo-label sample set S1;
4): compute var from formulas (10), (11), (12), (13);
5): check whether var < θ3; if satisfied, go to step 6), otherwise go to step 7);
6): store xi in the credible pseudo-label sample set S2;
7): i = i + 1; if all samples of the pseudo-label sample set S1 have been taken, exit the loop; otherwise, return to step 2);
Output: the credible pseudo-label sample set S2.
6. The triple-preference semi-supervised regression algorithm under a self-training framework according to claim 1, characterized in that modeling with the Gaussian process regression model proceeds as follows:
given a training sample set X ∈ R^(D×N) and y ∈ R^N, where X = {xi ∈ R^D}, i = 1, …, N and y = {yi ∈ R}, i = 1, …, N represent the D-dimensional input data and the output data respectively, the relationship between input and output is generated by formula (14):
y = f(x) + ε   (14)
where f is an unknown functional form and ε is Gaussian noise with mean 0 and variance σn²; for a new input x*, the corresponding probabilistic prediction output y* also follows a Gaussian distribution, with mean and variance given by formulas (15) and (16):
y*(x*) = c^T(x*) C⁻¹ y   (15)
σ²_y*(x*) = c(x*, x*) − c^T(x*) C⁻¹ c(x*)   (16)
where c(x*) = [c(x*, x1), …, c(x*, xn)]^T is the covariance vector between the test datum and the training data, C = Σ + σn² I is the covariance matrix of the training data, I is the N × N identity matrix, and c(x*, x*) is the auto-covariance of the test datum.
7. The triple-preference semi-supervised regression algorithm under a self-training framework according to claim 6, characterized in that the covariance function c(xi, xj) used to generate the covariance matrix is:
c(xi, xj) = v exp(−(1/2) Σ_{d=1..D} ωd (xi^d − xj^d)²)   (17)
where v controls the overall magnitude of the covariance and ωd represents the relative importance of each component x^d;
the unknown parameters v, ω1, …, ωD and the Gaussian noise variance σn² in formula (17) are estimated, the parameter vector θ being obtained by maximum-likelihood estimation;
the parameter θ is first initialized with random values over ranges of different magnitudes, 0.001, 0.01, 0.1, 1 and 10, one random value per range, and the optimized parameters are then found by the conjugate gradient method; with the optimal parameter θ, the output of the GPR model for a test sample x* can be estimated from formulas (15) and (16).
CN201811330781.5A 2018-11-09 2018-11-09 A triple-preference semi-supervised regression algorithm under a self-training framework Pending CN109543731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811330781.5A CN109543731A (en) 2018-11-09 2018-11-09 A triple-preference semi-supervised regression algorithm under a self-training framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811330781.5A CN109543731A (en) 2018-11-09 2018-11-09 A triple-preference semi-supervised regression algorithm under a self-training framework

Publications (1)

Publication Number Publication Date
CN109543731A true CN109543731A (en) 2019-03-29

Family

ID=65846674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811330781.5A Pending CN109543731A (en) 2018-11-09 2018-11-09 A triple-preference semi-supervised regression algorithm under a self-training framework

Country Status (1)

Country Link
CN (1) CN109543731A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533251A (en) * 2019-09-03 2019-12-03 北京天泽智云科技有限公司 Promote the method and device of predictive maintenance model adaptability
CN111161848A (en) * 2019-10-31 2020-05-15 杭州深睿博联科技有限公司 Method and device for marking focus of CT image and storage medium
CN111161848B (en) * 2019-10-31 2023-08-29 杭州深睿博联科技有限公司 Focus marking method and device for CT image and storage medium
WO2022141094A1 (en) * 2020-12-29 2022-07-07 深圳市大疆创新科技有限公司 Model generation method and apparatus, image processing method and apparatus, and readable storage medium
CN112749841A (en) * 2020-12-30 2021-05-04 科大国创云网科技有限公司 User public praise prediction method and system based on self-training learning
CN112581472A (en) * 2021-01-26 2021-03-30 中国人民解放军国防科技大学 Target surface defect detection method facing human-computer interaction
CN113158554A (en) * 2021-03-25 2021-07-23 腾讯科技(深圳)有限公司 Model optimization method and device, computer equipment and storage medium
CN113158554B (en) * 2021-03-25 2023-02-14 腾讯科技(深圳)有限公司 Model optimization method and device, computer equipment and storage medium
CN113065609A (en) * 2021-04-22 2021-07-02 平安国际智慧城市科技股份有限公司 Image classification method and device, electronic equipment and readable storage medium
CN113065609B (en) * 2021-04-22 2024-04-09 深圳赛安特技术服务有限公司 Image classification method, device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN109543731A (en) A triple-preference semi-supervised regression algorithm under a self-training framework
Zhu et al. Carbon price forecasting with a hybrid Arima and least squares support vector machines methodology
CN109063416B (en) Gene expression prediction technique based on LSTM Recognition with Recurrent Neural Network
CN107862173A (en) A kind of lead compound virtual screening method and device
Hung et al. Long-term business cycle forecasting through a potential intuitionistic fuzzy least-squares support vector regression approach
Li et al. Vessel traffic flow forecasting by RSVR with chaotic cloud simulated annealing genetic algorithm and KPCA
CN107451102A (en) A kind of semi-supervised Gaussian process for improving self-training algorithm returns soft-measuring modeling method
CN108764295B (en) Method for predicting concentration of butane at bottom of debutanizer tower based on soft measurement modeling of semi-supervised ensemble learning
Ringle et al. Finite mixture and genetic algorithm segmentation in partial least squares path modeling: identification of multiple segments in complex path models
CN110309871A (en) A kind of semi-supervised learning image classification method based on random resampling
CN111916148B (en) Method for predicting protein interaction
CN110837921A (en) Real estate price prediction research method based on gradient lifting decision tree mixed model
CN107016416B (en) Data classification prediction method based on neighborhood rough set and PCA fusion
Colonnese et al. Protein-protein interaction prediction via graph signal processing
Li et al. Weak edge identification network for ocean front detection
CN108734207A (en) A kind of model prediction method based on double preferred Semi-Supervised Regression algorithms
CN106056146B (en) The visual tracking method that logic-based returns
CN109543922A (en) Prediction technique is also measured for there is stake to share borrowing at times for bicycle website group
CN109615002A (en) Decision tree SVM university student's consumer behavior evaluation method based on PSO
Cai et al. EST-NAS: An evolutionary strategy with gradient descent for neural architecture search
Fan et al. An improved quantum clustering algorithm with weighted distance based on PSO and research on the prediction of electrical power demand
Ju et al. Hydrologic simulations with artificial neural networks
CN107886126B (en) Aerial engine air passage parameter prediction method and system based on dynamic integrity algorithm
Li et al. A data-driven rutting depth short-time prediction model with metaheuristic optimization for asphalt pavements based on RIOHTrack
Zhang et al. Applicability evaluation of different algorithms for daily reference evapotranspiration model in KBE system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190329