CN107368842A - A training method and device - Google Patents

A training method and device

Info

Publication number
CN107368842A
CN107368842A CN201610313453.9A
Authority
CN
China
Prior art keywords
sample
training
current
training sample
current training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610313453.9A
Other languages
Chinese (zh)
Inventor
Yu Hui (余慧)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ingenic Semiconductor Co Ltd
Original Assignee
Beijing Ingenic Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ingenic Semiconductor Co Ltd filed Critical Beijing Ingenic Semiconductor Co Ltd
Priority to CN201610313453.9A priority Critical patent/CN107368842A/en
Publication of CN107368842A publication Critical patent/CN107368842A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the present invention provides a training method and device for reducing the false detection rate of the prior art and improving detection efficiency. The method includes: operating on the samples in a sample library to obtain current training samples; training each level of weak classifier with the current training samples to finally obtain one level of strong classifier; and adjusting the current training samples to serve as the training samples required when the next level of strong classifier is trained. It can be seen that the method provided by the embodiment of the present invention can reduce the false detection rate of the prior art and improve training efficiency.

Description

A training method and device
Technical field
The present invention relates to the field of communication electronics, and in particular to a training method and device.
Background technology
At present, whether faces or non-faces are being identified, classifiers and a cascade classifier are generally obtained by training and detecting, level by level, on the positive samples and negative samples in a sample library. The process is as follows:
A positive sample is a picture that contains the object to be detected, and its size is consistent with the training window.
A negative sample is taken from an arbitrary image that does not contain the object to be detected, so the picture size should be larger than the training window; during actual training, negative samples also need to be uniformly scaled to the training window size.
The process of generating a classifier is:
Step 11: select a sample library containing positive samples and negative samples, and divide all samples into two parts, training samples and test samples;
Step 12: run the classifier algorithm on the training samples to generate a classifier;
Step 13: run the classifier on the test samples to generate prediction results;
Step 14: compute the necessary evaluation metrics from the prediction results and assess the performance of the classifier.
By iterating the above classifier-generation process, a cascade classifier, i.e. a strong classifier, can be obtained:
Different classifiers (weak classifiers) are trained on the same training set, and these weak classifiers are then assembled into a stronger final classifier (a strong classifier). Using a cascade classifier involves two stages: training and detection. A cascade classifier is obtained through the training process, and images are then detected with it: the image under test passes through each level of weak classifier in turn, so most regions, namely the negative samples that would otherwise be falsely detected, are already excluded in the first few levels of detection, and the regions that pass every level of weak classifier are the final target regions.
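This early-rejection logic can be sketched as follows (a minimal illustration, assuming each level is represented as a scoring function with an acceptance threshold; the function and parameter names are hypothetical, not taken from the patent):

```python
def cascade_detect(window, levels):
    """Pass one candidate window through the cascade.

    `levels` is a list of (score_fn, threshold) pairs, one per level;
    score_fn(window) returns a real-valued confidence.
    """
    for score_fn, threshold in levels:
        if score_fn(window) < threshold:
            return False  # rejected early: most negative regions stop here
    return True  # passed every level: a final target region
```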
At present, cascade classifier training obtains negative samples by moving over the same picture with a certain step length and cropping patches of the same size as the positive samples, then scaling the picture appropriately and cropping again, and only moving on to the next picture once the picture has become smaller than the positive sample size.
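A minimal sketch of this prior-art scheme, assuming OpenCV is available for the rescaling and using illustrative step and scale values that the text does not specify:

```python
import cv2


def prior_art_negatives(image, win, step=8, scale=0.8):
    """Slide a window of the positive-sample size win = (w, h) over `image`
    with stride `step`, shrink the image by `scale`, and repeat until the
    image is smaller than the window (the scheme criticised below)."""
    crops = []
    while image.shape[1] >= win[0] and image.shape[0] >= win[1]:
        for y in range(0, image.shape[0] - win[1] + 1, step):
            for x in range(0, image.shape[1] - win[0] + 1, step):
                crops.append(image[y:y + win[1], x:x + win[0]].copy())
        image = cv2.resize(image, (int(image.shape[1] * scale),
                                   int(image.shape[0] * scale)))
    return crops
```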
The inventors discovered that the prior art has the following problems:
First, current cascade classifier training keeps scaling and cropping the same original picture, cropping and predicting repeatedly even when accurate classification is already possible, which makes training slow;
Second, current cascade classifier training operates excessively on the same picture, so other environments are never involved, which reduces the diversity of the negative samples used in training;
Third, current cascade classifier training is not well targeted: situations that easily cause false detections are not fully taken into account, so the trained classifier produces many false detections;
Fourth, in current cascade classifier training, the only requirement on a negative sample is that it not contain the target image, without any specific consideration of actual conditions.
Summary of the invention
The embodiment of the present invention provides a training method and device for reducing the false detection rate of the prior art and improving detection efficiency.
A training method, the method including:
operating on the samples in a sample library to obtain current training samples;
training the current classifier with the current training samples;
detecting the current training samples with the current classifier, adjusting the current training samples according to the detection results, and taking the adjusted training samples as the training samples for the next round of training;
repeating the above process to obtain N levels of strong classifiers, which are combined into the final cascade classifier.
A training device, the device including:
an operating unit, configured to operate on the samples in a sample library to obtain current training samples;
a training unit, configured to train the current classifier with the current training samples;
an adjustment unit, configured to detect the current training samples with the current classifier, adjust the current training samples according to the detection results, and take the adjusted training samples as the training samples for the next round of training, the above process being repeated to obtain N levels of strong classifiers that are combined into the final cascade classifier.
It can be seen that, after the N levels of classifiers obtained by training are combined into the required cascade classifier, the method provided by the embodiment of the present invention can reduce the false detection rate of the prior art and improve detection efficiency.
Brief description of the drawings
In order to explain the technical solutions of the present invention or of the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below.
Fig. 1 is a flowchart of a prior-art method for generating a classifier;
Fig. 2 is a flowchart of a training method provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of a training device provided by an embodiment of the present invention.
Embodiment
In order that those skilled in the art may better understand the technical solutions in the embodiments of the present invention, and to make the above objects, features and advantages of the present invention more apparent and comprehensible, the technical solutions of the present invention are described in further detail below with reference to the accompanying drawings.
The embodiment of the present invention provides a training method that can reduce the false detection rate of the prior art and improve detection efficiency. The detailed process of the training method includes:
Step 1: operate on the samples in a sample library to obtain current training samples;
Step 2: train the current classifier with the current training samples;
Step 3: detect the current training samples with the current classifier, adjust the current training samples according to the detection results, and take the adjusted training samples as the training samples for the next round of training;
the above process is repeated to obtain N levels of strong classifiers, which are combined into the final cascade classifier.
Operating on the samples in the sample library includes:
selecting a specified number of positive samples and negative samples in a predetermined manner, and performing random crop operations on the negative samples according to the size of the positive samples.
Obtaining current training samples includes:
taking the selected number of positive samples and the negative samples on which the crop operation has been completed as the current training samples.
Training the current classifier with the current training samples includes:
increasing the weight of negative samples that are detected incorrectly, and taking the negative samples with large weights as the current training samples required during training.
The method also includes reducing the false alarm rate.
A specific embodiment is described below:
The embodiment of the present invention provides a training method. The method includes training on prepared training samples to obtain a corresponding classifier, then detecting detection samples with that classifier to judge its detection accuracy. The method can be used to identify faces, or to identify any objects for which a sample library of positive and negative samples can be built. The sample library in the method includes positive samples and negative samples, where:
positive samples should be as uniform as possible (with little variation in angle); samples with too much diversity are unsuitable as positive samples, lest they disturb the positives' inherent uniformity. Since a unified size is required in subsequent detection and training, the positive samples are arranged to the required size during the preparation stage, to prevent the deformation and distortion that later operations would otherwise introduce;
to match the actual scenes to be used later, pictures of scenes in which the positive samples are relatively likely to appear (so that an embedded positive sample would not look out of place) should be chosen as negative samples, to be cropped from during training; the size of such a picture must be larger than the training window and match the proportion of the positive sample in the background.
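A short sketch of this preparation step under the stated size requirements (the helper names are illustrative, and OpenCV is assumed for the resizing):

```python
import cv2


def prepare_positive(img, win):
    """Resize a positive sample to the training-window size win = (w, h)
    up front, so later stages never deform or distort it."""
    return cv2.resize(img, win)


def usable_negative_scene(img, win):
    """A negative scene picture must be larger than the training window
    in both dimensions before patches can be cropped from it."""
    h, w = img.shape[:2]
    return w > win[0] and h > win[1]
```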
As shown in Fig. 2, the process is as follows:
Step 31: select a specified number of positive samples and negative samples in the sample library, where the sizes of both must meet the above requirements;
the specified number in this step can be set according to actual requirements;
Step 32: perform crop operations on the negative samples according to the size of the positive samples;
in this step a negative sample must first be selected at random, and a random crop operation is then performed on it, until the specified number of negative samples has been selected;
after steps 31 and 32 have been performed, the selected number of positive samples and the negative samples on which the crop operation has been completed are taken as the current training samples.
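A minimal sketch of step 32, assuming the scene pictures have already passed the size check above; the helper names are hypothetical, not from the patent:

```python
import random


def random_negative_crop(scene, win, rng=random):
    """Crop a window of the positive-sample size win = (w, h) from a
    random position in one negative scene picture."""
    h, w = scene.shape[:2]
    x = rng.randrange(0, w - win[0] + 1)
    y = rng.randrange(0, h - win[1] + 1)
    return scene[y:y + win[1], x:x + win[0]]


def draw_negatives(scenes, win, count, rng=random):
    """Draw random crops from randomly selected scenes until the
    specified number of negatives has been collected (steps 31-32)."""
    return [random_negative_crop(rng.choice(scenes), win, rng)
            for _ in range(count)]
```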
Step 33: generate the current weak classifier with the current training samples;
Step 34: detect the current training samples with the current weak classifier, and increase the weight of negative samples that are judged incorrectly;
Step 35: adjust the current training samples: take positive samples out of the sample library sequentially and negative samples at random, operate on the negative samples as in step 32, and take the negative samples with large weights as the current training samples required during training;
Step 36: take the newly trained weak classifier as the current weak classifier, and repeat steps 34 and 35 until the current strong classifier is generated;
Step 37: detect the samples in the sample library with the trained current strong classifier, and increase the weight of negative samples that are judged incorrectly;
Step 38: adjust the current training samples: take positive samples and negative samples out of the sample library sequentially, operate on the negative samples as in step 32, and take the samples with large weights as the current training samples;
in this step the positive samples are taken out sequentially, and a positive sample is retained if it is predicted correctly (i.e. a positive is judged as positive), until the specified number is reached; negative samples are selected at random and cropped at random, and a negative sample is retained if it is predicted incorrectly (i.e. a negative sample is judged as a positive sample), until the specified number is reached;
the next current strong classifier is then generated and trained in the same way;
Step 39: take the newly trained strong classifier as the current strong classifier, and perform steps 37 and 38 until the final classifier is generated.
In the above process, the minimum false alarm rate can also be appropriately lowered, which helps to reduce the false detection rate.
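The flow of steps 31 to 39 can be outlined as follows. This is a sketch only: `train_weak` and `combine` are caller-supplied placeholders, since the patent does not fix a particular weak-learner algorithm, and the hypothetical `draw_negatives` helper sketched above is reused.

```python
import random


def train_cascade(positives, neg_scenes, win, n_levels, rounds_per_level,
                  n_neg, train_weak, combine, rng=random):
    """Outline of steps 31-39.  train_weak(pos, neg) must return a
    predicate clf(sample) -> bool, and combine(weaks) a strong
    classifier of the same form."""
    cascade = []
    negatives = draw_negatives(neg_scenes, win, n_neg, rng)
    for _ in range(n_levels):
        weaks = []
        for _ in range(rounds_per_level):
            clf = train_weak(positives, negatives)   # steps 33 and 36
            weaks.append(clf)
            # steps 34-35: keep the negatives the classifier wrongly
            # accepts (the hard, high-weight ones) and top up with
            # fresh random crops
            hard = [n for n in negatives if clf(n)]
            negatives = hard + draw_negatives(neg_scenes, win,
                                              n_neg - len(hard), rng)
        strong = combine(weaks)                      # current strong classifier
        cascade.append(strong)
        # steps 37-38: retain correctly judged positives and the
        # negatives the strong classifier wrongly accepts
        positives = [p for p in positives if strong(p)]
        hard = [n for n in negatives if strong(n)]
        negatives = hard + draw_negatives(neg_scenes, win,
                                          n_neg - len(hard), rng)
    return cascade                                   # step 39: final classifier
```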
As shown in Fig. 3, an embodiment of the present invention provides a training device, including:
an operating unit 41, configured to operate on the samples in a sample library to obtain current training samples;
a training unit 42, configured to train the current classifier with the current training samples;
an adjustment unit 43, configured to detect the current training samples with the current classifier, adjust the current training samples according to the detection results, and take the adjusted training samples as the training samples for the next round of training, the above process being repeated to obtain N levels of strong classifiers that are combined into the final cascade classifier.
The operation performed by the operating unit 41 specifically includes:
selecting a specified number of positive samples and negative samples in a predetermined manner, and performing random crop operations on the negative samples according to the size of the positive samples.
Obtaining current training samples includes:
taking the selected number of positive samples and the negative samples on which the crop operation has been completed as the current training samples.
The training unit 42 is specifically configured to:
increase the weight of negative samples that are detected incorrectly, and take the negative samples with large weights as the current training samples required during training.
The device also includes a reducing unit 44, configured to reduce the false alarm rate.
In summary, the beneficial effects are:
it can be seen that the method provided by the embodiments of the present invention can reduce the false detection rate of the prior art and improve detection efficiency; the present invention can also appropriately lower the minimum false alarm rate, which helps to reduce the false detection rate.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (8)

1. A training method, characterized in that the method includes:
operating on the samples in a sample library to obtain current training samples;
training the current classifier with the current training samples;
detecting the current training samples with the current classifier, adjusting the current training samples according to the detection results, and taking the adjusted training samples as the training samples for the next round of training;
repeating the above process to obtain N levels of strong classifiers, which are combined into the final cascade classifier.
2. The method as claimed in claim 1, characterized in that operating on the samples in the sample library includes:
selecting a specified number of positive samples and negative samples in a predetermined manner, and performing random crop operations on the negative samples according to the size of the positive samples;
and obtaining current training samples includes:
taking the selected number of positive samples and the negative samples on which the crop operation has been completed as the current training samples.
3. The method as claimed in claim 2, characterized in that training the current classifier with the current training samples includes:
increasing the weight of negative samples that are detected incorrectly, and taking the negative samples with large weights as the current training samples required during training.
4. The method as claimed in claim 1, characterized in that the method also includes reducing the false alarm rate.
5. A training device, characterized in that the device includes:
an operating unit, configured to operate on the samples in a sample library to obtain current training samples;
a training unit, configured to train the current classifier with the current training samples;
an adjustment unit, configured to detect the current training samples with the current classifier, adjust the current training samples according to the detection results, and take the adjusted training samples as the training samples for the next round of training, the above process being repeated to obtain N levels of strong classifiers that are combined into the final cascade classifier.
6. The device as claimed in claim 5, characterized in that the operating unit is specifically configured to:
select a specified number of positive samples and negative samples in a predetermined manner, and perform random crop operations on the negative samples according to the size of the positive samples;
and obtaining current training samples includes:
taking the selected number of positive samples and the negative samples on which the crop operation has been completed as the current training samples.
7. The device as claimed in claim 6, characterized in that the training unit is specifically configured to:
increase the weight of negative samples that are detected incorrectly, and take the negative samples with large weights as the current training samples required during training.
8. The device as claimed in claim 5, characterized in that the device also includes a reducing unit, configured to reduce the false alarm rate.
CN201610313453.9A 2016-05-12 2016-05-12 A training method and device Pending CN107368842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610313453.9A CN107368842A (en) 2016-05-12 2016-05-12 A training method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610313453.9A CN107368842A (en) 2016-05-12 2016-05-12 A training method and device

Publications (1)

Publication Number Publication Date
CN107368842A true CN107368842A (en) 2017-11-21

Family

ID=60303630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610313453.9A Pending CN107368842A (en) 2016-05-12 2016-05-12 A training method and device

Country Status (1)

Country Link
CN (1) CN107368842A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740637A (en) * 2018-12-12 2019-05-10 天津津航技术物理研究所 The optimization method of training adaboost cascade classifier
CN109740637B (en) * 2018-12-12 2023-08-15 天津津航技术物理研究所 Optimization method for training adaboost cascade classifier
WO2021135933A1 (en) * 2019-12-30 2021-07-08 中兴通讯股份有限公司 Target recognition method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171121