CN106250866A - Neural network-based image feature extraction modeling and image recognition method and device - Google Patents


Info

Publication number
CN106250866A
CN106250866A (application CN201610665948.8A)
Authority
CN
China
Prior art keywords
picture
classification
verification
image
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610665948.8A
Other languages
Chinese (zh)
Inventor
张玉兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN201610665948.8A priority Critical patent/CN106250866A/en
Publication of CN106250866A publication Critical patent/CN106250866A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a neural network-based image feature extraction modeling method and device. A first picture, a second picture, a first classification of the first picture and a second classification of the second picture are obtained from a training set for a preset application scenario. A global loss cost function value is determined from the first picture, the first classification, the second picture and the second classification. An image object verification neural network is trained on the training set according to the global loss cost function value and training parameters. The image object verification neural network is then tested with a test set for the preset application scenario, a test accuracy is determined from the test results, and a target image object verification feature extraction model is determined from the test accuracy and the image object verification neural network. When the image feature model obtained by this modeling is applied to image recognition in the preset application scenario, recognition accuracy is improved. The invention also provides an image recognition method and device.

Description

Neural network-based image feature extraction modeling method, image recognition method, and devices
Technical field
The present invention relates to the technical field of image recognition, and in particular to a neural network-based image feature extraction modeling method and device, and to an image recognition method and device.
Background art
Image recognition is the technology by which a computer processes, analyzes and understands images in order to identify targets of various different patterns. Image recognition applied to faces is face recognition: a biometric identification technology that identifies a person based on facial feature information. Typically, after an image or video stream containing a face is captured with a video camera or camera, the face is automatically detected and tracked in the picture, and the detected face is then recognized.
At present, face recognition algorithms are generally based on face photographs and corresponding identity information: a neural network is trained as a model, and a classifier is ultimately used to perform face recognition. Because the training of such a face recognition neural network considers only the identity information of the face pictures, the recognition accuracy of face recognition using such a model still needs further improvement.
Summary of the invention
In view of this, it is necessary to provide a neural network-based image feature extraction modeling method and device that can improve recognition accuracy in image recognition application scenarios, as well as an image recognition method and device that apply the image feature model established by that modeling method and device.
A neural network-based image feature extraction modeling method, comprising:
obtaining, from a training set for a preset application scenario, a first picture, a second picture, a first classification of the first picture, and a second classification of the second picture;
using the first picture, the first classification, the second picture and the second classification as inputs to an image object verification neural network, and determining a global loss cost function value;
training the image object verification neural network on the training set according to the global loss cost function value and training parameters;
testing the image object verification neural network with a test set for the preset application scenario, determining a test accuracy from the test results, and determining a target image object verification feature extraction model from the test accuracy and the image object verification neural network.
An image recognition method, comprising:
obtaining a picture to be recognized, and using the picture to be recognized as the input to the target image object verification feature extraction model determined by the above neural network-based image feature extraction modeling method, to determine a verification feature to be recognized;
comparing the verification feature to be recognized with the picture verification features of the pictures in the training set, and taking the classification of the picture whose verification feature is nearest to the verification feature to be recognized as the classification of the picture to be recognized.
A neural network-based image feature extraction modeling device, comprising:
a picture and classification acquisition module, configured to obtain, from a training set for a preset application scenario, a first picture, a second picture, a first classification of the first picture, and a second classification of the second picture;
a loss cost determination module, configured to use the first picture, the first classification, the second picture and the second classification as inputs to an image object verification neural network and determine a global loss cost function value;
a neural network training module, configured to train the image object verification neural network on the training set according to the global loss cost function value and training parameters;
a feature model determination module, configured to test the image object verification neural network with a test set for the preset application scenario, determine a test accuracy from the test results, and determine a target image object verification feature extraction model from the test accuracy and the image object verification neural network.
An image recognition device, comprising:
a feature-to-be-recognized determination module, configured to obtain a picture to be recognized and use it as the input to the target image object verification feature extraction model determined by the above neural network-based image feature extraction modeling device, to determine a verification feature to be recognized;
a comparison and classification module, configured to compare the verification feature to be recognized with the picture verification features of the pictures in the training set, and to take the classification of the picture whose verification feature is nearest to the verification feature to be recognized as the classification of the picture to be recognized.
In the above neural network-based image feature extraction modeling method and device, the global loss cost function value used during model training is related not only to the first picture and the second picture, but also to the first classification of the first picture and the second classification of the second picture. The image object verification feature model obtained by modeling is therefore also related to these classifications. Consequently, when the model is applied to image recognition in the preset application scenario, image recognition accuracy is improved.
The above image recognition method and device determine the verification feature to be recognized with the target image object verification feature extraction model established by the above neural network-based image feature extraction modeling method or device, compare it against the verification features of the pictures in the training set, and thereby determine the classification of the picture to be recognized; their recognition accuracy is therefore high.
Brief description of the drawings
Fig. 1 is a flowchart of the neural network-based image feature extraction modeling method of one embodiment;
Fig. 2 is a detailed flowchart of one step of the neural network-based image feature extraction modeling method of Fig. 1;
Fig. 3 is a flowchart of the neural network-based image feature extraction modeling method of another embodiment;
Fig. 4 is an example picture before face alignment in the neural network-based image feature extraction modeling method of one embodiment;
Fig. 5 shows the result after face alignment of the example picture of Fig. 4;
Fig. 6 is a detailed flowchart of another step of the neural network-based image feature extraction modeling method of Fig. 1;
Fig. 7 is a flowchart of the image recognition method of one embodiment;
Fig. 8 is a structural diagram of the neural network-based image feature extraction modeling device of one embodiment;
Fig. 9 is a structural diagram of the neural network-based image feature extraction modeling device of another embodiment;
Fig. 10 is a structural diagram of the image recognition device of one embodiment.
Detailed description of the invention
To facilitate understanding of the present invention, the invention is described more fully below with reference to the relevant drawings, in which preferred embodiments of the invention are given. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the understanding of this disclosure will be thorough and complete.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terms used in the description of the invention are for the purpose of describing particular embodiments only and are not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
As shown in Fig. 1, the neural network-based image feature extraction modeling method of one embodiment of the invention comprises:
S140: obtaining, from a training set for a preset application scenario, a first picture, a second picture, a first classification of the first picture, and a second classification of the second picture.
The preset application scenario may be a scenario with high requirements on image recognition accuracy, and in particular on face recognition accuracy, such as bank VTM (Virtual Teller Machine) verification or VIP (Very Important Person) identification in a jewelry store.
Each picture contains an object to be recognized, e.g. an article or a person. The same classification denotes the same object, i.e. the same person or the same article.
S160: using the first picture, the first classification, the second picture and the second classification as inputs to an image object verification neural network, and determining a global loss cost function value.
From the first picture and the second picture, the image object verification feature extraction model in the image object verification neural network determines the object features and verification features of the first picture and the second picture respectively; the global loss cost function value is then determined from the first classification, the second classification, these object features, and the verification features.
The image object verification neural network is based on a prior-art image object recognition neural network, and includes an image object verification feature extraction model determined from the image object recognition feature extraction model in the image object recognition neural network.
Specifically, this image object recognition neural network is an already trained image object recognition neural network, i.e. a neural network trained for image recognition using the prior art. Subsequent training is carried out on the basis of this trained image object recognition neural network, rather than training again from scratch; this saves training time and quickly finds the optimal neural network. Further, the trained image object recognition neural network is a deep neural network, i.e. an image object recognition deep neural network.
The image object verification neural network can be constructed from the image object verification feature extraction model. Preferably, the image object verification neural network is a deep neural network, i.e. an image object verification deep neural network.
It should be noted that the image object verification feature extraction model first obtains the image object recognition feature through the image object recognition feature extraction model, and then derives the image object verification feature from it. Specifically, a two-norm (L2) normalization is applied to the image object recognition feature to obtain the image object verification feature: each element of the recognition feature is squared, the squares are summed, and the square root of the sum serves as the common denominator for every element of the verification feature; the value of each element of the recognition feature, which is relevant to verifying whether two pictures show the same object, serves as the numerator of the corresponding element of the verification feature.
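The two-norm normalization described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the patent's implementation; the function name and sample values are hypothetical.

```python
import numpy as np

def l2_normalize(identification_feature):
    """Turn an image object recognition feature into a verification feature.

    Every element is divided by the L2 norm of the whole vector (the
    square root of the sum of squared elements), so the shared
    denominator is the two-norm and each numerator is the original
    element value, as the normalization step describes.
    """
    x = np.asarray(identification_feature, dtype=np.float64)
    norm = np.sqrt(np.sum(x ** 2))  # denominator shared by every element
    return x / norm

feature = [3.0, 4.0]                     # hypothetical recognition feature
verification_feature = l2_normalize(feature)
print(verification_feature)              # [0.6 0.8]
```

The resulting vector has unit L2 norm, which makes the feature distances used later (training threshold, pair verification) comparable across pictures.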
S170: training the image object verification neural network on the training set according to the global loss cost function value and training parameters.
In one embodiment, the gradient of each parameter of the image object verification feature extraction model in the image object verification neural network can be determined from the global loss cost function value via the chain rule of differentiation; the image object verification neural network is then trained on the training set by stochastic gradient descent according to the global loss cost function value and the training parameters.
The training parameters include a feature distance threshold and a learning rate. In a preferred embodiment, the feature distance threshold may default to 0.2 or 0.25, and the learning rate may default to 0.0001.
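The stochastic gradient descent update mentioned above can be sketched as a single step; this is a generic illustration (function name and values are hypothetical), not the patented training loop.

```python
def sgd_step(params, grads, learning_rate=0.0001):
    """One stochastic-gradient-descent update: each parameter moves
    against its gradient (obtained via the chain rule), scaled by the
    learning-rate training parameter."""
    return [w - learning_rate * g for w, g in zip(params, grads)]

params = [0.5, -0.3]          # hypothetical model parameters
grads = [2.0, -1.0]           # hypothetical gradients of the global loss
updated = sgd_step(params, grads, learning_rate=0.1)
print(updated)
```

In practice this step is repeated over randomly drawn training pairs until the test accuracy criterion described below is met.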
S180: testing the image object verification neural network with a test set for the preset application scenario, determining a test accuracy from the test results, and determining a target image object verification feature extraction model from the test accuracy and the image object verification neural network.
As long as the capacity of the test set is sufficiently large, training of the image object verification neural network can be sustained. In this embodiment, after each preset period of training, the image object verification neural network is tested with the test set for the preset application scenario, and a test accuracy is determined from the test results; the test accuracy can be determined from the test results in an existing manner.
When the test accuracy reaches a preset accuracy, training stops; the image object verification neural network at that point is the target image object verification neural network, and the target image object verification feature extraction model can be determined from it. The preset accuracy is a pre-set accuracy requirement that the test must meet.
In a preferred embodiment, cross-validation is used for verification. The test set is a set of images having no intersection with the training set; preferably, the images are face pictures.
In one specific embodiment, the test set is produced as follows: of N classes, K classes are used to make the training set, and the face photos of the remaining N−K classes are used to make the test set. The test set is composed of verification pairs of randomly drawn face pictures, with the following drawing rule:
face picture a of class n and face picture b of class n (a positive pair)
...
face picture c of class i and face picture d of class j (a negative pair)
...
Following the convention of the international standard face verification test set, 3000 positive pairs and 3000 negative pairs are taken here, 6000 pairs in total. The test criterion is: if the two photos of a positive pair are judged to be the same person, the judgment is correct, i.e. x_i = 1; if the two photos of a negative pair are judged not to be the same person, the judgment is correct, i.e. x_i = 1; otherwise the judgment is wrong, i.e. x_i = 0. The test accuracy is then defined as:
$$\frac{\sum_{i=1}^{6000} x_i}{6000} \times 100\%$$
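The accuracy definition above can be sketched in code. The pair data here is a hypothetical toy stand-in for the 3000 positive and 3000 negative pairs, and the judgment rule (feature distance below a threshold means "same person") is an assumption consistent with the feature distance threshold used in training.

```python
import numpy as np

def pair_verification_accuracy(distances, same_person, threshold):
    """Accuracy over verification pairs.

    distances:   feature distance d for each pair
    same_person: True for positive pairs, False for negative pairs
    threshold:   pairs with d < threshold are judged "same person"

    x_i = 1 when the judgment matches the label, else 0; accuracy is
    the mean of the x_i, i.e. (sum_i x_i / N) * 100%.
    """
    d = np.asarray(distances)
    y = np.asarray(same_person)
    judged_same = d < threshold
    x = (judged_same == y).astype(int)
    return 100.0 * x.mean()

# Hypothetical stand-in for the 6000 test pairs.
dists = [0.1, 0.15, 0.3, 0.5, 0.6, 0.05]
labels = [True, True, False, False, False, True]
acc = pair_verification_accuracy(dists, labels, threshold=0.2)
print(acc)  # 100.0
```

With real data, `distances` would be the two-norm distances between the verification features of each drawn pair.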
In one embodiment, no preset test accuracy is set in advance. The accuracy first improves gradually and then fluctuates noticeably after reaching a certain level; this level is denoted the maximum stable accuracy. Therefore, when the test accuracy no longer improves stably, i.e. when the maximum stable accuracy is reached, training of the image object verification neural network stops, and the current image object verification neural network is the optimal one. One image object verification feature extraction model is taken from the optimal image object verification neural network, with a single picture as its only input and the image object verification feature as its only output (the image object recognition feature output is omitted), yielding the final target image object verification feature extraction model.
In the above neural network-based image feature extraction modeling method, the global loss cost function value used during model training is related not only to the first picture and the second picture, but also to the first classification of the first picture and the second classification of the second picture. The image object verification feature model obtained by modeling is therefore also related to these classifications. Consequently, when the model is applied to image recognition in the preset application scenario, image recognition accuracy is improved.
Still referring to Fig. 1, in one embodiment, after step S140 and before step S160, the method further comprises the step:
obtaining an image object recognition neural network, and determining the image object verification neural network from the image object recognition neural network.
That is, the prior-art image object recognition neural network used for image object recognition is obtained, and the image object verification neural network is determined from it.
It will be appreciated that in embodiments where the images are face images, the image object recognition neural network is a face recognition neural network, preferably a face recognition deep neural network, and the image object verification neural network is a face verification neural network, preferably a face verification deep neural network. The relationship between the face recognition neural network and the face verification neural network is the same as that between the image object recognition neural network and the image object verification neural network, and is therefore not repeated here.
Referring to Fig. 2, in one embodiment, the image object verification neural network includes an image object verification feature extraction model determined from the image object recognition feature extraction model of the image object recognition neural network.
The step of using the first picture, the first classification, the second picture and the second classification as inputs to the image object verification neural network and determining the global loss cost function value, i.e. S160, comprises:
S261: using the first picture and the first classification as one model input to the image object verification feature extraction model to determine a first object feature and a first verification feature, and using the second picture and the second classification as another model input to the image object verification feature extraction model to determine a second object feature and a second verification feature; or, when the image object verification feature extraction model includes two identical instances, using the first picture and the first classification as the model input to one instance to determine the first object feature and the first verification feature, and using the second picture and the second classification as the model input to the other instance to determine the second object feature and the second verification feature.
Specifically, the first picture with the first classification, and then the second picture with the second classification, may be fed in turn, in two passes, as model inputs to the image object verification feature extraction model of the image object verification neural network, determining the first object feature and first verification feature, and then the second object feature and second verification feature. Alternatively, the image object verification feature extraction model may include two identical instances: the first picture and the first classification are fed to one instance, the second picture and the second classification to the other, and the two instances execute in parallel, determining the first object feature and first verification feature and the second object feature and second verification feature respectively.
S263: determining a first object information loss function value from the first object feature and the first classification.
The loss function value can be determined from an object feature and a classification in the manner commonly used in existing neural networks. Specifically, the classification information derived from the first object feature is determined; whether the derived classification information and the obtained first classification belong to the same class determines the recognition result, which is then reflected in the loss function value.
S265: determining a second object information loss function value from the second object feature and the second classification.
The second object information loss function value is determined in the same way as the first object information loss function value, and is therefore not repeated here.
S267: determining a verification loss function value from the first classification, the second classification, the first verification feature, and the second verification feature.
Specifically, the formula of the verification loss function is:

$$\text{VerifyLoss} = y \cdot d + (1 - y)\max(\alpha - d,\, 0)$$

where

$$y = \begin{cases} 1, & N_1 = N_2 \\ 0, & N_1 \neq N_2 \end{cases}, \qquad d = \left\lVert f_1 - f_2 \right\rVert_2^2$$

Here α is the feature distance threshold in the training parameters; VerifyLoss denotes the verification loss function value; y indicates whether the two pictures belong to the same classification; N₁ denotes the first classification and N₂ the second classification; d is the feature distance; f₁ denotes the first verification feature and f₂ the second verification feature; and ‖·‖₂ denotes the two-norm operation.
S269: determining the global loss cost function value from the first object information loss function value, the second object information loss function value, and the verification loss function value.
In this embodiment, the global loss cost function value is a linear function of the first object information loss function value, the second object information loss function value, and the verification loss function value. Specifically, the formula of the global loss cost function is:

Loss = SoftmaxLoss_1 + SoftmaxLoss_2 + VerifyLoss

where Loss is the global loss cost function value, SoftmaxLoss_1 is the first object information loss function value, SoftmaxLoss_2 is the second object information loss function value, and VerifyLoss is the verification loss function value.
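The global loss (two softmax information losses plus the contrastive verification loss) can be sketched as follows. All inputs are hypothetical toy values, the squared two-norm form of d is an assumption consistent with the verification loss formula, and this is an illustration rather than the patented implementation.

```python
import numpy as np

def softmax_loss(logits, label):
    """Cross-entropy of a softmax classifier for one picture (SoftmaxLoss)."""
    z = np.asarray(logits, dtype=np.float64)
    z = z - z.max()                          # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def verify_loss(f1, f2, same_class, alpha):
    """Contrastive verification loss y*d + (1-y)*max(alpha - d, 0),
    with d the squared two-norm distance between verification features."""
    d = float(np.sum((np.asarray(f1) - np.asarray(f2)) ** 2))
    y = 1.0 if same_class else 0.0
    return y * d + (1.0 - y) * max(alpha - d, 0.0)

def global_loss(logits1, label1, logits2, label2, f1, f2, alpha=0.2):
    """Loss = SoftmaxLoss_1 + SoftmaxLoss_2 + VerifyLoss."""
    return (softmax_loss(logits1, label1)
            + softmax_loss(logits2, label2)
            + verify_loss(f1, f2, label1 == label2, alpha))

# Hypothetical example: two pictures of the same class (class 0).
loss = global_loss([2.0, 0.5], 0, [1.5, 0.3], 0,
                   [0.6, 0.8], [0.8, 0.6], alpha=0.2)
print(round(loss, 4))
```

Because the same-class term pulls verification features together and the different-class term pushes them at least α apart, the combined loss ties the learned features to the classifications as well as to the pictures themselves.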
Because, during model training, the global loss cost function value is related not only to the first and second object information loss function values but also to the verification loss function value, the recognition accuracy of images can be further improved when the image object verification feature model obtained by modeling is applied in the preset application scenario.
Referring to Fig. 3, in one embodiment, steps S340–S380 correspond in turn to steps S140–S180, and the images are face pictures. Before the step of obtaining, from the training set for the preset application scenario, the first picture, the second picture, the first classification of the first picture and the second classification of the second picture (i.e. step S340), the method further comprises:
S310: capturing video pictures in the preset application scenario, and performing face detection on the video pictures to obtain face pictures.
A camera is used to capture video pictures in the preset application scenario, and the pictures are stored in a computer via network transmission and data cable. Face detection is performed on the captured video pictures in an existing manner, and the face pictures are extracted and stored on the computer's hard disk.
S320: obtaining classification information by which the face pictures are classified, classifying each face picture according to the classification information, and performing face alignment on each classified face picture to form the training set.
The detected and extracted face pictures are classified manually; the computer therefore obtains the manually entered classification information and classifies accordingly. Face photos belonging to the same class are grouped together and labeled with the classification information.
Because the face angle and face position in the face pictures are inconsistent, in order to extract stable features and obtain a good face recognition effect, a key-point alignment operation must be performed on the face pictures in an existing manner to carry out face alignment and remove the influence of face angle on face recognition. The key points include the positions of the eyes, nose and mouth corners. Fig. 4 shows a captured face picture, i.e. an example before face alignment; the aligned face picture is shown in Fig. 5.
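One common way to realize the key-point alignment described above is a least-squares similarity transform that maps the detected key points onto fixed template positions; the function and points below are hypothetical, and a real pipeline would additionally warp the face image with the resulting matrix.

```python
import numpy as np

def similarity_transform(src_points, dst_points):
    """Least-squares 2-D similarity transform (scale + rotation + translation)
    mapping detected key points (eyes, nose, mouth corners) onto template
    positions. Returns a 2x3 matrix usable for warping the face picture."""
    src = np.asarray(src_points, dtype=np.float64)
    dst = np.asarray(dst_points, dtype=np.float64)
    # Each point contributes two rows: x' = a*x - b*y + tx,  y' = b*x + a*y + ty
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(yp)
    (a, b, tx, ty), *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs),
                                         rcond=None)
    return np.array([[a, -b, tx], [b, a, ty]])

# Hypothetical check: a pure translation by (10, 5) should be recovered.
src = [[30.0, 40.0], [70.0, 40.0], [50.0, 60.0]]
dst = [[40.0, 45.0], [80.0, 45.0], [60.0, 65.0]]
M = similarity_transform(src, dst)
print(np.round(M, 6))
```

A similarity transform (rather than a full affine one) is the usual choice here because it corrects rotation, scale and position without distorting facial proportions.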
Referring to Fig. 6, in one embodiment, the step of training the image object verification neural network on the training set according to the global loss cost function value and the training parameters, i.e. step S170, includes:

S671: Obtain initial training parameters, and train the image object verification neural network on the training set according to the global loss cost function value and the initial training parameters.

S673: Update the training parameters, and train the image object verification neural network on the training set according to the global loss cost function value and the updated training parameters.

In this way, the training parameters of the image object verification neural network are adjusted continually during training, and the optimal training parameters are determined. Extensive debugging and testing have shown that, for the method described herein, the best improvement in accuracy is obtained with a feature distance threshold α = 0.2 and a learning rate lr = 0.001.
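Steps S671 and S673 amount to computing the loss with the current parameters and then stepping the parameters to reduce it. The patent gives no closed form for the global loss cost, so the sketch below substitutes a toy one-parameter hinge loss; only the values lr = 0.001 and α = 0.2 are taken from the description above, everything else is an assumption:

```python
lr = 0.001      # learning rate reported in the description
alpha = 0.2     # feature distance threshold (margin) reported in the description

def global_loss(w, pairs):
    """Toy stand-in for the global loss cost: squared feature distance for
    same-class pairs, hinge on the margin alpha for different-class pairs."""
    loss = 0.0
    for x1, x2, same in pairs:
        d = (w * x1 - w * x2) ** 2
        loss += d if same else max(0.0, alpha - d)
    return loss

def train(w, pairs, steps=200, eps=1e-6):
    """S671/S673: start from an initial parameter, then repeatedly update it
    by (numerical) gradient descent on the global loss cost."""
    for _ in range(steps):
        grad = (global_loss(w + eps, pairs) - global_loss(w - eps, pairs)) / (2 * eps)
        w -= lr * grad
    return w

pairs = [(1.0, 1.2, True), (1.0, 3.0, False)]   # (feature 1, feature 2, same class?)
w0 = 1.0                                        # initial training parameter
w = train(w0, pairs)                            # updated training parameter
```

A real implementation would update the full weight set of the verification network by backpropagation; the scalar parameter here only illustrates the obtain/update iteration.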
As shown in Fig. 7, the present invention also provides an image recognition method applying the above neural-network-based image feature extraction modeling method, including:

S740: Obtain a picture to be identified, use the picture to be identified as the input of the target image object verification feature extraction model determined by the above neural-network-based image feature extraction modeling method, and determine the verification feature to be identified.

The target image object verification feature extraction model is the model established by the above neural-network-based image feature extraction modeling method.

Specifically, the picture to be identified is collected by a camera and transferred to the computer; the computer obtains the picture to be identified and feeds it into the established target image object verification feature extraction model for computation, thereby determining the picture feature of the picture to be identified, i.e. the verification feature to be identified.

S760: Compare the verification feature to be identified with the picture verification features corresponding to the pictures in the training set, and determine the classification of the picture whose picture verification feature is closest to the verification feature to be identified as the classification of the picture to be identified.

In the present embodiment, the picture verification features of all pictures in the training set are determined in advance by the target image object verification feature extraction model. After the verification feature to be identified is determined, the distance between the verification feature to be identified and the picture verification feature of each picture in the training set is computed, and the classification of the picture whose picture verification feature is closest to the verification feature to be identified is taken as the classification of the picture to be identified.
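The recognition procedure of this embodiment is a nearest-neighbour search over the pre-computed gallery of picture verification features. The feature vectors and labels below are made-up placeholders rather than real model outputs; a minimal sketch:

```python
import numpy as np

# Picture verification features of the training set, computed in advance by the
# target image object verification feature extraction model (placeholders here).
gallery = np.array([[0.9, 0.1],
                    [0.1, 0.95],
                    [0.85, 0.2]])
labels = ["person_a", "person_b", "person_a"]   # classification of each picture

def classify(query, gallery, labels):
    """Return the classification of the gallery picture whose verification
    feature is closest (Euclidean distance) to the query feature."""
    dists = np.linalg.norm(gallery - query, axis=1)
    return labels[int(np.argmin(dists))]

result = classify(np.array([0.2, 0.9]), gallery, labels)   # -> "person_b"
```

For large training sets the linear scan would typically be replaced by an indexed nearest-neighbour structure, but the decision rule is the same.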
In the above image recognition method, the verification feature to be identified is determined by the target image object verification feature extraction model established by the above neural-network-based image feature extraction modeling method, and this verification feature is compared with the picture verification features of the training set to finally determine the classification of the picture to be identified; the recognition precision of the above image recognition method is therefore high.
Continuing to refer to Fig. 7, in one embodiment, the method also includes:

S720: Obtain the target image object verification feature extraction model.
The present invention also provides a virtual apparatus corresponding to the neural-network-based image feature extraction modeling method. As shown in Fig. 8, the neural-network-based image feature extraction modeling apparatus of one embodiment includes:

a picture classification acquisition module 840, configured to obtain, from the training set of the preset application scenario, a first picture, a second picture, a first classification of the first picture and a second classification of the second picture;

a loss cost determination module 860, configured to use the first picture, the first classification, the second picture and the second classification as the input of the image object verification neural network and determine the global loss cost function value;

a neural network training module 870, configured to train the image object verification neural network on the training set according to the global loss cost function value and the training parameters; and

a feature model determination module 880, configured to test the image object verification neural network with the test set of the preset application scenario, determine the test precision according to the test result, and determine the target image object verification feature extraction model according to the test precision and the image object verification neural network.

In the above neural-network-based image feature extraction modeling apparatus, the global loss cost function value used when training the model is related not only to the first picture and the second picture but also to the first classification of the first picture and the second classification of the second picture. The image object verification feature model obtained by the modeling is therefore related to the first classification of the first picture and the second classification of the second picture, which achieves the beneficial effect of improving the image recognition precision when the image object verification feature model obtained by the modeling is applied to image recognition in the preset application scenario.
Referring to Fig. 9, in one embodiment, the apparatus also includes:

a network acquisition determination module 950, configured to obtain an image object recognition neural network and determine the image object verification neural network according to the image object recognition neural network.

Continuing to refer to Fig. 9, in one embodiment, the image object verification neural network includes an image object verification feature extraction model determined on the basis of the image object recognition feature extraction model of the image object recognition neural network. The loss cost determination module 960 includes:

a picture feature determination unit 961 (not shown), configured to use the first picture and the first classification as a model input of the image object verification feature extraction model to determine a first object feature and a first verification feature, and to use the second picture and the second classification as another model input of the image object verification feature extraction model to determine a second object feature and a second verification feature; or, the image object verification feature extraction model includes two identical models, and the picture feature determination unit 961 is configured to use the first picture and the first classification as the model input of one of the image object verification feature extraction models to determine the first object feature and the first verification feature, and to use the second picture and the second classification as the model input of the other image object verification feature extraction model to determine the second object feature and the second verification feature;
a first loss determination unit (not shown), configured to determine a first object information loss function value according to the first object feature and the first classification;

a second loss determination unit (not shown), configured to determine a second object information loss function value according to the second object feature and the second classification;

a verification loss determination unit (not shown), configured to determine a verification loss function value according to the first classification, the second classification, the first verification feature and the second verification feature; and

a global loss determination unit (not shown), configured to determine the global loss cost function value according to the first object information loss function value, the second object information loss function value and the verification loss function value.
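The cooperation of these units can be sketched as two per-picture identification (classification) losses plus one verification loss on the feature pair, summed into the global loss cost. The softmax cross-entropy and contrastive forms below, and the balancing weight `lam`, are assumptions in the spirit of joint identification-verification training, not formulas given in the patent:

```python
import numpy as np

def identification_loss(logits, label):
    """Object information loss of one picture: softmax cross-entropy of its
    object feature (logits) against its classification."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[label])

def verification_loss(f1, f2, same, alpha=0.2):
    """Verification loss (assumed contrastive form): pull same-class
    verification features together, push different-class pairs at least
    alpha apart."""
    d2 = np.sum((f1 - f2) ** 2)
    return d2 if same else max(0.0, alpha - np.sqrt(d2)) ** 2

def global_loss(logits1, label1, logits2, label2, f1, f2, lam=1.0):
    """Global loss cost: first + second object information loss + weighted
    verification loss (lam is an assumed balancing weight)."""
    same = (label1 == label2)
    return (identification_loss(logits1, label1)
            + identification_loss(logits2, label2)
            + lam * verification_loss(f1, f2, same))

g = global_loss(np.array([2.0, 0.1]), 0,        # first picture: feature + class
                np.array([0.2, 1.5]), 1,        # second picture: feature + class
                np.array([0.3, 0.4]),           # first verification feature
                np.array([0.9, 0.1]))           # second verification feature
```

The margin value alpha = 0.2 matches the feature distance threshold reported earlier in the description; the rest of the numbers are placeholders.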
Continuing to refer to Fig. 9, in one embodiment, the image is a face picture, and the apparatus also includes:

a picture collection detection module 910, configured to collect video pictures in the preset application scenario and perform face detection on the video pictures to obtain face pictures; and

a picture classification alignment module 920, configured to obtain the classification information for classifying the face pictures, classify the face pictures according to the classification information, and perform face alignment on each classified face picture to form the training set.

In one embodiment, the neural network training module 970 includes:

an initial parameter training unit 971, configured to obtain initial training parameters and train the image object verification neural network on the training set according to the global loss cost function value and the initial training parameters; and

an updated parameter training unit 973, configured to update the training parameters and train the image object verification neural network on the training set according to the global loss cost function value and the updated training parameters.
The present invention also provides a virtual apparatus corresponding to the image recognition method. As shown in Fig. 10, the image recognition apparatus of one embodiment includes:

a feature-to-be-identified determination module 1040, configured to obtain a picture to be identified, use the picture to be identified as the input of the target image object verification feature extraction model determined by the above neural-network-based image feature extraction modeling apparatus, and determine the verification feature to be identified; and

a comparison classification determination module 1060, configured to compare the verification feature to be identified with the picture verification features corresponding to the pictures in the training set, and determine the classification of the picture whose picture verification feature is closest to the verification feature to be identified as the classification of the picture to be identified.

In the above image recognition apparatus, the verification feature to be identified is determined by the target image object verification feature extraction model determined by the above neural-network-based image feature extraction modeling apparatus, and this verification feature is compared with the picture verification features of the training set to finally determine the classification of the picture to be identified; the recognition precision of the above image recognition apparatus is therefore high.

In one embodiment, the apparatus also includes:

a feature model acquisition module 1020, configured to obtain the target image object verification feature extraction model.
The above examples express only several embodiments of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present patent. It should be pointed out that a person of ordinary skill in the art can make various deformations and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. The protection scope of the present patent should therefore be subject to the appended claims.

Claims (10)

1. A neural-network-based image feature extraction modeling method, characterized by including:
obtaining, from a training set of a preset application scenario, a first picture, a second picture, a first classification of the first picture and a second classification of the second picture;
using the first picture, the first classification, the second picture and the second classification as the input of an image object verification neural network, and determining a global loss cost function value;
training the image object verification neural network on the training set according to the global loss cost function value and training parameters; and
testing the image object verification neural network with a test set of the preset application scenario, determining a test precision according to the test result, and determining a target image object verification feature extraction model according to the test precision and the image object verification neural network.
2. The neural-network-based image feature extraction modeling method according to claim 1, characterized in that the image object verification neural network includes an image object verification feature extraction model determined on the basis of an image object recognition feature extraction model of an image object recognition neural network; and
the step of using the first picture, the first classification, the second picture and the second classification as the input of the image object verification neural network and determining the global loss cost function value includes:
using the first picture and the first classification as a model input of the image object verification feature extraction model to determine a first object feature and a first verification feature, and using the second picture and the second classification as another model input of the image object verification feature extraction model to determine a second object feature and a second verification feature; or, the image object verification feature extraction model includes two identical models, the first picture and the first classification are used as the model input of one of the image object verification feature extraction models to determine the first object feature and the first verification feature, and the second picture and the second classification are used as the model input of the other image object verification feature extraction model to determine the second object feature and the second verification feature;
determining a first object information loss function value according to the first object feature and the first classification;
determining a second object information loss function value according to the second object feature and the second classification;
determining a verification loss function value according to the first classification, the second classification, the first verification feature and the second verification feature; and
determining the global loss cost function value according to the first object information loss function value, the second object information loss function value and the verification loss function value.
3. The neural-network-based image feature extraction modeling method according to claim 1, characterized in that the image is a face picture; and
before the step of obtaining, from the training set of the preset application scenario, the first picture, the second picture, the first classification of the first picture and the second classification of the second picture, the method further includes:
collecting video pictures in the preset application scenario, and performing face detection on the video pictures to obtain face pictures; and
obtaining classification information for classifying the face pictures, classifying each face picture according to the classification information, and performing face alignment on each classified face picture to form the training set.
4. The neural-network-based image feature extraction modeling method according to claim 1, characterized in that the step of training the image object verification neural network on the training set according to the global loss cost function value and the training parameters includes:
obtaining initial training parameters, and training the image object verification neural network on the training set according to the global loss cost function value and the initial training parameters; and
updating the training parameters, and training the image object verification neural network on the training set according to the global loss cost function value and the updated training parameters.
5. An image recognition method, characterized by including:
obtaining a picture to be identified, using the picture to be identified as the input of the target image object verification feature extraction model determined by the neural-network-based image feature extraction modeling method according to any one of claims 1 to 4, and determining a verification feature to be identified; and
comparing the verification feature to be identified with the picture verification features corresponding to the pictures in the training set, and determining the classification of the picture whose picture verification feature is closest to the verification feature to be identified as the classification of the picture to be identified.
6. A neural-network-based image feature extraction modeling apparatus, characterized by including:
a picture classification acquisition module, configured to obtain, from a training set of a preset application scenario, a first picture, a second picture, a first classification of the first picture and a second classification of the second picture;
a loss cost determination module, configured to use the first picture, the first classification, the second picture and the second classification as the input of an image object verification neural network and determine a global loss cost function value;
a neural network training module, configured to train the image object verification neural network on the training set according to the global loss cost function value and training parameters; and
a feature model determination module, configured to test the image object verification neural network with a test set of the preset application scenario, determine a test precision according to the test result, and determine a target image object verification feature extraction model according to the test precision and the image object verification neural network.
7. The neural-network-based image feature extraction modeling apparatus according to claim 6, characterized in that the image object verification neural network includes an image object verification feature extraction model determined on the basis of an image object recognition feature extraction model of an image object recognition neural network; and
the loss cost determination module includes:
a picture feature determination unit, configured to use the first picture and the first classification as a model input of the image object verification feature extraction model to determine a first object feature and a first verification feature, and to use the second picture and the second classification as another model input of the image object verification feature extraction model to determine a second object feature and a second verification feature; or, the image object verification feature extraction model includes two identical models, and the picture feature determination unit is configured to use the first picture and the first classification as the model input of one of the image object verification feature extraction models to determine the first object feature and the first verification feature, and to use the second picture and the second classification as the model input of the other image object verification feature extraction model to determine the second object feature and the second verification feature;
a first loss determination unit, configured to determine a first object information loss function value according to the first object feature and the first classification;
a second loss determination unit, configured to determine a second object information loss function value according to the second object feature and the second classification;
a verification loss determination unit, configured to determine a verification loss function value according to the first classification, the second classification, the first verification feature and the second verification feature; and
a global loss determination unit, configured to determine the global loss cost function value according to the first object information loss function value, the second object information loss function value and the verification loss function value.
8. The neural-network-based image feature extraction modeling apparatus according to claim 6, characterized in that the image is a face picture, and the apparatus also includes:
a picture collection detection module, configured to collect video pictures in the preset application scenario and perform face detection on the video pictures to obtain face pictures; and
a picture classification alignment module, configured to obtain classification information for classifying the face pictures, classify the face pictures according to the classification information, and perform face alignment on each classified face picture to form the training set.
9. The neural-network-based image feature extraction modeling apparatus according to claim 6, characterized in that the neural network training module includes:
an initial parameter training unit, configured to obtain initial training parameters and train the image object verification neural network on the training set according to the global loss cost function value and the initial training parameters; and
an updated parameter training unit, configured to update the training parameters and train the image object verification neural network on the training set according to the global loss cost function value and the updated training parameters.
10. An image recognition apparatus, characterized by including:
a feature-to-be-identified determination module, configured to obtain a picture to be identified, use the picture to be identified as the input of the target image object verification feature extraction model determined by the neural-network-based image feature extraction modeling apparatus according to any one of claims 6 to 9, and determine a verification feature to be identified; and
a comparison classification determination module, configured to compare the verification feature to be identified with the picture verification features corresponding to the pictures in the training set, and determine the classification of the picture whose picture verification feature is closest to the verification feature to be identified as the classification of the picture to be identified.
CN201610665948.8A 2016-08-12 2016-08-12 Neural network-based image feature extraction modeling and image recognition method and device Pending CN106250866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610665948.8A CN106250866A (en) 2016-08-12 2016-08-12 Neural network-based image feature extraction modeling and image recognition method and device

Publications (1)

Publication Number Publication Date
CN106250866A true CN106250866A (en) 2016-12-21

Family

ID=57592678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610665948.8A Pending CN106250866A (en) 2016-08-12 2016-08-12 Neural network-based image feature extraction modeling and image recognition method and device

Country Status (1)

Country Link
CN (1) CN106250866A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824054A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded depth neural network-based face attribute recognition method
CN104866810A (en) * 2015-04-10 2015-08-26 北京工业大学 Face recognition method of deep convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FLORIAN SCHROFF: "FaceNet: A Unified Embedding for Face Recognition and Clustering", The IEEE Conference on Computer Vision and Pattern Recognition *
YI SUN et al.: "Deep Learning Face Representation by Joint Identification-Verification", http://arxiv.org/pdf/1406.4773 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778785A (en) * 2016-12-23 2017-05-31 东软集团股份有限公司 Build the method for image characteristics extraction model and method, the device of image recognition
CN106778785B (en) * 2016-12-23 2019-09-17 东软集团股份有限公司 Construct the method for image Feature Selection Model and the method, apparatus of image recognition
CN108509961A (en) * 2017-02-27 2018-09-07 北京旷视科技有限公司 Image processing method and device
WO2019072057A1 (en) * 2017-10-13 2019-04-18 华为技术有限公司 Image signal processing method, apparatus and device
CN109688351A (en) * 2017-10-13 2019-04-26 华为技术有限公司 A kind of image-signal processing method, device and equipment
US11430209B2 (en) 2017-10-13 2022-08-30 Huawei Technologies Co., Ltd. Image signal processing method, apparatus, and device
US11017220B2 (en) 2017-12-12 2021-05-25 Tencent Technology (Shenzhen) Company Limited Classification model training method, server, and storage medium
WO2019114523A1 (en) * 2017-12-12 2019-06-20 腾讯科技(深圳)有限公司 Classification training method, server and storage medium
CN108416295A (en) * 2018-03-08 2018-08-17 天津师范大学 A kind of recognition methods again of the pedestrian based on locally embedding depth characteristic
CN108416295B (en) * 2018-03-08 2021-10-15 天津师范大学 Pedestrian re-identification method based on local embedding depth features
CN108629377A (en) * 2018-05-10 2018-10-09 北京达佳互联信息技术有限公司 A kind of the loss value-acquiring method and device of disaggregated model
CN109034040A (en) * 2018-07-19 2018-12-18 北京影谱科技股份有限公司 A kind of character recognition method based on cast, device, equipment and medium
CN112689763A (en) * 2018-09-20 2021-04-20 美国西门子医学诊断股份有限公司 Hypothesis and verification network and method for sample classification
CN111191655A (en) * 2018-11-14 2020-05-22 佳能株式会社 Object identification method and device
CN111191655B (en) * 2018-11-14 2024-04-16 佳能株式会社 Object identification method and device
CN110569737A (en) * 2019-08-15 2019-12-13 深圳华北工控软件技术有限公司 Face recognition deep learning method and face recognition acceleration camera
CN111341459A (en) * 2020-02-28 2020-06-26 上海交通大学医学院附属上海儿童医学中心 Training method of classified deep neural network model and genetic disease detection method
CN111950728A (en) * 2020-08-17 2020-11-17 珠海格力电器股份有限公司 Image feature extraction model construction method, image retrieval method and storage medium
CN113033424A (en) * 2021-03-29 2021-06-25 广东众聚人工智能科技有限公司 Multi-branch video anomaly detection method and system

Similar Documents

Publication Publication Date Title
CN106250866A (en) Neural network-based image feature extraction modeling and image recognition method and device
CN106529571B (en) Multilayer image feature extraction modeling and image recognition method and device based on neural network
CN105975959A (en) Face feature extraction modeling and face recognition method and device based on neural network
CN107358223B (en) Face detection and face alignment method based on yolo
US10539613B2 (en) Analog circuit fault diagnosis method using single testable node
CN105468760B (en) The method and apparatus that face picture is labeled
CN103093215B (en) Human-eye positioning method and device
CN106157688B (en) Parking space detection method and system based on deep learning and big data
CN105303193B (en) A kind of passenger number statistical system based on single-frame images processing
CN103870811B (en) A kind of front face Quick method for video monitoring
CN107219924B (en) A kind of aerial gesture identification method based on inertial sensor
CN106372666B (en) A kind of target identification method and device
CN107103281A (en) Face identification method based on aggregation Damage degree metric learning
CN105303179A (en) Fingerprint identification method and fingerprint identification device
CN102163281B (en) Real-time human body detection method based on AdaBoost frame and colour of head
CN104751110A (en) Bio-assay detection method and device
CN106295574A (en) Face feature extraction modeling and face recognition method and device based on neural network
CN103971106B (en) Various visual angles facial image gender identification method and device
CN103324938A (en) Method for training attitude classifier and object classifier and method and device for detecting objects
CN108960145A (en) Facial image detection method, device, storage medium and electronic equipment
CN108564040B (en) Fingerprint activity detection method based on deep convolution characteristics
CN103390151B (en) Method for detecting human face and device
CN1952953A Face posture recognition method based on restricted Boltzmann machine neural network
CN103310235B (en) A kind of steganalysis method based on parameter identification and estimation
CN117292148B (en) Tunnel surrounding rock level assessment method based on directional drilling and test data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20161221