CN107463906A - Method and device for face detection - Google Patents

Method and device for face detection

Info

Publication number
CN107463906A
Authority
CN
China
Prior art keywords
convolution
image
recognition
face
fused feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710672535.7A
Other languages
Chinese (zh)
Inventor
孙旭东
吴鹏程
许主洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deep Map (xiamen) Technology Co Ltd
Original Assignee
Deep Map (xiamen) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deep Map (xiamen) Technology Co Ltd filed Critical Deep Map (xiamen) Technology Co Ltd
Priority to CN201710672535.7A priority Critical patent/CN107463906A/en
Publication of CN107463906A publication Critical patent/CN107463906A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method and device for face recognition. The method comprises the following steps: obtaining a target image on which face recognition is to be performed; performing multi-layer convolution processing on the target image; merging the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain a final fused feature image; classifying the final fused feature image; performing bbox regression processing on the final fused feature image; and calculating the face location in the target image from the classification result and the bbox regression result. Multi-layer convolution processing is applied to the initial target image, and the image features produced at different convolution levels are merged into the final fused feature image. Because the final fused feature image retains detail information from the intermediate convolution layers, it forms a multi-scale feature fusion whose description of the original target image is more accurate, so face recognition and location determination are also more accurate.

Description

Method and device for face detection
Technical field
The present invention relates to the technical field of image recognition, and in particular to a method and device for face recognition.
Background art
In face image detection, the Faster RCNN deep learning framework can be used; it is a state-of-the-art deep learning scheme for general object detection. The scheme essentially consists of two parts: (1) a region proposal network (RPN), used to generate a list of proposals for regions that may contain objects (also called regions of interest, RoIs); and (2) a Fast RCNN network used to classify image regions as object (or background) and to refine the borders of these regions.
A common face image detection pipeline has several steps: obtain an image, perform multi-stage convolution processing, derive a final feature image from the last convolution result using a preset algorithm, and finally calculate the face location recognition result from that feature image. In this pipeline, RoI pooling is performed only on the final feature layer, i.e. the region features are generated during the processing of that last feature image, so much of the local detail information contained in the shallow convolution layers is lost, which affects the accuracy of the final face recognition.
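To make the contrast concrete, the following is a minimal illustrative sketch (in PyTorch; the helper name, feature-map size and RoI coordinates are assumptions, not values from the patent) of the conventional pipeline described above, in which region features are pooled only from the deepest convolution feature map, so detail from the shallower layers never reaches the detection head.

import torch
import torch.nn.functional as F

def roi_pool_single_map(feature_map, roi, output_size=7):
    # Pool one RoI from a single (deepest) feature map only, as in the
    # conventional pipeline; feature_map is (C, H, W), roi is (x1, y1, x2, y2)
    # given in feature-map coordinates.
    x1, y1, x2, y2 = [int(v) for v in roi]
    region = feature_map[:, y1:y2 + 1, x1:x2 + 1]      # crop the region of interest
    return F.adaptive_max_pool2d(region, output_size)  # max-pool the crop to a fixed size

# Toy usage: one 512-channel top-level feature map and one proposal.
conv_top = torch.randn(512, 38, 50)
roi_feature = roi_pool_single_map(conv_top, (10, 5, 30, 20))
print(roi_feature.shape)  # torch.Size([512, 7, 7])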
Summary of the invention
In view of the above problem that intermediate information is severely lost during face recognition and the accuracy of the final face recognition suffers, it is necessary to provide a face recognition method and device that can identify faces accurately.
To achieve the object of the invention, a face recognition method is provided, comprising:
obtaining a target image on which face recognition is to be performed;
performing multi-layer convolution processing on the target image;
merging the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain a final fused feature image;
classifying the final fused feature image;
performing bbox regression processing on the final fused feature image;
and calculating the face location in the target image from the classification result and the bbox regression result.
In the face recognition method of one embodiment, merging the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain the final fused feature image comprises:
performing pooling processing separately on the predetermined number of convolution results;
performing L2-norm normalization on each pooled result;
merging the normalized convolution results;
performing scale adjustment and network-channel matching adjustment on the merged convolution results to obtain the final fused feature image.
In the face recognition method of one embodiment, performing scale adjustment and network-channel matching adjustment on the merged convolution results to obtain the final fused feature image comprises:
processing the scale-adjusted convolution results with a 1 × 1 convolution to obtain a final fused feature image whose number of channels is identical to the network channel number of the target image.
In the face recognition method of one embodiment, before the final fused feature image is classified and subjected to bbox regression processing, the method further comprises the following step:
passing the final fused feature image through two fully connected layers for full-connection processing.
In the face recognition method of one embodiment, the steps of performing multi-layer convolution processing on the target image, merging the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain the final fused feature image, and classifying the final fused feature image and performing position refinement to obtain the face location of the target image are included in a face recognition model, and the method further comprises a step of generating and training the face recognition model.
The step of generating and training the face recognition model comprises:
training an initial model on a first preset picture set to obtain an initial training model;
fine-tuning the obtained initial training model on a second preset picture set to obtain the face recognition model;
wherein the second preset picture set contains fewer pictures than the first preset picture set, and the picture annotation information in the second preset picture set is more accurate.
In the face recognition method of one embodiment, hard negative mining training is added in the step of generating and training the face recognition model, and/or multi-scale training is used.
A face recognition device comprises:
an image receiving module, configured to obtain a target image on which face recognition is to be performed;
a convolution processing module, configured to perform multi-layer convolution processing on the target image;
a merging processing module, configured to merge the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain a final fused feature image;
a classification processing module, configured to classify the final fused feature image;
a regression processing module, configured to perform bbox regression processing on the final fused feature image;
a position determination module, configured to calculate the face location in the target image from the classification result and the bbox regression result.
In the face recognition device of one embodiment, the merging processing module comprises:
a pooling unit, configured to perform pooling processing separately on the predetermined number of convolution results;
a normalization unit, configured to perform L2-norm normalization on each pooled result;
a merging unit, configured to merge the normalized convolution results;
a network channel adjustment unit, configured to perform scale adjustment and network-channel matching adjustment on the merged convolution results to obtain the final fused feature image.
In the face recognition device of one embodiment, the device further comprises:
a fully connected processing module, configured to pass the final fused feature image through two fully connected layers for full-connection processing.
In the face recognition device of one embodiment, the image receiving module, the convolution processing module, the merging processing module and the position determination module are included in a face recognition model, and the device further comprises a model training module configured to train the face recognition model, determine the configuration parameters in the model and form the final face recognition model. The model training module first trains an initial model on a first preset picture set to obtain an initial training model, and then fine-tunes the obtained initial training model on a second preset picture set to obtain the face recognition model; the second preset picture set contains fewer pictures than the first preset picture set, and the picture annotation information in the second preset picture set is more accurate.
Beneficial effects of the present invention include the following. The face recognition method provided by the present invention performs multi-layer convolution processing on the initial target image and merges the image features produced at different convolution levels to obtain a final fused feature image, rather than directly taking the image from the deepest convolution layer as the final feature image. Because the final fused feature image retains detail information from the intermediate convolution layers, it forms a multi-scale feature fusion whose description of the original target image is more accurate, so face recognition and location determination are also more accurate.
Brief description of the drawings
Fig. 1 is a flowchart of a specific embodiment of the face recognition method of the present invention;
Fig. 2 is a schematic diagram of the implementation process of a specific embodiment of the face recognition method of the present invention;
Fig. 3 is a schematic structural diagram of a specific embodiment of the face recognition device of the present invention;
Fig. 4 is a schematic structural diagram of the merging processing module in a specific embodiment of the face recognition device of the present invention;
Fig. 5 is a schematic diagram of another specific embodiment of the face recognition device of the present invention;
Fig. 6 is a schematic diagram of yet another embodiment of the face recognition device of the present invention.
Detailed description of the embodiments
To make the purpose, technical solution and advantages of the present invention clearer, embodiments of the face detection method and device of the present invention are described below with reference to the accompanying drawings. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it.
As shown in Fig. 1, the face recognition method of one embodiment comprises the following steps.
S100: obtain a target image on which face recognition is to be performed.
The method is typically implemented by a computer program. When face recognition is to be performed, the image to be recognized is input first; once the image information has been obtained, the face recognition program formally starts.
S200: perform multi-layer convolution processing on the target image. This step applies multi-layer convolution processing to the initially input target image. The number of convolution layers can be set in advance according to the processing capability of the computer and the desired processing accuracy. Referring to Fig. 2, three convolution layers with seven convolutions in total are used in this embodiment.
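For illustration only, a minimal sketch of a convolution stack with the layer names shown in Fig. 2 (convolution 3_3, 4_1 to 4_3 and 5_1 to 5_3) follows; the VGG-like structure, channel widths and pooling strides are assumptions, not values taken from the patent.

import torch
import torch.nn as nn

class ConvBackbone(nn.Module):
    # Layer names follow Fig. 2; widths and strides are assumed (VGG-16-like).
    def __init__(self):
        super().__init__()
        self.conv3_3 = nn.Conv2d(256, 256, 3, padding=1)
        self.pool3 = nn.MaxPool2d(2)
        self.conv4 = nn.Sequential(                     # convolution 4_1, 4_2, 4_3
            nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool4 = nn.MaxPool2d(2)
        self.conv5 = nn.Sequential(                     # convolution 5_1, 5_2, 5_3
            nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # x is assumed to have already passed the earlier convolution layers (256 channels).
        c3 = torch.relu(self.conv3_3(x))
        c4 = self.conv4(self.pool3(c3))
        c5 = self.conv5(self.pool4(c4))
        return c3, c4, c5   # feature maps at three depths, all reused by step S300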
S300: merge the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain a final fused feature image.
This step differs in an important way from conventional face recognition. In a traditional Faster RCNN network, RoI pooling is performed on the final feature layer, i.e. the feature image produced by the last of the multi-layer convolutions is pooled to generate the region features. This is not always optimal and can sometimes omit key features, because the deep convolution layers contain more of the macroscopic information of the original image, while the intermediate convolution results contain more local details. In this step, the pooled results of the convolution feature maps produced at several different levels are connected and merged to obtain the final fused feature image. Combining the results of multiple convolution layers lets the final fused feature image contain more detail information, so the description of the target image is more accurate.
S400: classify the final fused feature image. This is the class prediction step in Fig. 2 and can be implemented with softmax. This step is one of the optimization targets.
The second optimization target is step S500: perform bbox regression processing on the final fused feature image, i.e. the bbox regression step in Fig. 2.
S600: calculate the face location in the target image from the classification result and the bbox regression result.
It should be noted that steps S300 to S500 can be handled with processing similar to conventional face recognition, and the locations of the faces in the input target image are finally determined. Referring to Fig. 2, the face location is obtained by combining the class prediction result with the bbox regression result.
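As an illustration of how the two results can be combined, the sketch below turns class scores and bbox regression outputs into face boxes using the usual Faster RCNN delta parameterisation; that parameterisation and the score threshold are assumptions, not requirements of the patent.

import torch

def decode_faces(cls_scores, bbox_deltas, proposals, score_thresh=0.8):
    # cls_scores:  (N, 2) raw scores for (background, face)
    # bbox_deltas: (N, 4) predicted (dx, dy, dw, dh) refinements
    # proposals:   (N, 4) proposal boxes as (x1, y1, x2, y2)
    probs = torch.softmax(cls_scores, dim=1)[:, 1]          # P(face) from class prediction

    widths = proposals[:, 2] - proposals[:, 0]
    heights = proposals[:, 3] - proposals[:, 1]
    ctr_x = proposals[:, 0] + 0.5 * widths
    ctr_y = proposals[:, 1] + 0.5 * heights

    dx, dy, dw, dh = bbox_deltas.unbind(dim=1)              # bbox regression output
    new_ctr_x = ctr_x + dx * widths
    new_ctr_y = ctr_y + dy * heights
    new_w = widths * torch.exp(dw)
    new_h = heights * torch.exp(dh)

    boxes = torch.stack([new_ctr_x - 0.5 * new_w, new_ctr_y - 0.5 * new_h,
                         new_ctr_x + 0.5 * new_w, new_ctr_y + 0.5 * new_h], dim=1)
    keep = probs > score_thresh                              # keep confident face boxes
    return boxes[keep], probs[keep]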
With the face recognition method of this embodiment, multi-layer convolution processing is performed on the initial target image and the image features produced at different convolution levels are merged to obtain a final fused feature image, rather than directly taking the image from the deepest convolution layer as the final feature image. Because the final fused feature image of this embodiment retains detail information from the intermediate convolution layers, it forms a multi-scale feature fusion whose description of the original target image is more accurate, so face recognition and location determination are also more accurate.
In one embodiment, as shown in Fig. 3, step S300 of merging the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain the final fused feature image comprises the following steps.
S310: perform pooling processing separately on the predetermined number of convolution results.
Referring to Fig. 2, this embodiment contains three convolution processing layers: the first convolution layer performs one convolution (convolution 3_3); the second convolution layer performs three convolution computations, namely convolution 4_1, convolution 4_2 and convolution 4_3; and the third convolution layer also performs three convolutions, namely convolution 5_1, convolution 5_2 and convolution 5_3. In the face recognition method of this embodiment, besides the deepest convolution result, i.e. the result of the third convolution layer, the features of the first-layer and second-layer convolution results are also extracted, and the extracted convolution result features are processed so that the final fused feature image is obtained from the image results after each convolution layer. This makes the final fused feature image contain the local detail information of the intermediate convolution layers.
The number of convolution results to extract, i.e. the choice of the predetermined number, can be decided according to the required image processing precision and the number of convolution layers applied to the image. As one embodiment, as shown in Fig. 2, the features of every convolution layer can be extracted. In other embodiments, the features of certain convolutions in the multi-layer convolution processing can be chosen; for example, referring to Fig. 2, only the first-layer and third-layer convolution features could be extracted to synthesize the final fused feature image. Preferably, when convolution result features are extracted, the deepest convolution result feature is always chosen, which corresponds to the third-layer convolution result feature in Fig. 2.
In summary, the predetermined number can typically be set to 3 or 2, and the convolution layer results to be extracted can be configured.
The convolution results extracted from the multiple lower-level convolution layers are pooled, giving an RoI-pooled result feature for each layer.
S320: perform L2-norm normalization on each pooled result. Referring to Fig. 2, after the convolution results of the different layers have been extracted, a corresponding number of processing paths are created and processed separately. Normalization makes it easy to merge the images from the different paths afterwards.
Specifically, RoI pooling uniformly converts a convolution result of arbitrary size (the convolution result is a high-dimensional tensor) into a specified size (also a high-dimensional tensor). For example, to turn a 4 × 4 tensor into a 2 × 2 one, the four numbers in the top-left corner of the 4 × 4 tensor are reduced to their maximum, which becomes the single number in the top-left corner of the 2 × 2 tensor; the four numbers in the top-right corner become the number in the top-right corner, and so on. L2 regularization means applying L2 normalization to the tensor obtained after RoI pooling; this is a basic mathematical concept: each number in the tensor is multiplied by a constant so that the sum of squares of all the numbers equals 1.
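The 4 × 4 to 2 × 2 example and the L2 step can be written out directly; a minimal sketch (PyTorch):

import torch
import torch.nn.functional as F

# RoI pooling: shrink a 4 x 4 map to 2 x 2 by taking the maximum of each quadrant.
x = torch.arange(16.0).reshape(1, 1, 4, 4)       # one channel, 4 x 4
pooled = F.adaptive_max_pool2d(x, output_size=2)
print(pooled)                                    # each output cell is the maximum of its 2 x 2 quadrant

# L2 normalization: multiply every element by one constant so the sum of squares is 1.
flat = pooled.flatten()
normalized = flat / flat.norm(p=2)
print((normalized ** 2).sum())                   # approximately 1.0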
S330: merge the normalized convolution results.
S340: perform scale adjustment and network-channel matching adjustment on the merged convolution results to obtain the final fused feature image.
Specifically, scale adjustment means that after the features of the multiple paths have been merged, the result is re-scaled to match the original feature scale. Suppose that multi-scale feature fusion were not used, i.e. no lower-level convolution layer features were added, and only the pooled result of the topmost convolution were output to the next processing step; the norm of that input tensor would be N. In this method, after feature fusion, the norm of the merged tensor is M, so the merged tensor is multiplied by a number that changes its norm to N. Simply put, the merged tensor is multiplied by a constant so that the tensor of the final fused feature image matches the tensor of a final feature image built from only the topmost image features.
For network channel matching, a 1 × 1 convolution can be used to match the number of channels of the original network, i.e. the scale-adjusted convolution results are processed with a 1 × 1 convolution to obtain a final fused feature image whose number of channels is identical to the network channel number of the target image.
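Putting steps S310 to S340 together, the following illustrative sketch pools one RoI from each selected convolution layer, L2-normalizes each result, concatenates them, re-scales them and applies a 1 × 1 convolution to match the channel count. The class name, channel counts and the fixed value of the re-scaling constant are assumptions; in the text above the constant is chosen so that the norm of the fused tensor matches the top-layer-only case, and a fixed number merely stands in for that choice here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleRoIFusion(nn.Module):
    # Sketch of S310-S340: RoI pooling per layer, L2 normalization, merging,
    # re-scaling, and channel matching with a 1 x 1 convolution.
    def __init__(self, in_channels=(256, 512, 512), out_channels=512,
                 output_size=7, rescale=1000.0):
        super().__init__()
        self.output_size = output_size
        self.rescale = rescale                    # restores the magnitude removed by the L2 norm
        self.match = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, feature_maps, roi_boxes):
        # feature_maps: list of (C_i, H_i, W_i) tensors; roi_boxes: one integer
        # (x1, y1, x2, y2) per map, already in that map's coordinates.
        pooled = []
        for fmap, (x1, y1, x2, y2) in zip(feature_maps, roi_boxes):
            crop = fmap[:, y1:y2 + 1, x1:x2 + 1]
            feat = F.adaptive_max_pool2d(crop, self.output_size)          # S310: RoI pooling
            feat = F.normalize(feat.flatten().unsqueeze(0), p=2, dim=1)   # S320: L2 normalization
            pooled.append(feat.view(fmap.size(0), self.output_size, self.output_size))
        fused = torch.cat(pooled, dim=0) * self.rescale                   # S330: merge, then re-scale
        return self.match(fused.unsqueeze(0)).squeeze(0)                  # S340: 1 x 1 channel matching

Fed with crops of one proposal taken from, say, convolution 3_3, 4_3 and 5_3, this produces a 512-channel 7 × 7 fused RoI feature that can be passed on to the fully connected layers described next.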
Referring to Fig. 2, after the network channel number has been matched and before the final fused feature image is classified and regressed, two fully connected layers, Fc6 and Fc7, are also applied.
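A minimal sketch of this head, i.e. Fc6 and Fc7 followed by the class-prediction and bbox-regression branches, is given below; the layer sizes are assumptions.

import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    # Fc6/Fc7 plus the two prediction branches; dimensions are assumed.
    def __init__(self, in_features=512 * 7 * 7, hidden=4096, num_classes=2):
        super().__init__()
        self.fc6 = nn.Linear(in_features, hidden)
        self.fc7 = nn.Linear(hidden, hidden)
        self.cls_score = nn.Linear(hidden, num_classes)       # class prediction branch
        self.bbox_pred = nn.Linear(hidden, 4 * num_classes)   # bbox regression branch

    def forward(self, fused_roi_feature):
        x = fused_roi_feature.flatten(start_dim=-3)   # (C*7*7,) or (N, C*7*7)
        x = torch.relu(self.fc6(x))
        x = torch.relu(self.fc7(x))
        return self.cls_score(x), self.bbox_pred(x)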
In the actual face recognition process, the steps of performing multi-layer convolution processing on the target image, merging the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain the final fused feature image, and classifying the final fused feature image and performing position refinement to obtain the face location of the target image are included in a face recognition model. Before face recognition is performed on an image, the model must first be trained: the parameters in the model are determined and the recognition accuracy is tuned. In this method, the step of generating and training the recognition model includes the following.
S010: train an initial model on a first preset picture set to obtain an initial training model.
S020: fine-tune the obtained initial training model on a second preset picture set to obtain the face recognition model.
It should be noted that the second preset picture set contains fewer pictures than the first preset picture set, and its picture annotation information is more accurate. For example, starting from a model pre-trained on a classification dataset of a million pictures, the model is first trained on a face picture dataset of tens of thousands of images and can then optionally be fine-tuned on a smaller, more accurately annotated dataset.
In addition, to improve the accuracy of the model, hard negative mining can be added during model training, and approaches such as multi-scale training can be used.
In a specific example, the initial model is first trained on the WIDER FACE dataset to obtain the initial training model, where the initial model is a model pre-trained for classification on ImageNet. The obtained initial training model is then trained with hard negative mining to obtain an intermediate training model. Finally, the intermediate training model is fine-tuned on the FDDB dataset to obtain the face recognition model, and multi-scale training is used during the fine-tuning.
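As an illustration of the hard negative mining used in this example, the sketch below keeps every positive sample plus only the hardest (highest-loss) negatives; the 3:1 negative-to-positive ratio and the toy numbers are assumptions, and the ImageNet / WIDER FACE / FDDB schedule itself is ordinary training wrapped around a selection step of this kind.

import torch

def hard_negative_mining(losses, labels, neg_pos_ratio=3):
    # Keep all positives and only the highest-loss negatives (the ratio is assumed).
    pos_mask = labels == 1
    num_neg = int(pos_mask.sum().item()) * neg_pos_ratio
    neg_losses = losses.clone()
    neg_losses[pos_mask] = float("-inf")                       # positives are never treated as negatives
    hard_neg_idx = neg_losses.argsort(descending=True)[:num_neg]
    keep = pos_mask.clone()
    keep[hard_neg_idx] = True
    return keep

# Toy demo: 1 positive and 7 negatives; the 3 hardest negatives are kept.
losses = torch.tensor([0.1, 2.0, 0.3, 1.5, 0.05, 0.9, 0.2, 0.4])
labels = torch.tensor([1, 0, 0, 0, 0, 0, 0, 0])
print(hard_negative_mining(losses, labels))
# tensor([ True,  True, False,  True, False,  True, False, False])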
Based on the same inventive concept, an embodiment of the present invention provides a face recognition device. Since the principle by which this device solves the problem is similar to that of the face recognition method above, the device can be implemented according to the specific steps of the method, and repeated parts are not described again.
As shown in Fig. 3, the face recognition device of one embodiment includes an image receiving module 100, a convolution processing module 200, a merging processing module 300, a classification processing module 400, a regression processing module 500 and a position determination module 600. The image receiving module 100 is configured to obtain a target image on which face recognition is to be performed; the convolution processing module 200 is configured to perform multi-layer convolution processing on the target image; the merging processing module 300 is configured to merge the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain a final fused feature image; the classification processing module 400 is configured to classify the final fused feature image; the regression processing module 500 is configured to perform bbox regression processing on the final fused feature image; and the position determination module 600 is configured to calculate the face location in the target image from the classification result and the bbox regression result.
With the face recognition device of this embodiment, multi-layer convolution processing is performed on the initial target image and the image features produced at different convolution levels are merged to obtain a final fused feature image, rather than directly taking the image from the deepest convolution layer as the final feature image. Because the final fused feature image of this embodiment retains detail information from the intermediate convolution layers, it forms a multi-scale feature fusion whose description of the original target image is more accurate, so face recognition and location determination are also more accurate.
As shown in Fig. 4, in the face recognition device of one embodiment, the merging processing module 300 includes a pooling unit 310, a normalization unit 320, a merging unit 330 and a network channel adjustment unit 340. The pooling unit 310 is configured to perform pooling processing separately on the predetermined number of convolution results; the normalization unit 320 is configured to perform L2-norm normalization on each pooled result; the merging unit 330 is configured to merge the normalized convolution results; and the network channel adjustment unit 340 is configured to perform scale adjustment and network-channel matching adjustment on the merged convolution results to obtain the final fused feature image. The face recognition device of this embodiment connects the pooled results of multiple convolution feature maps to obtain the final fused feature image. Specifically, the features from multiple lower-level convolution layers are separately RoI-pooled and L2-normalized, then these result features are merged and re-scaled to match the original feature scale, and a 1 × 1 convolution is applied to match the number of channels of the original network.
As shown in Fig. 5, the face recognition device of another embodiment further includes a fully connected processing module 301, configured to pass the final fused feature image produced by the merging processing module through two fully connected layers for full-connection processing.
As shown in Fig. 6, in one embodiment, the image receiving module 100, the convolution processing module 200, the merging processing module 300 and the position determination module 600 are included in a face recognition model. The face recognition device further includes a model training module 001, configured to train the face recognition model, determine the configuration parameters in the model and form the final face recognition model. The model training module 001 first trains an initial model on a first preset picture set to obtain an initial training model, and then fine-tunes the obtained initial training model on a second preset picture set to obtain the face recognition model. The second preset picture set contains fewer pictures than the first preset picture set, and its picture annotation information is more accurate.
A person of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above embodiments express only several implementations of the present invention, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the patent. It should be pointed out that a person of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (10)

  1. A face recognition method, characterized by comprising:
    obtaining a target image on which face recognition is to be performed;
    performing multi-layer convolution processing on the target image;
    merging the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain a final fused feature image;
    classifying the final fused feature image;
    performing bbox regression processing on the final fused feature image;
    calculating the face location in the target image from the classification result and the bbox regression result.
  2. The face recognition method according to claim 1, characterized in that merging the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain the final fused feature image comprises:
    performing pooling processing separately on the predetermined number of convolution results;
    performing L2-norm normalization on each pooled result;
    merging the normalized convolution results;
    performing scale adjustment and network-channel matching adjustment on the merged convolution results to obtain the final fused feature image.
  3. The face recognition method according to claim 2, characterized in that performing scale adjustment and network-channel matching adjustment on the merged convolution results to obtain the final fused feature image comprises:
    processing the scale-adjusted convolution results with a 1 × 1 convolution to obtain a final fused feature image whose number of channels is identical to the network channel number of the target image.
  4. The face recognition method according to claim 1, characterized in that before the final fused feature image is classified and subjected to bbox regression processing, the method further comprises the following step:
    passing the final fused feature image through at least one fully connected layer for full-connection processing.
  5. The face recognition method according to claim 1, characterized in that the steps of performing multi-layer convolution processing on the target image, merging the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain the final fused feature image, and classifying the final fused feature image and performing position refinement to obtain the face location of the target image are included in a face recognition model, and the method further comprises a step of generating and training the face recognition model;
    the step of generating and training the face recognition model comprises:
    training an initial model on a first preset picture set to obtain an initial training model;
    fine-tuning the obtained initial training model on a second preset picture set to obtain the face recognition model;
    wherein the second preset picture set contains fewer pictures than the first preset picture set, and the picture annotation information in the second preset picture set is more accurate.
  6. The face recognition method according to claim 5, characterized in that hard negative mining training is added in the step of generating and training the face recognition model, and/or multi-scale training is used.
  7. A face recognition device, characterized by comprising:
    an image receiving module, configured to obtain a target image on which face recognition is to be performed;
    a convolution processing module, configured to perform multi-layer convolution processing on the target image;
    a merging processing module, configured to merge the pooled results of a predetermined number of convolution feature maps from the multi-layer convolution processing to obtain a final fused feature image;
    a classification processing module, configured to classify the final fused feature image;
    a regression processing module, configured to perform bbox regression processing on the final fused feature image;
    a position determination module, configured to calculate the face location in the target image from the classification result and the bbox regression result.
  8. The face recognition device according to claim 7, characterized in that the merging processing module comprises:
    a pooling unit, configured to perform pooling processing separately on the predetermined number of convolution results;
    a normalization unit, configured to perform L2-norm normalization on each pooled result;
    a merging unit, configured to merge the normalized convolution results;
    a network channel adjustment unit, configured to perform scale adjustment and network-channel matching adjustment on the merged convolution results to obtain the final fused feature image.
  9. The face recognition device according to claim 7, characterized in that the device further comprises:
    a fully connected processing module, configured to pass the final fused feature image through two fully connected layers for full-connection processing.
  10. The face recognition device according to claim 7, characterized in that the image receiving module, the convolution processing module, the merging processing module and the position determination module are included in a face recognition model, and the device further comprises a model training module configured to train the face recognition model, determine the configuration parameters in the model and form the final face recognition model; the model training module first trains an initial model on a first preset picture set to obtain an initial training model;
    and then fine-tunes the obtained initial training model on a second preset picture set to obtain the face recognition model; and
    the second preset picture set contains fewer pictures than the first preset picture set, and the picture annotation information in the second preset picture set is more accurate.
CN201710672535.7A 2017-08-08 2017-08-08 Method and device for face detection Pending CN107463906A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710672535.7A 2017-08-08 2017-08-08 Method and device for face detection

Publications (1)

Publication Number Publication Date
CN107463906A true CN107463906A (en) 2017-12-12

Family

ID=60547664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710672535.7A Pending CN107463906A (en) Method and device for face detection

Country Status (1)

Country Link
CN (1) CN107463906A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156744A (en) * 2016-07-11 2016-11-23 西安电子科技大学 SAR target detection method based on CFAR detection and deep learning
CN106503617A (en) * 2016-09-21 2017-03-15 北京小米移动软件有限公司 Model training method and device
CN106485268A (en) * 2016-09-27 2017-03-08 东软集团股份有限公司 An image recognition method and device
CN106683087A (en) * 2016-12-26 2017-05-17 华南理工大学 Tongue coating constitution identification method based on a deep neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SHENXIAOLU1984: "Detailed explanation of the Faster RCNN object detection algorithm", CSDN Blog *
HUI GUOBAO: "Military target image classification technology based on deep neural networks", Modern Navigation *
WANG YONG ET AL.: "Forest fire image classification based on sparse autoencoder deep neural networks", Computer Engineering and Applications *
JIA JUNSU ET AL.: "Safety helmet wearing detection based on deformable part models", Application Research of Computers *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108492297A (en) * 2017-12-25 2018-09-04 重庆理工大学 MRI brain tumor localization and intra-tumor segmentation method based on deep cascaded convolutional networks
CN110135223A (en) * 2018-02-08 2019-08-16 浙江宇视科技有限公司 Face detection method and device
CN108446724A (en) * 2018-03-12 2018-08-24 江苏中天科技软件技术有限公司 A fused feature classification method
CN109034210A (en) * 2018-07-04 2018-12-18 国家新闻出版广电总局广播科学研究院 Object detection method based on super-feature fusion and multi-scale pyramid network
CN109034210B (en) * 2018-07-04 2021-10-12 国家新闻出版广电总局广播科学研究院 Target detection method based on super-feature fusion and multi-scale pyramid network
CN109165583A (en) * 2018-08-09 2019-01-08 北京飞搜科技有限公司 Multi-size fusion face detection method, device and storage medium
CN109165583B (en) * 2018-08-09 2021-01-05 苏州飞搜科技有限公司 Multi-size fusion face detection method and device and storage medium
CN109446922A (en) * 2018-10-10 2019-03-08 中山大学 A real-time robust face detection method
CN109446922B (en) * 2018-10-10 2021-01-08 中山大学 Real-time robust face detection method
CN109344779A (en) * 2018-10-11 2019-02-15 高新兴科技集团股份有限公司 A face detection method based on convolutional neural networks in a ring road scene
CN110660074A (en) * 2019-10-10 2020-01-07 北京同创信通科技有限公司 Method for establishing a steel scrap grade classification neural network model
CN113408325A (en) * 2020-03-17 2021-09-17 北京百度网讯科技有限公司 Method and device for identifying surrounding environment of vehicle and related equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20171212