CN107066955A - Method for restoring a whole face from a local facial region - Google Patents

Method for restoring a whole face from a local facial region

Info

Publication number
CN107066955A
Authority
CN
China
Prior art keywords
facial image
face
dictionary
local
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710181236.3A
Other languages
Chinese (zh)
Other versions
CN107066955B (en)
Inventor
姚琪
卓越
罗畅
刘靖峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Vision Information Technology Co Ltd
Original Assignee
Wuhan Vision Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Vision Information Technology Co Ltd filed Critical Wuhan Vision Information Technology Co Ltd
Priority to CN201710181236.3A priority Critical patent/CN107066955B/en
Publication of CN107066955A publication Critical patent/CN107066955A/en
Application granted granted Critical
Publication of CN107066955B publication Critical patent/CN107066955B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; face representation
    • G06V 40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/172: Classification, e.g. identification
    • G06V 40/173: Classification, e.g. identification; face re-identification, e.g. recognising unknown faces across different face tracks
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/951: Indexing; web crawling techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a method for restoring a whole face from a local facial region. A K-SVD algorithm is used to train, synchronously on whole faces and on local (incomplete) faces, two dictionaries that are linked to each other. In actual use, the complete face is restored by cross-querying the two dictionaries. The method can effectively improve recognition accuracy in practical applications when the person being detected wears a mask, is hooded, or otherwise occludes part of the face.

Description

Method for restoring a whole face from a local facial region
Technical field
The present invention relates to the technical field of face recognition, and in particular to a method for restoring a face from a local region.
Background Art
In smart cities, security protection and public-security investigation, face recognition is a commonly used artificial-intelligence technique. When the person under investigation wears a mask, is hooded, or shields part of the face by other means, existing face recognition technology cannot identify the detected subject, because existing face recognition techniques require the whole face to perform recognition.
Face recognition, a typical biometric identity technique, does not require the active cooperation of the person being detected, and in recent years has been widely applied in human-computer interaction, security protection, identity authentication, entertainment and medical care. Face recognition technology comprises face detection, feature extraction, and feature matching and classification. Face detection methods include HAAR scanning, HOG scanning, AdaBoost learning, deep-learning (CNN) object detection and the like. Feature extraction methods include PCA eigenfaces, deep-learning CNN features and the like. Feature matching and classification include 1-NN, k-NN and SVM. By organically combining the above face detection, feature extraction and feature matching methods, the face recognition technology in general use today is obtained.
Face detection and feature extraction in existing face recognition technology are both based on the whole face. In real life, when the person being detected wears a mask, is hooded, or otherwise occludes part of the face, existing face recognition technology fails. A method that can perform face recognition from local facial features is therefore needed, to improve recognition accuracy in practical applications when the person being detected wears a mask, is hooded, or otherwise occludes part of the face.
Summary of the Invention
To address the problems of the prior art, the present invention proposes a method for restoring a whole face from a local facial region. A K-SVD algorithm is used to train, synchronously on whole faces and on local incomplete faces, two dictionaries that are linked to each other. In actual use, the complete face is restored by cross-querying the two dictionaries. The method can effectively improve recognition accuracy in practical applications when the person being detected wears a mask, is hooded, or otherwise occludes part of the face.
The technical scheme adopted by the present invention to solve the above technical problem is as follows:
The present invention provides a method for restoring a whole face from a local facial region, comprising the following steps:
S1, training stage
The face images in the face training set are grayscaled and illumination-equalized, and landmark marking is carried out, generating for each face a complete-face image Y and a corresponding local-face image Y^. Then, according to the K-SVD algorithm, with an empty dictionary as the initial dictionary and the complete face images Y and local face images Y^ as the input of the dictionary, the dictionaries are trained synchronously, yielding mutually linked optimized dictionaries: D corresponding to the complete face images and D^ corresponding to the local face images.
S2, recognition and restoration stage
From the occluded or incomplete target face image, the local facial part is frame-selected; after grayscaling and illumination equalization, landmark marking is carried out to obtain the local target face image Y^'. Y^' is used as the input of the dictionary D^, and the sparse coefficient X of the local target face image Y^' is obtained by solving Y^' ≈ D^X. The sparse coefficient X is then fed to the dictionary D, and the restored whole face Y' is obtained by the query Y' = DX.
The beneficial effects of the invention are as follows:
The present invention proposes restoring (inferring) the whole face from the eye region of an occluded face, so that existing face recognition technology can still be used when the face is partially occluded.
Brief description of the drawings
Fig. 1 illustrates the method for restoring the whole face from a local facial region, taking the eye region as an example.
Fig. 2 is a comparison of ROC curves obtained in recognition tests using the original complete faces and faces recovered from the eye region.
Embodiment
The invention is further described below with reference to the accompanying drawings and an embodiment.
The present invention provides a method for restoring a whole face from a local facial region. Taking the eye region as an example, the method comprises the following steps:
S1, training stage
The face images in the face training set are grayscaled and illumination-equalized, and landmark marking is carried out, generating for each face a complete-face image Y and a corresponding eye-region image Y^. Then, according to the K-SVD algorithm, with an empty dictionary as the initial dictionary and the complete face images Y and eye-region images Y^ as the input of the dictionary, the dictionaries are trained synchronously, yielding mutually linked optimized dictionaries D (corresponding to the complete face images) and D^ (corresponding to the eye-region images), together with a shared sparse coefficient X.
This stage specifically comprises the following sub-steps:
S101, at least 1,000,000 face pictures are crawled from the internet using web-crawler technology, or face pictures are obtained from the police. The more original face pictures the training set contains, the more accurate the dictionaries obtained after training. The face pictures are grayscaled and illumination-equalized, and landmark marking is applied to each face picture using the histogram-of-gradients (HOG) algorithm, generating the complete-face images Y.
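A minimal sketch of this preprocessing step, assuming OpenCV and a fixed 64x64 working size (both illustrative choices, not specified by the patent); the HOG-based landmark marking is only indicated as a comment:

    import cv2
    import numpy as np

    def preprocess_face(path, size=(64, 64)):
        """Grayscaling and illumination equalization, roughly as in step S101.

        The fixed working size and the use of histogram equalization are
        illustrative assumptions; the patent only requires grayscaling,
        illumination equalization and HOG-based landmark marking.
        """
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # grayscale the picture
        img = cv2.equalizeHist(img)                    # simple illumination equalization
        img = cv2.resize(img, size)                    # common size so faces stack into Y
        return img.astype(np.float32) / 255.0

    # HOG-based landmark marking (e.g. with a detector such as dlib's) would be
    # applied here to align the faces before they are vectorized into columns of Y.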
S102, for each complete face picture, the eye region is manually frame-selected and the gray values of the unselected part of the face are set to zero, generating the local face image Y^.
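The local image Y^ can be produced by keeping only the pixels inside the frame-selected eye box and zeroing the rest. A small sketch, where the box coordinates are hypothetical placeholders standing in for the manual frame selection:

    import numpy as np

    def mask_to_region(face, box):
        """Step S102: keep only the frame-selected region of a preprocessed face.

        face is a 2-D grayscale array; box = (top, bottom, left, right) is the
        manually selected eye region; all pixels outside it are set to zero.
        """
        top, bottom, left, right = box
        local = np.zeros_like(face)
        local[top:bottom, left:right] = face[top:bottom, left:right]
        return local

    # Example with hypothetical coordinates: an eye band in the upper part of a
    # 64x64 face.
    # y_local = mask_to_region(y_full, box=(16, 30, 8, 56))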
S103, with an empty dictionary as the initial dictionary, the complete face images Y and their corresponding eye-region images Y^ are used as the input of the dictionary, and formula (1) is solved to obtain the optimized dictionaries D and D^ corresponding to the complete face images Y and the eye-region images Y^. Here β is the weight of the eye-region training term, with a value of 80 to 150, so that training is biased toward the unoccluded part, i.e. the eye region; the rank of the sparse coefficient X is between 20 and 50.
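One common way to realize a joint training of this kind is to stack each complete-face vector on top of its β-weighted eye-region vector and learn a single dictionary over the stacked signals, so that both halves share the same sparse coefficients; the top rows of the learned dictionary then give D and the bottom rows give D^. The sketch below follows that scheme and uses scikit-learn's MiniBatchDictionaryLearning as a stand-in for K-SVD; both choices are assumptions for illustration, not the exact solver of formula (1):

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    def train_coupled_dictionaries(Y, Y_loc, n_atoms=512, sparsity=30, beta=100.0):
        """Jointly learn D (complete faces) and D^ (local faces) with shared codes.

        Y and Y_loc have shape (n_pixels, n_faces); each column is one vectorized
        complete training face and its masked local version (steps S101-S102).
        beta weights the local-region term (the patent suggests 80-150) and
        `sparsity` plays the role of the sparse-coefficient rank (20-50).
        n_atoms = 512 is an illustrative dictionary size, not from the patent.
        """
        stacked = np.vstack([Y, np.sqrt(beta) * Y_loc])       # one stacked signal per column
        learner = MiniBatchDictionaryLearning(
            n_components=n_atoms,
            transform_algorithm="omp",
            transform_n_nonzero_coefs=sparsity,
            random_state=0,
        )
        learner.fit(stacked.T)                                # sklearn wants samples as rows
        D_stacked = learner.components_.T                     # shape (2 * n_pixels, n_atoms)
        n_pix = Y.shape[0]
        D = D_stacked[:n_pix, :]                              # complete-face dictionary D
        D_loc = D_stacked[n_pix:, :] / np.sqrt(beta)          # local-face dictionary D^
        return D, D_loc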
S2, recognition and restoration stage, as shown in Fig. 1:
From the occluded or incomplete target face image, the eye-region part is frame-selected; after grayscaling and illumination equalization, landmark marking is carried out to obtain the target eye-region image Y^'. Y^' is used as the input of the dictionary D^, and the sparse coefficient X of the target eye-region image Y^' is obtained by solving Y^' ≈ D^X. The sparse coefficient X is then fed to the dictionary D, and the restored whole face Y' is obtained by the query Y' = DX.
This stage specifically comprises the following sub-steps:
S201, the occluded target face image is frame-selected and subjected to grayscaling and illumination equalization. Landmark marking is applied to the target face image using the histogram-of-gradients (HOG) algorithm; the pixel gray values of the eye region of the target face are retained and the gray values of the occluded pixels are set to zero, yielding the target eye-region image Y^'.
S202, the target eye-region image Y^' is fed to the dictionary D^ for query, and the sparse coefficient X of the target eye-region image Y^' is obtained by solving Y^' ≈ D^X.
S203, the sparse coefficient X of the target eye-region image Y^' is fed to the dictionary D, and the query Y' = DX yields the complete face image Y' corresponding to the target eye-region image Y^'.
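At the restoration stage the only computations are a sparse coding of Y^' over D^ and a matrix product with D. The sketch below uses orthogonal matching pursuit for the coding step, which is a typical choice for sparse coding but an assumption here:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def restore_whole_face(y_local, D, D_loc, sparsity=30):
        """Steps S201-S203: code the local target image over D^, rebuild with D.

        y_local is the vectorized masked target image Y^' (same preprocessing and
        masking as in training); D and D_loc are the coupled dictionaries.
        """
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
        omp.fit(D_loc, y_local)          # solve Y^' ~= D^ X for a sparse coefficient X
        x = omp.coef_
        return D @ x                     # whole-face estimate Y' = D X

    # The returned vector is reshaped to the working image size and can then be
    # fed to an ordinary face-recognition pipeline.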
Preferably, in actual use, different dictionaries are generated for the eye region shifted by +10%, 0 and -10% (up, unshifted and down), and three complete faces are recovered with the respective dictionaries, improving the accuracy of the recovery.
The specific steps are as follows:
Training stage:
S101', a large number of face pictures are crawled from the internet using web-crawler technology, or face pictures are obtained from the police. The face pictures are grayscaled and illumination-equalized, and landmark marking is applied to the face pictures using the histogram-of-gradients (HOG) algorithm, generating the complete-face images Y.
S102', for each complete face picture, the eye region is frame-selected; the pixel gray values of the eye region are retained and the gray values of the unselected part are set to zero, generating the eye-region image Y^0. The selected box is then shifted upward by 10% of the eye height relative to the frame-selected eye region; the pixel gray values of the shifted box are retained and the gray values of the other pixels outside it are set to zero, giving the upward-shifted eye-region image Y^1. Likewise the box is shifted downward by 10% of the eye height relative to the eye region; the pixel gray values of the shifted box are retained and the gray values of the other pixels outside it are set to zero, giving the downward-shifted eye-region image Y^2.
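A sketch of how the three eye-region images of step S102' can be generated from one preprocessed face, assuming the eye box is given as (top, bottom, left, right) pixel coordinates:

    import numpy as np

    def shifted_eye_images(face, box, ratio=0.10):
        """Step S102': build Y^0, Y^1 and Y^2 from one preprocessed face.

        Returns the unshifted local image and the versions shifted up and down by
        `ratio` of the eye-box height; pixels outside each box are set to zero.
        """
        top, bottom, left, right = box
        shift = int(round((bottom - top) * ratio))
        images = []
        for dy in (0, -shift, +shift):                       # unshifted, up, down
            t = max(top + dy, 0)
            b = min(bottom + dy, face.shape[0])
            img = np.zeros_like(face)
            img[t:b, left:right] = face[t:b, left:right]
            images.append(img)
        return images                                        # [Y^0, Y^1, Y^2]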
S103', with an empty dictionary as the initial dictionary, the complete face images Y and their corresponding eye-region images Y^0 are used as the input of the dictionary, and formula (1) is solved to obtain the optimized dictionaries D0 and D^0 corresponding to the complete face images Y and the eye-region images Y^0.
Similarly,
The complete face images Y and the corresponding upward-shifted eye-region images Y^1 are used as the input of the dictionary, and formula (1) is solved to obtain the optimized dictionaries D1 and D^1 corresponding to Y and Y^1.
The complete face images Y and the corresponding downward-shifted eye-region images Y^2 are used as the input of the dictionary, and formula (1) is solved to obtain the optimized dictionaries D2 and D^2 corresponding to Y and Y^2.
Recognition and restoration stage:
S201', the occluded target face image is frame-selected and subjected to grayscaling and illumination equalization. Landmark marking is applied to the target face image using the histogram-of-gradients (HOG) algorithm; the pixel gray values of the eye region are retained and the gray values of the occluded pixels are set to zero, yielding the target eye-region image Y^'.
S202', the target eye-region image Y^' is fed to the dictionaries D^0, D^1 and D^2 respectively for query, and the sparse coefficients X0, X1 and X2 of the target eye-region image Y^' are obtained by solving Y^' ≈ D^0X0, Y^' ≈ D^1X1 and Y^' ≈ D^2X2.
S203', the sparse coefficients X0, X1 and X2 of the target eye-region image Y^' are fed to the dictionaries D0, D1 and D2 respectively, and the queries yield the possible complete face images Y0', Y1' and Y2' corresponding to the target eye-region image Y^'.
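The multi-dictionary restoration of steps S201' to S203' amounts to repeating the single-dictionary restoration once per dictionary pair. A short sketch, assuming a restore_whole_face helper like the one sketched above for step S203:

    def restore_candidates(y_local, dict_pairs, sparsity=30):
        """Steps S201'-S203': produce Y0', Y1' and Y2' from the three dictionary pairs.

        dict_pairs is [(D0, D^0), (D1, D^1), (D2, D^2)], trained on the unshifted,
        up-shifted and down-shifted eye regions respectively.
        """
        return [restore_whole_face(y_local, D, D_loc, sparsity)
                for D, D_loc in dict_pairs]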
Once the restored whole face Y' is obtained, it can be fed into currently existing face recognition software for further identification, or be distinguished manually.
Fig. 2 compares the ROC (receiver operating characteristic) curves of recognition tests on the original complete faces and on the faces recovered from the eye region; the dotted line shows the performance of recognition on the original whole faces, and the solid line shows recognition on whole faces recovered from the eye portion.
If another region is occluded, Y^ is simply changed to the remaining unoccluded part, Y remains the whole face, and the K-SVD algorithm is again used to train the two dictionaries D and D^. The method can therefore recover the face from whatever region is not occluded; for example, in VR (virtual-reality games) the player's eyes are covered while the rest of the face is exposed, so the eyes can be recovered from the other parts of the face.
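For an arbitrary unoccluded region, Y^ is obtained exactly as before, just with a different mask; in the VR example the mask is the complement of the eye box. A minimal sketch, assuming the visible region is given as a boolean mask:

    import numpy as np

    def mask_by_region(face, visible_mask):
        """Build Y^ from an arbitrary visible-region mask (True = not occluded)."""
        return np.where(visible_mask, face, 0.0)

    # VR case with hypothetical coordinates: the eye box is occluded and the rest
    # of the face is visible.
    # visible = np.ones(face.shape, dtype=bool); visible[16:30, 8:56] = False
    # y_local = mask_by_region(face, visible)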
The above method can be implemented with integrated circuits, embedded circuits, or cloud server software.
Parts not described in the specification belong to the prior art or common knowledge. This embodiment is merely illustrative of the invention and does not limit its scope; equivalent replacements and modifications made by those skilled in the art are considered to fall within the scope protected by the claims of the invention.

Claims (9)

1. A method for restoring a whole face from a local facial region, characterized by comprising the following steps:
S1, training stage
the face images in the face training set are grayscaled and illumination-equalized, and landmark marking is carried out, generating for each face a complete-face image Y and a corresponding local-face image Y^; then, according to the K-SVD algorithm, with an empty dictionary as the initial dictionary and the complete face images Y and local face images Y^ as the input of the dictionary, the dictionaries are trained synchronously, yielding mutually linked optimized dictionaries: D corresponding to the complete face images and D^ corresponding to the local face images;
S2, restoration and recognition stage
from the occluded or incomplete target face image, the local facial part is frame-selected; after grayscaling and illumination equalization, landmark marking is carried out to obtain the local target face image Y^'; Y^' is used as the input of the dictionary D^, the sparse coefficient X of the local target face image Y^' is obtained by solving Y^' ≈ D^X, and the sparse coefficient X is then fed to the dictionary D, the restored whole face Y' being obtained by the query Y' = DX.
2. The method for restoring a whole face from a local facial region according to claim 1, characterized in that step S1 specifically comprises the following steps:
S101, a large number of face pictures are crawled from the internet using web-crawler technology, or face pictures are obtained from the police, the number of pictures being above the million level; the face pictures are grayscaled and illumination-equalized, and landmark marking is applied to the face pictures using the histogram-of-gradients (HOG) algorithm and an SVM, generating the complete-face images Y;
S102, for each complete face picture, the local feature part is frame-selected and the gray values of the unselected part of the face are set to zero, generating the local face image Y^;
S103, with an empty dictionary as the initial dictionary, the complete face images Y and their corresponding local face images Y^ are used as the input of the dictionary, and formula (1) is solved to obtain the optimized dictionaries D and D^ corresponding to the complete face images Y and the local face images Y^, where β is the weight of the local-face training term.
3. The method for restoring a whole face from a local facial region according to claim 2, characterized in that step S2 specifically comprises the following sub-steps:
S201, the occluded target face image is frame-selected and subjected to grayscaling and illumination equalization; landmark marking is applied to the target face image using the histogram-of-gradients (HOG) algorithm, the pixel gray values of the local feature part are retained and the gray values of the occluded pixels are set to zero, yielding the local target face image Y^';
S202, the local target face image Y^' is fed to the dictionary D^ for query, and the sparse coefficient X of the local target face image Y^' is obtained by solving Y^' ≈ D^X;
S203, the sparse coefficient X of the local target face image Y^' is fed to the dictionary D, and the query yields the complete face image Y' corresponding to the local target face image Y^'.
4. The method for restoring a whole face from a local facial region according to claim 2 or 3, characterized in that the value of the weight β ranges from 80 to 150, and the rank of the sparse coefficient X is between 20 and 50.
5. The method for restoring a whole face from a local facial region according to claim 1, characterized in that step S1 specifically comprises the following sub-steps:
S101', a large number of face pictures are crawled from the internet using web-crawler technology, or face pictures are obtained from the police, the data volume being above the million level; the face pictures are grayscaled and illumination-equalized, and landmark marking is applied to the face pictures using the histogram-of-gradients (HOG) algorithm, generating the complete-face images Y;
S102', for each complete face picture, the local feature part is frame-selected, the pixel gray values of the local feature part are retained and the gray values of the unselected part of the face are set to zero, generating the local face image Y^0; the selected box is shifted upward by a specified ratio relative to the frame-selected local feature part, the pixel gray values of the shifted box are retained and the gray values of the other pixels outside it are set to zero, giving the upward-shifted local face image Y^1; the box is shifted downward by the specified ratio relative to the local feature part, the pixel gray values of the shifted box are retained and the gray values of the other pixels outside it are set to zero, giving the downward-shifted local face image Y^2;
S103', with an empty dictionary as the initial dictionary, the complete face images Y and their corresponding local face images Y^0 are used as the input of the dictionary, and formula (1) is solved to obtain the optimized dictionaries D0 and D^0 corresponding to the complete face images Y and the local face images Y^0;
Similarly,
the complete face images Y and the corresponding upward-shifted local face images Y^1 are used as the input of the dictionary, and formula (1) is solved to obtain the optimized dictionaries D1 and D^1 corresponding to Y and Y^1;
the complete face images Y and the corresponding downward-shifted local face images Y^2 are used as the input of the dictionary, and formula (1) is solved to obtain the optimized dictionaries D2 and D^2 corresponding to Y and Y^2.
6. The method for restoring a whole face from an eye region according to claim 5, characterized in that step S2 specifically comprises the following sub-steps:
S201', the occluded target face image is frame-selected and subjected to grayscaling and illumination equalization; landmark marking is applied to the target face image using the histogram-of-gradients (HOG) algorithm, the pixel gray values of the local feature part are retained and the gray values of the occluded pixels are set to zero, yielding the local target face image Y^';
S202', the local target face image Y^' is fed to the dictionaries D^0, D^1 and D^2 respectively for query, and the sparse coefficients X0, X1 and X2 of the local target face image Y^' are obtained by solving Y^' ≈ D^0X0, Y^' ≈ D^1X1 and Y^' ≈ D^2X2;
S203', the sparse coefficients X0, X1 and X2 of the local target face image Y^' are fed to the dictionaries D0, D1 and D2 respectively, and the queries yield the possible complete face images Y0', Y1' and Y2' corresponding to the local target face image Y^'.
7. The method for restoring a whole face from a local facial region according to claim 5 or 6, characterized in that the specified ratio is 10% of the height of the frame-selected local feature part.
8. The method for restoring a whole face from a local facial region according to claim 4, characterized in that the method is implemented by an integrated circuit, an embedded circuit, or cloud server software.
9. The method for restoring a whole face from a local facial region according to claim 7, characterized in that the method is implemented by an integrated circuit, an embedded circuit, or cloud server software.
CN201710181236.3A 2017-03-24 2017-03-24 Method for restoring whole human face from local human face area Active CN107066955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710181236.3A CN107066955B (en) 2017-03-24 2017-03-24 Method for restoring whole human face from local human face area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710181236.3A CN107066955B (en) 2017-03-24 2017-03-24 Method for restoring whole human face from local human face area

Publications (2)

Publication Number Publication Date
CN107066955A true CN107066955A (en) 2017-08-18
CN107066955B CN107066955B (en) 2020-07-17

Family

ID=59618228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710181236.3A Active CN107066955B (en) 2017-03-24 2017-03-24 Method for restoring whole human face from local human face area

Country Status (1)

Country Link
CN (1) CN107066955B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799870A (en) * 2012-07-13 2012-11-28 复旦大学 Single-training sample face recognition method based on blocking consistency LBP (Local Binary Pattern) and sparse coding
CN105550634A (en) * 2015-11-18 2016-05-04 广东微模式软件股份有限公司 Facial pose recognition method based on Gabor features and dictionary learning

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019153175A1 (en) * 2018-02-08 2019-08-15 国民技术股份有限公司 Machine learning-based occluded face recognition system and method, and storage medium
CN109063506A (en) * 2018-07-09 2018-12-21 江苏达实久信数字医疗科技有限公司 Privacy processing method for medical operating teaching system
CN109063506B (en) * 2018-07-09 2021-07-06 江苏达实久信数字医疗科技有限公司 Privacy processing method for medical operation teaching system
CN111353943B (en) * 2018-12-20 2023-12-26 杭州海康威视数字技术股份有限公司 Face image recovery method and device and readable storage medium
CN111353943A (en) * 2018-12-20 2020-06-30 杭州海康威视数字技术股份有限公司 Face image recovery method and device and readable storage medium
CN110457990A (en) * 2019-06-19 2019-11-15 特斯联(北京)科技有限公司 A kind of the safety monitoring video shelter intelligence complementing method and system of machine learning
CN110457990B (en) * 2019-06-19 2020-06-12 特斯联(北京)科技有限公司 Machine learning security monitoring video occlusion intelligent filling method and system
CN111093029A (en) * 2019-12-31 2020-05-01 深圳云天励飞技术有限公司 Image processing method and related device
CN111093029B (en) * 2019-12-31 2021-07-06 深圳云天励飞技术有限公司 Image processing method and related device
CN113222830A (en) * 2021-03-05 2021-08-06 北京字跳网络技术有限公司 Image processing method and device
CN113379683A (en) * 2021-05-24 2021-09-10 北京迈格威科技有限公司 Object detection method, device, equipment and medium
CN113486394A (en) * 2021-06-18 2021-10-08 武汉科技大学 Privacy protection and tamper-proofing method and system based on face block chain
CN113486394B (en) * 2021-06-18 2023-05-16 武汉科技大学 Privacy protection and tamper-proof method and system based on face block chain

Also Published As

Publication number Publication date
CN107066955B (en) 2020-07-17

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant