WO2008151470A1 - Method for robust human face detection in a complicated background image - Google Patents

Method for robust human face detection in a complicated background image

Info

Publication number
WO2008151470A1
WO2008151470A1 (PCT/CN2007/001893)
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
sample
training
classifier
Prior art date
Application number
PCT/CN2007/001893
Other languages
English (en)
Chinese (zh)
Inventor
Xiaoqing Ding
Yong Ma
Chi Fang
Changsong Liu
Liangrui Peng
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to PCT/CN2007/001893 priority Critical patent/WO2008151470A1/fr
Publication of WO2008151470A1 publication Critical patent/WO2008151470A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • the present invention relates to the field of face recognition technology, and in particular, to a face detection method in a complex background image.
  • Face detection is the determination of the position and size of faces in an image or image sequence. It is widely used in systems such as face recognition, video surveillance, and intelligent human-machine interfaces. Face detection, especially in complex backgrounds, remains a difficult problem: variations in facial appearance, skin color, and expression, movement in three-dimensional space, and external factors such as beards, hair, glasses, and lighting cause great changes in the face pattern, and cluttered background objects make faces hard to separate from their surroundings.
  • The mainstream approach to face detection is detection based on statistical learning from samples.
  • Such methods generally introduce a "non-face" category and learn model parameters that distinguish the "face" category from the "non-face" category, rather than relying on surface rules derived from visual impressions.
  • This is more reliable in the statistical sense: it avoids errors caused by incomplete or inaccurate observation, and the robustness of the detection system can be improved simply by enlarging the training sample set.
  • Most such detectors use a simple-to-complex multi-layer classifier structure: most background windows are first excluded by simple classifiers, and only the remaining windows are further judged by complex classifiers, thereby achieving a faster detection speed.
  • However, this approach ignores the fact that the costs of the two kinds of classification error between face and non-face are very unbalanced in real images: the prior probability of a face appearing in an image is much lower than that of non-faces, and since the main purpose of face detection is to find the positions of the faces, the cost of misclassifying a face as a non-face is much higher than the reverse. Training each layer only for the minimum classification error rate and then adjusting the classifier's threshold to reach a low False Rejection Rate (FRR) on faces cannot simultaneously achieve a low False Acceptance Rate (FAR) on the non-face patterns; the result is too many classifier layers, an overly complicated structure, slow detection, and degraded overall performance of the algorithm.
  • To address this, the present invention proposes a face detection method based on the Cost-Sensitive AdaBoost (CS-AdaBoost) algorithm, which minimizes the overall misclassification risk when training each layer of strong classifier.
  • The object of the present invention is to realize a face detector capable of robustly locating faces in complex background images. The face detection method includes two stages, training and detection;
  • In the training stage, an image acquisition device is used to collect face and non-face samples, and the samples are normalized in size and illumination; microstructure features are then extracted from the training samples to build a microstructure feature database; the feature database is combined with the CS-AdaBoost algorithm to train one layer of face/non-face strong classifier; this training process is repeated to obtain multiple layers of classifiers of simple-to-complex structure; finally these classifiers are cascaded to obtain a complete face detector;
  • In the detection stage, the input image is first repeatedly scaled by a fixed ratio to obtain a series of images, and every small window of the resulting image series is judged by the multi-layer classifier: each window is first gray-normalized, then its microstructure features are extracted and the trained face detector judges it; if the output of any layer's classifier falls below that layer's threshold, the window is regarded as non-face and receives no further judgment, and only windows that pass all layers of classifiers are regarded as faces, as in the sketch below.
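  • The following minimal Python sketch illustrates this scanning scheme. The names are illustrative assumptions, not the patent's interface: `cascade` is taken to be a list of (strong classifier, threshold) pairs ordered simple to complex, and `resize_bilinear` is the helper sketched under step 2 below.

```python
import numpy as np

def detect_faces(image, cascade, scale=1.25, win=20):
    """Scan an image pyramid with a win x win window and judge every
    window with the cascaded strong classifiers (a sketch)."""
    detections = []
    img = image.astype(np.float64)
    factor = 1.0
    while min(img.shape) >= win:
        for y in range(img.shape[0] - win + 1):
            for x in range(img.shape[1] - win + 1):
                window = img[y:y+win, x:x+win]
                window = (window - window.mean()) / (window.std() + 1e-8)  # gray normalization
                for clf, theta in cascade:
                    if clf(window) < theta:    # any layer below its threshold: reject early
                        break
                else:                          # passed every layer: report a face
                    detections.append((int(x * factor), int(y * factor), int(win * factor)))
        factor *= scale                        # next, coarser pyramid level
        img = resize_bilinear(image.astype(np.float64), 1.0 / factor)
    return detections
```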
  • Training yields face/non-face strong classifiers built from microstructure features, whose false rejection rate is below 10⁻³ and whose false acceptance rate is below 10⁻⁶.
  • the training phase in turn contains the following steps:
  • Step 1. Sample collection: images are collected with any type of camera, digital camera, or scanner; faces are manually marked and cropped from them to create a face training sample database; non-face training samples are randomly cropped from landscape pictures that contain no faces; the resulting face and non-face samples form the training sample set;
  • Step 2. Normalization, including linear normalization of sample illumination and size:
  • Geometric normalization: the point (x', y') of the output image lattice corresponds to the point (x'/r_x, y'/r_y) of the input image, where r_x and r_y are the horizontal and vertical scaling ratios. Since x'/r_x and y'/r_y are in general not integers, the value f(x'/r_x, y'/r_y) must be estimated from the values at nearby known discrete points by linear interpolation.
  • Writing x = x'/r_x, y = y'/r_y, x₀ = ⌊x⌋, y₀ = ⌊y⌋, Δx = x − x₀, Δy = y − y₀, the (bilinear) interpolation process can be expressed as:
  f(x, y) ≈ (1−Δx)(1−Δy) f(x₀, y₀) + Δx(1−Δy) f(x₀+1, y₀) + (1−Δx)Δy f(x₀, y₀+1) + ΔxΔy f(x₀+1, y₀+1)
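  • A minimal sketch of this size normalization, assuming grayscale images stored as 2-D numpy arrays:

```python
import numpy as np

def resize_bilinear(img, ratio):
    """Linear size normalization: output pixel (xp, yp) is sampled at
    input point (xp/ratio, yp/ratio) by interpolating its four
    discrete neighbours."""
    h, w = img.shape
    out_h, out_w = int(h * ratio), int(w * ratio)
    out = np.empty((out_h, out_w), dtype=np.float64)
    for yp in range(out_h):
        for xp in range(out_w):
            x, y = xp / ratio, yp / ratio          # source point, generally non-integer
            x0, y0 = int(x), int(y)
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            dx, dy = x - x0, y - y0
            out[yp, xp] = ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1]
                           + (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])
    return out
```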
  • Grayscale normalization: owing to factors such as external illumination and imaging equipment, the brightness or contrast of a sample may be abnormal, and strong shadows or reflections may occur, so gray-level equalization must be applied to the geometrically normalized samples to improve the gray-level distribution and the consistency among patterns. The present invention performs gray-level equalization by mean-and-variance normalization, adjusting the gray mean and variance of each sample image to given values μ₀ and σ₀. First, the mean μ and variance σ² of the sample image G(x, y) (0 ≤ x < W, 0 ≤ y < H) are computed:
  μ = (1/WH) Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} G(x, y),  σ² = (1/WH) Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} (G(x, y) − μ)²
  • The gray value of each pixel is then transformed as G'(x, y) = (σ₀/σ)(G(x, y) − μ) + μ₀, which adjusts the gray mean and variance of the image to the given values μ₀ and σ₀ and completes the gray-level normalization of the sample; a sketch follows.
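  • A minimal sketch of this gray-level normalization; the default target values μ₀ and σ₀ are illustrative placeholders, since the text treats them as given constants:

```python
import numpy as np

def normalize_gray(sample, mu0=128.0, sigma0=32.0):
    """Adjust the sample's gray mean and standard deviation to the
    given targets (mu0, sigma0)."""
    g = sample.astype(np.float64)
    mu, sigma = g.mean(), g.std()
    if sigma < 1e-8:                  # flat window: nothing to rescale
        return np.full_like(g, mu0)
    return (g - mu) * (sigma0 / sigma) + mu0
```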
  • Step 3. The integral image is computed so that the microstructure features can be extracted quickly; this in turn contains the following steps:
  • Five kinds of microstructure templates are used to extract the microstructure features of a sample; each microstructure feature is obtained as the difference between the sums of the gray values of the pixels covered by the black region and by the white region of the template.
  • With the integral image defined as II(x, y) = Σ_{x'≤x, y'≤y} I(x', y') (and II(−1, ·) = II(·, −1) = 0), the sum over the w×h rectangle with top-left corner (x, y) is
  S(x, y, w, h) = II(x+w−1, y+h−1) − II(x−1, y+h−1) − II(x+w−1, y−1) + II(x−1, y−1),
  and the five types of microstructure features g(x, y, w, h) are expressed through S as follows:
  • Class (a): the black region and the white region are bilaterally symmetric and of equal area; w denotes the width of each region and h its height:
  g(x, y, w, h) = S(x, y, w, h) − S(x+w, y, w, h)
  • Class (d): the two black regions lie in the first and third quadrants and the two white regions in the second and fourth quadrants of a 2w×2h template; every black region and every white region has the same area, and w, h are defined as in (a):
  g(x, y, w, h) = S(x+w, y, w, h) + S(x, y+h, w, h) − S(x, y, w, h) − S(x+w, y+h, w, h)
  • Class (e): the black region lies at the center of the white region, with its top, bottom, left, and right sides each separated by 2 pixels from the corresponding sides of the white region; w and h denote the width and height of the white region:
  g(x, y, w, h) = 2S(x+2, y+2, w−4, h−4) − S(x, y, w, h). A sketch of the integral image and the class (a) feature follows.
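  • A minimal sketch of the integral image, the rectangle sum S, and the class (a) feature as written above; the arrays are padded with a zero row and column so that the II(−1, ·) = 0 convention needs no special cases, and the black-minus-white sign convention follows the description:

```python
import numpy as np

def integral_image(img):
    """Padded integral image: ii[y+1, x+1] = II(x, y), ii[0, :] = ii[:, 0] = 0."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Pixel sum of the w x h rectangle with top-left corner (x, y),
    from four integral-image lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def feature_a(ii, x, y, w, h):
    """Class (a): black (left) rectangle sum minus white (right) rectangle sum."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```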
  • Step 4 Feature selection and classifier design
  • Each layer of face/non-face strong classifier is trained with the above training sample set and the CS-AdaBoost algorithm, and the multi-layer strong classifiers are cascaded to form a complete face detector. This includes the following steps:
  • (1) Initialization: set the layer index i = 1. The training targets for each layer of strong classifier are defined as FRR_i ≤ 0.02% on the face training set and FAR_i ≤ 60% on the non-face training set; the targets for the entire face detector are FRR ≤ 0.5% on the face training set and FAR ≤ 3.2×10⁻⁶ on the non-face training set, where
  • FAR = (number of non-face samples discriminated as faces ÷ total number of non-face samples) × 100%
  • FRR = (number of face samples judged as non-faces ÷ total number of face samples) × 100%
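  • Under the idealizing assumption that the layers err independently, these per-layer targets compound multiplicatively; a quick check against the 19-layer detector reported below (the actual FAR comes out lower still, because each layer is trained on the non-face windows that passed the previous layers):

```python
# Rough compounding of the per-layer targets over a 19-layer cascade,
# assuming the layers behave independently (an idealization).
frr_layer, far_layer, layers = 0.0002, 0.60, 19

frr_total = 1 - (1 - frr_layer) ** layers   # ~0.0038, i.e. ~0.38% (target: <= 0.5%)
far_total = far_layer ** layers             # ~6.1e-5 before bootstrapping
print(frr_total, far_total)
```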
  • a complete face detector can be trained by the above steps;
  • the detecting phase refers to determining whether a face is included in an input image, and includes the following steps:
  • the input image is captured, that is, the image is captured by any device such as a camera, a digital camera, or a scanner;
  • the microstructure features of each small window are quickly extracted and feature normalization is performed; the trained multi-layer face/non-face strong classifiers then judge the window: if the window passes the judgment of all layers of strong classifiers, it is considered to contain a face and its position is output; otherwise the window is discarded without further processing; all the faces in an input image can be detected by the above steps.
  • Training the i-th layer strong classifier in step 4(2) includes the following steps:
  • Choose the cost factor c: the misclassification cost of the face category is taken as c times that of the non-face category; c should be greater than 1 and should be reduced gradually toward 1 as the number of strong-classifier layers increases;
  • Initialize the training sample weights: each face sample receives initial weight c/((c+1)·n_face) and each non-face sample 1/((c+1)·n_nonface), where n_face and n_nonface are the numbers of face and non-face training samples;
  • Here sub denotes a 20×20-pixel sample and g_j(sub) the j-th feature extracted from it; θ_j is the decision threshold for the j-th feature, obtained by counting the j-th feature over all collected face and non-face samples so that the FRR on the face samples satisfies the specified requirement; l_i ∈ {0, 1} is the category label of sample sub_i, corresponding to the non-face and face categories respectively;
  • In each round, take j_t = argmin_j ε_j, the feature with the minimum weighted classification error ε_j, and record its corresponding weak classifier as h_t; a sketch of the whole loop follows.
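  • A minimal sketch of one layer's training under this cost-weighted initialization. The exhaustive stump search, the ±1 label coding, and the discrete-AdaBoost update are illustrative choices, not the patent's exact recipe:

```python
import numpy as np

def best_stump(F, y, w):
    """Exhaustive search for the decision stump (feature j, threshold
    theta, polarity p) with the lowest weighted error (slow reference
    version)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(F.shape[1]):
        for theta in np.unique(F[:, j]):
            for p in (1, -1):
                pred = p * np.where(F[:, j] >= theta, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, theta, p, err)
    return best

def train_cs_layer(F, y, c, T):
    """One layer of cost-sensitive boosting: F[i, j] = g_j(sub_i),
    y[i] = +1 for face, -1 for non-face."""
    faces, nonfaces = y == 1, y == -1
    w = np.empty(len(y))
    w[faces] = c / ((c + 1) * faces.sum())        # face errors cost c times more
    w[nonfaces] = 1.0 / ((c + 1) * nonfaces.sum())
    strong = []
    for _ in range(T):
        w = w / w.sum()
        j, theta, p, err = best_stump(F, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = p * np.where(F[:, j] >= theta, 1, -1)
        w = w * np.exp(-alpha * y * pred)         # raise the weights of misclassified samples
        strong.append((j, theta, p, alpha))
    return strong        # classify by comparing sum of alpha_t * h_t(sub) with a threshold
```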
  • Face authentication is among the most user-friendly methods in biometric identification technology; it aims to perform automatic personal identity authentication from computer images of the face in place of traditional passwords, certificates, and seals.
  • This identity authentication method has the advantages of being difficult to forge, impossible to lose, and convenient to use.
  • Figure 1 Hardware of a typical face detection system
  • Figure 2 The acquisition process of the training sample
  • Figure 4 The composition of the face detection system
  • Figure 5 Five kinds of microstructure feature templates
  • Figure 8 The training process of the strong classifier
  • Figure 9 shows an example of the actual detection process of a face in an image
  • Figure 10 A face-recognition check-in system based on the algorithm

Detailed description
  • When implementing a face detection system, the face detector should first be trained on a sufficiently large set of collected samples; the trained detector can then be used to detect faces in any input image.
  • the hardware structure of the whole system is shown in Fig. 1.
  • 101 is a scanner
  • 102 is a camera
  • 103 is a computer.
  • the training process and detection process of the system are shown in Figure 4. The following sections describe the various parts of the system in detail:
  • Sample normalization, including linear normalization of sample illumination and size, is performed exactly as in step 2 above;
  • The five microstructure templates, and the computation of each microstructure feature from the integral image as the difference between the pixel-gray sums of the template's black and white regions, are as given in step 3 above;
  • Applying the five templates at every admissible position and scale of a 20×20 sample yields 92267 features in total, denoted g_j(sub), 1 ≤ j ≤ 92267;
  • Each layer of face/non-face strong classifier is trained with the above training sample set and the CS-AdaBoost algorithm, and the multi-layer strong classifiers are cascaded to form a complete face detector; this comprises classifier design and feature selection:
  • To achieve a fast enough detection speed, a face detector must be layered (as shown in Figure 7), consisting of a cascade of simple-to-complex strong classifiers: background windows in the image are first excluded by strong classifiers of simple structure, and the remaining windows are then judged by strong classifiers of complex structure (a strong classifier here is one that achieves high performance on the training set; the weak classifiers below are classifiers whose error rate on the training set is only slightly below 0.5).
  • the present invention trains each layer of strong classifiers using the CS-AdaBoost algorithm.
  • The CS-AdaBoost algorithm is a weak-classifier ensemble algorithm that combines weak classifiers into a strong classifier on the training set; unlike standard boosting, it treats the risks caused by the two types of classification error differently, so that the overall classification risk on the training set is minimized.
  • The strong classifier obtained by such training minimizes the classification error (FAR) on the non-face class subject to a sufficiently low classification error (FRR) on the face class.
  • The weak classifiers in the present invention are tree classifiers each constructed from a one-dimensional feature: sub is a sample of 20×20 pixels, g_j(sub) is the j-th feature extracted from it, and θ_j is the decision threshold corresponding to the j-th feature (the threshold is obtained by counting the j-th feature over all collected face and non-face samples so that the FRR on the face samples satisfies the specified requirement); h_j(sub) is the decision output of the tree classifier built on the j-th feature, labeling the sample as face or non-face according to which side of θ_j the value g_j(sub) falls on. In this way each weak classifier completes its decision with a single threshold comparison; a total of 92267 weak classifiers are obtained, one per feature. A sketch of the threshold choice follows.
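  • A minimal sketch of choosing θ_j from the face samples' feature values so that the face FRR meets the specification, assuming (for illustration) that larger feature values indicate "face":

```python
import numpy as np

def stump_threshold(face_vals, frr_max=0.0002):
    """Pick theta_j so that at most a fraction frr_max of the face
    samples fall below it (and are thus rejected by the stump)."""
    return np.quantile(face_vals, frr_max)

# usage: h_j(sub) = 1 (face) if g_j(sub) >= theta_j else 0 (non-face)
```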
  • the CS-AdaBoost algorithm is combined with the above weak classifier construction method for training a face/non-face strong classifier.
  • The training steps are as follows (denote the training sample set as the collection of pairs (sub_i, l_i)):
  • Set the misclassification cost of face samples to c times that of non-face samples; c should be greater than 1 and should be reduced toward 1 as the layer index of the strong classifier increases;
  • T is the number of weak classifiers to be used in the layer; T should increase gradually as the number of strong-classifier layers increases; the specific values chosen are shown in Table 1;
  • FAR = (number of non-face samples judged as faces ÷ total number of non-face samples) × 100%
  • FRR = (number of face samples judged as non-faces ÷ total number of face samples) × 100%
  • The final trained face detector consists of 19 layers of strong classifiers using a total of 3139 weak classifiers.
  • The FRR of the entire detector on the face validation set is about 0.15%, and its FAR on the non-face training set is about 3.2×10⁻⁶.
  • Table 1 gives the training results of several of the classifiers.
  • A window that passes the judgment of every layer is considered to contain a face.
  • The detection phase determines whether faces are contained in an input image, as shown in Fig. 9, and includes the following steps:
  • the input image is captured, that is, the image is captured by any device such as a camera, a digital camera, or a scanner;
  • For each 20×20 window with top-left corner (x₀, y₀), the gray mean μ and variance σ² are obtained directly from the integral image II and the squared-pixel integral image SqrII:
  μ = [II(x₀+19, y₀+19) + II(x₀−1, y₀−1) − II(x₀−1, y₀+19) − II(x₀+19, y₀−1)] / 400
  σ² = [SqrII(x₀+19, y₀+19) + SqrII(x₀−1, y₀−1) − SqrII(x₀−1, y₀+19) − SqrII(x₀+19, y₀−1)] / 400 − μ²
  A sketch of this computation follows the next item.
  • The microstructure features of the small window are quickly extracted and feature normalization is performed; the trained multi-layer face/non-face strong classifiers then judge the window: if the window passes the judgment of all layers of strong classifiers, it is considered to contain a face and its position is output; otherwise the window is discarded without further processing; all the faces in an input image can be detected by the above steps.
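  • A minimal sketch of the two integral-image formulas above, using arrays padded with a zero row and column (as in integral_image earlier) so that the −1 indices need no special cases:

```python
def window_mean_std(ii, sqr_ii, x0, y0, n=20):
    """Mean and standard deviation of an n x n window from the padded
    integral image `ii` and squared-pixel integral image `sqr_ii`."""
    area = n * n
    s = ii[y0 + n, x0 + n] - ii[y0, x0 + n] - ii[y0 + n, x0] + ii[y0, x0]
    sq = sqr_ii[y0 + n, x0 + n] - sqr_ii[y0, x0 + n] - sqr_ii[y0 + n, x0] + sqr_ii[y0, x0]
    mu = s / area
    var = max(sq / area - mu * mu, 0.0)   # clamp tiny negative rounding error
    return mu, var ** 0.5
```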
  • Example 1 Face-based identification check-in system (Fig. 10)
  • the CMU test set contains a total of 130 images with complex backgrounds and 507 faces.
  • Each image was scaled up to 13 times at a ratio of 1.25, and a total of 71,040,758 image windows were judged.
  • The comparison results are shown in Table 2; the overall performance of the algorithm is better than that of Viola [Viola P, Jones M. Rapid object detection using a boosted cascade of simple features. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2001] and of Schneiderman and Kanade.
  • The BANCA database consists of 6,540 images with complex backgrounds and illumination, each containing one frontal upright face with large variation in pitch.
  • On it, the correct detection rate of the present invention is 98.8%, versus 94.9% for FaceIt; in a third-party test conducted by China Aerospace Information Corporation on a collection of images each containing one face, the detection accuracy of this algorithm is 98.6%, versus 98.0% for FaceIt.
  • Face authentication is one of the most user-friendly methods in biometric authentication technology and has recently received extensive attention.
  • The system uses face information to automatically verify a person's identity.
  • The face detection module used in it is the research result of this work.
  • The system also took part in the FAT2004 face authentication competition organized at ICPR 2004.
  • The competition included 13 face recognition algorithms from 11 academic and commercial institutions, including Carnegie Mellon University in the United States, the Neuroinformatik Institute in Germany, and the University of Surrey in the United Kingdom.
  • The system submitted by the laboratory won first place on all three evaluation indicators, with error rates about 50% lower than those of the second-place entry.
  • The research results of this work are applied in the face detection module of the submitted system, helping ensure that the system's overall performance is at the internationally advanced level.
  • The present invention can robustly detect human faces in images with complex backgrounds, achieves excellent detection results in experiments, and has very broad application prospects.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention concerns a method for robust human face detection in a complicated background image. The method adopts high-efficiency, highly redundant microstructure features to express the gray-level distribution of the eyes, nose, and other parts of the human face pattern, and adopts the cost-sensitive AdaBoost algorithm to select the microstructure features of face and non-face samples in order to train strong classifiers, so that each layer classifier can reduce the false acceptance rate on non-face samples while guaranteeing an extremely low false rejection rate, thereby achieving face detection with better performance in complicated background images using a simpler structure. In addition, post-processing algorithms are used to further reduce the rate of false detections.
PCT/CN2007/001893 2007-06-15 2007-06-15 Method for robust human face detection in a complicated background image WO2008151470A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2007/001893 WO2008151470A1 (fr) 2007-06-15 2007-06-15 Method for robust human face detection in a complicated background image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2007/001893 WO2008151470A1 (fr) 2007-06-15 2007-06-15 Method for robust human face detection in a complicated background image

Publications (1)

Publication Number Publication Date
WO2008151470A1 true WO2008151470A1 (fr) 2008-12-18

Family

ID=40129205

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2007/001893 WO2008151470A1 (fr) 2007-06-15 2007-06-15 Method for robust human face detection in a complicated background image

Country Status (1)

Country Link
WO (1) WO2008151470A1 (fr)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005008210A2 (fr) * 2003-05-14 2005-01-27 Polcha Michael P System and method for performing secure access control based on modified biometric data
CN1731417A (zh) * 2005-08-19 2006-02-08 清华大学 Robust human face detection method in complex background images

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894262A (zh) * 2009-05-20 2010-11-24 索尼株式会社 Method and apparatus for classifying images
WO2010133161A1 (fr) 2009-05-20 2010-11-25 索尼公司 Image classification method and device
CN101894262B (zh) * 2009-05-20 2014-07-09 索尼株式会社 Method and apparatus for classifying images
CN109118424A (zh) * 2018-09-26 2019-01-01 旺微科技(上海)有限公司 Image-processing memory management method and management system for object detection
CN111222380A (zh) * 2018-11-27 2020-06-02 杭州海康威视数字技术股份有限公司 Living-body detection method and apparatus, and training method for its recognition model
CN111222380B (zh) * 2018-11-27 2023-11-03 杭州海康威视数字技术股份有限公司 Living-body detection method and apparatus, and training method for its recognition model
CN109784244A (zh) * 2018-12-29 2019-05-21 西安理工大学 Accurate low-resolution face recognition method for a specified target
CN109784244B (zh) * 2018-12-29 2022-11-25 西安理工大学 Accurate low-resolution face recognition method for a specified target
CN109948433A (zh) * 2019-01-31 2019-06-28 浙江师范大学 Embedded face tracking method and device
CN110276257A (zh) * 2019-05-20 2019-09-24 阿里巴巴集团控股有限公司 Face recognition method, apparatus, system, server, and readable storage medium
CN110348413A (zh) * 2019-07-17 2019-10-18 上海思泷智能科技有限公司 Portable offline face-recognition evidence-collection system for individual use
CN110956080A (zh) * 2019-10-14 2020-04-03 北京海益同展信息科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN110956080B (zh) * 2019-10-14 2023-11-03 京东科技信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN111881715B (zh) * 2020-06-03 2023-07-28 西安电子科技大学 Face detection hardware acceleration method, system, and device
CN111881715A (zh) * 2020-06-03 2020-11-03 西安电子科技大学 Face detection hardware acceleration method, system, and device
CN111814553A (zh) * 2020-06-08 2020-10-23 浙江大华技术股份有限公司 Face detection method, model training method, and related devices
CN111814553B (zh) * 2020-06-08 2023-07-11 浙江大华技术股份有限公司 Face detection method, model training method, and related devices
CN112882057A (zh) * 2021-01-19 2021-06-01 中国科学院西安光学精密机械研究所 Interpolation-based super-resolution method for photon-counting non-line-of-sight three-dimensional imaging
CN112882057B (zh) * 2021-01-19 2023-12-08 中国科学院西安光学精密机械研究所 Interpolation-based super-resolution method for photon-counting non-line-of-sight three-dimensional imaging
CN116363736B (zh) * 2023-05-31 2023-08-18 山东农业工程学院 Digitization-based big data user information collection method
CN116363736A (zh) * 2023-05-31 2023-06-30 山东农业工程学院 Digitization-based big data user information collection method

Similar Documents

Publication Publication Date Title
WO2008151470A1 (fr) Method for robust human face detection in a complicated background image
CN111401257B (zh) Face recognition method under unconstrained conditions based on cosine loss
John et al. Pedestrian detection in thermal images using adaptive fuzzy C-means clustering and convolutional neural networks
CN101630363B (zh) Fast detection method for faces in color images under complex backgrounds
US6671391B1 (en) Pose-adaptive face detection system and process
Abdullah et al. Optimizing face recognition using PCA
WO2008151471A1 (fr) Method for robust and precise eye positioning in a complicated background image
CN104504362A (zh) Face detection method based on convolutional neural networks
CN103400122A (zh) Fast recognition method for living faces
Anila et al. Simple and fast face detection system based on edges
CN102682287A (zh) Pedestrian detection method based on saliency information
CN111539351B (zh) Multi-task cascaded face frame-selection and comparison method
Mady et al. Efficient real time attendance system based on face detection case study “MEDIU staff”
CN106339665A (zh) Fast face detection method
Desai et al. Real-time implementation of Indian license plate recognition system
Pandey et al. An optimistic approach for implementing viola jones face detection algorithm in database system and in real time
Anantharajah et al. Quality based frame selection for video face recognition
Xu et al. A novel multi-view face detection method based on improved real adaboost algorithm
Abusham Face verification using local graph stucture (LGS)
Paliy Face detection using Haar-like features cascade and convolutional neural network
Rajalakshmi et al. A review on classifiers used in face recognition methods under pose and illumination variation
Navabifar et al. A Fusion Approach Based on HOG and Adaboost Algorithm for Face Detection under Low-Resolution Images.
Cruz et al. Multiple Face Recognition Surveillance System with Real-Time Alert Notification using 3D Recognition Pattern
Wang et al. Research on face detection based on fast Haar feature
Bukis et al. Survey of face detection and recognition methods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07721466

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07721466

Country of ref document: EP

Kind code of ref document: A1