CN109543518A - A precise face recognition method based on integral projection - Google Patents
A precise face recognition method based on integral projection
- Publication number
- CN109543518A (application CN201811203735.9A)
- Authority
- CN
- China
- Prior art keywords
- picture
- face
- skin
- human face
- transformation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
The invention discloses a precise face recognition method based on integral projection, comprising the following steps: (1) picture preprocessing, covering three techniques: grayscale transformation, filtering/denoising, and wavelet enhancement; (2) skin detection and discrimination, screening the skin distribution region in the YCbCr color space; (3) face localization, using projection curves obtained by accumulating pixels along the horizontal and vertical directions to locate the skin-color region; (4) center localization of each organ, based on the respective features of the eyes, mouth, and nose; (5) region division based on the center points; (6) contour feature extraction for each organ after gray-value binarization of the picture. The invention determines the position of the face by skin-color segmentation and integral projection; on this basis, a template-matching facial-feature extraction model and a gray-level integral projection algorithm identify and judge the eyes, nose, lip shape, etc.
Description
Technical field
The present invention relates to face recognition methods, and in particular to a precise face recognition method based on integral projection.
Background technique
A face recognition system mainly comprises four parts: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and identification.
Different face images can be collected through a camera lens: still images, dynamic images, different positions, different expressions, and so on can all be acquired well. When a user is within the coverage of the acquisition device, the device automatically searches for and captures the user's face image.
Face image preprocessing is the process of, based on the face detection result, processing the image so that it ultimately serves feature extraction. The original image obtained by the system, limited by various conditions and random disturbances, often cannot be used directly; it must undergo preprocessing such as gray correction and noise filtering at an early stage of image processing. Preprocessing mainly includes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening of the face image.
The features usable by a face recognition system are generally divided into visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and so on. Face feature extraction is carried out on certain features of the face. Face feature extraction, also called face representation, is the process of modeling the features of a face.
The extracted feature data of the face image is matched against the feature templates stored in the database by setting a threshold: when the similarity exceeds this threshold, the matching result is output. Face recognition thus compares the features of the face to be identified with the stored face feature templates and judges the identity of the face according to the degree of similarity.
As face recognition technology is applied more and more widely in artificial intelligence and daily life, the requirements on face recognition also keep rising: it is no longer enough simply to recognize a face; the organs on the face, such as the eyes, nose, and mouth, must also be identified and comparatively analyzed. Research on how to recognize faces accurately and efficiently is therefore significant.
Summary of the invention
The purpose of the invention is to overcome the shortcomings of the prior art and provide a precise face recognition method based on integral projection.
The purpose of the present invention is achieved through the following technical solution:
A precise face recognition method based on integral projection, comprising the following steps:
(1) picture preprocessing, covering three techniques: grayscale transformation, filtering/denoising, and wavelet enhancement;
(2) skin detection and discrimination, screening the skin distribution region in the YCbCr color space;
(3) face localization, using projection curves obtained by accumulating pixels along the horizontal and vertical directions to locate the skin-color region;
(4) center localization of each organ, based on the respective features of the eyes, mouth, and nose;
(5) region division based on the center points;
(6) contour feature extraction for each organ after gray-value binarization of the picture.
Further, in step (1): for pictures whose contrast is lower than 120:1, the grayscale transformation first expands the gray interval of the original picture to [0, 255], i.e. enhances the contrast of the image so that it is easier to recognize, namely F' = 255 × (F − F_MIN) / (F_MAX − F_MIN), where F is the brightness of the original picture, F_MAX is the brightest value of the original picture, F_MIN is the darkest value of the original picture, and F' is the brightness of the transformed picture. Filtering/denoising uses the median filtering method to suppress noise while retaining boundaries, so the face boundary recognition result is not affected: the gray value of each pixel is set to the median of the gray values of all pixels in its neighborhood. Wavelet enhancement applies a wavelet transform to the picture, decomposing it by scale, position, and direction, and finally restores the picture by the inverse wavelet transform.
Compared with the prior art, the beneficial effects of the technical solution of the present invention are as follows:
1. The present invention proposes a precise face recognition scheme based on integral projection: the position of the face is determined by skin-color segmentation and integral projection, and on this basis a template-matching facial-feature extraction model and a gray-level integral projection algorithm identify and judge the eyes, nose, lip shape, etc.
2. By precisely recognizing the facial organs and extracting their features, the present invention promotes, to a certain degree, the technological development and industrialization of the artificial intelligence industry.
Description of the drawings
Fig. 1 is a schematic diagram of the eye feature rectangle.
Fig. 2-1 and Fig. 2-2 are schematic diagrams of the feature extraction of the eyes and the mouth, respectively.
Specific embodiment
The invention will be further described with reference to the accompanying drawing.
The present invention divides face recognition into two major steps. The first step confirms the face location: the original image is first preprocessed by grayscale transformation, filtering/denoising, and wavelet enhancement; the skin color is then discriminated and extracted; finally the face is located by gray-level integral projection. The second step precisely locates the organs on the basis of the confirmed face location: a feature rectangle for eye recognition, built from gray-level characteristics, is first used to locate the eyes; a feature rectangle for the lips is constructed by the same method to identify the lips; finally, the nose feature is characterized by the ratio of g, the vertical distance between the nostril line and the eye line, to h, the vertical distance between the nostril line and the mouth center point. The details are as follows:
1. Picture preprocessing, mainly using three techniques: grayscale transformation, filtering/denoising, and wavelet enhancement.
Grayscale transformation: for pictures that are less clear and of lower contrast, the relatively narrow gray interval of the original picture is first expanded to [0, 255], i.e. the contrast of the image is enhanced by grayscale transformation so that it is easier to recognize, namely F' = 255 × (F − F_MIN) / (F_MAX − F_MIN), where F is the brightness of the original picture, F_MAX is the brightest value of the original picture, F_MIN is the darkest value, and F' is the brightness of the transformed picture.
Filtering/denoising: the median filtering method suppresses noise while retaining boundaries, so the face boundary recognition result is not affected; the gray value of each pixel is set to the median of the gray values of all pixels in its neighborhood.
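A minimal 3×3 median filter in the spirit of the step above might look as follows (border pixels use only the neighbors that fall inside the image; the 3×3 window size is an assumption, since the patent does not fix the neighborhood):

```python
def median_filter_3x3(img):
    """3x3 median filter on a 2-D list of gray values."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # gather the in-bounds neighborhood of pixel (i, j)
            nb = [img[x][y]
                  for x in range(max(0, i - 1), min(h, i + 2))
                  for y in range(max(0, j - 1), min(w, j + 2))]
            nb.sort()
            out[i][j] = nb[len(nb) // 2]
    return out
```

An isolated impulse (a single 255 in a field of 10s) is removed while the surrounding values are preserved, which is the boundary-preserving behavior the text relies on.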
Wavelet enhancement: a wavelet transform is first applied to the picture, decomposing it by scale, position, and direction; the picture is finally restored by the inverse wavelet transform.
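The patent does not name the wavelet; as an assumption, a single-level 2-D Haar decomposition and its exact inverse can sketch the decompose/restore round trip:

```python
def haar_1d(v):
    """One level of the 1-D Haar transform: averages followed by details."""
    avg = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    det = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    return avg + det

def ihaar_1d(v):
    """Inverse of haar_1d."""
    n = len(v) // 2
    out = []
    for a, d in zip(v[:n], v[n:]):
        out += [a + d, a - d]
    return out

def haar_2d(img):
    """One decomposition level: rows then columns -> LL, HL, LH, HH quadrants."""
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def ihaar_2d(img):
    """Inverse of haar_2d: undo columns, then rows."""
    cols = [ihaar_1d(list(c)) for c in zip(*img)]
    rows = [list(r) for r in zip(*cols)]
    return [ihaar_1d(r) for r in rows]
```

Enhancement would scale the detail quadrants between `haar_2d` and `ihaar_2d`; with no scaling the round trip restores the picture exactly.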
2. Skin detection and discrimination
The YCbCr color space is less affected by brightness changes and can better delimit the skin distribution region; this property is used to screen skin-color regions. A linear mapping from RGB space to the YCbCr color space converts the color information into luminance information (Y) and chrominance information (Cb, Cr), where the components Cb and Cr are the differences of the blue and red components from a reference value, respectively. The RGB values of several skin pixels of the face are collected and mapped into the YCbCr color space by the linear mapping, and the skin points are fuzzily identified: with reference to a simple Gaussian model, the (Cb, Cr) values of the processed picture in the [Cb, Cr] two-dimensional plane are used to compute a covariance matrix C; each pixel is standardized to obtain its mean-centered chrominance vector y_i, from which its skin-color likelihood can be found as P(y_i) = exp(−y_iᵀ C⁻¹ y_i / 2); the larger the value, the closer the pixel is to the skin color.
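The RGB→YCbCr linear mapping is standard (ITU-R BT.601 coefficients). As a simplification of the patent's fitted Gaussian model, the sketch below uses a fixed Cb/Cr box that is common in the skin-detection literature; the thresholds 77–127 / 133–173 are an assumption, not values from the patent:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Fixed Cb/Cr box test (literature range, NOT the patent's Gaussian fit)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

A typical skin tone such as (220, 170, 140) falls inside the box, while saturated green does not.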
3. Face localization
The pixel values of the processed image are accumulated along the horizontal (x-axis) and vertical (y-axis) directions, yielding the projection curves of the image. If the picture has m × n pixels, the projection formulas are:
S(x_i) = Σ_{j=1}^{n} F(x_i, y_j),  S(y_j) = Σ_{i=1}^{m} F(x_i, y_j)
where F(x_i, y_j) is the pixel at a point of the picture, S(x) is the projection curve along the abscissa, and S(y) is the projection curve along the ordinate.
After the face boundary is identified by the above method, a trough indicates that the "skin points" at that position are outnumbered by the "non-skin points" in its neighborhood. Small fluctuations occur outside the face region, mainly because the hands are also classified as "skin points" during identification; but since the area of a hand is small relative to the face, it does not produce an obvious trough, so the method has a certain universality. The derivative, i.e. the slope, is then used to judge whether there is a "jump" from a non-skin region to a skin region.
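The projection sums above can be sketched directly on a binary skin mask (1 = skin point); the threshold-based bounding helper is an illustrative simplification of the slope/"jump" test described in the text:

```python
def integral_projections(mask):
    """Row and column sums of a binary skin mask (1 = skin point)."""
    s_y = [sum(row) for row in mask]        # horizontal projection (one value per row)
    s_x = [sum(col) for col in zip(*mask)]  # vertical projection (one value per column)
    return s_x, s_y

def bounding_range(proj, thresh):
    """First and last index where the projection exceeds thresh, or None."""
    idx = [i for i, v in enumerate(proj) if v > thresh]
    return (idx[0], idx[-1]) if idx else None
```

For a centered 2×2 skin blob in a 4×4 mask, both projections peak on indices 1–2, so the face box is recovered as columns 1..2 and rows 1..2.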
4. Center localization of each organ
The eyes are localized first. Since the color difference between the hair and the eyeball is small and the hair area is large compared with the eyes, the integral projection method detects the eyes poorly; therefore a feature rectangle is established from gray-level characteristics. Since the eyeball is nearly circular, the eyeball feature rectangle shown in Fig. 1 is first established; a frame of specified size is swept over the face region, and a Haar-like feature value is defined as the sum of the pixel values in the black region minus the sum of the pixel values in the white region. The larger the absolute value of the feature, the better the region agrees with the model, thereby realizing the judgment of the eye center.
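The black-minus-white feature value can be sketched as follows; the concrete two-rectangle layout (a dark inner rectangle inside a lighter outer one) is an assumption standing in for the template of Fig. 1:

```python
def haar_like_value(img, outer, inner):
    """Sum over the inner (black) rectangle minus the sum over the surrounding
    (white) ring of the outer rectangle.

    Rectangles are (top, left, bottom, right) with bottom/right exclusive.
    """
    def rect_sum(t, l, b, r):
        return sum(img[i][j] for i in range(t, b) for j in range(l, r))
    inner_sum = rect_sum(*inner)
    outer_sum = rect_sum(*outer)
    return inner_sum - (outer_sum - inner_sum)
```

On a patch with a dark pupil surrounded by bright sclera the value has a large magnitude, which is exactly the "region agrees with the model" criterion of the text.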
Next, the mouth center is localized. Since the mouth color differs greatly from the skin color, is bounded above and below by skin, and is not interfered with by other factors, the length and width of the mouth can be detected directly by the integral projection method; the point corresponding to the midpoints of the length and width is taken as the center point of the mouth.
Note that the nose is bound within the rectangular region enclosed by the eyes and the mouth, and the center points of the eyes and the mouth have already been determined; on this basis a region containing only the nose can be separated out directly, so the center point of the nose need not be defined.
5. Region division based on the center points
The region division method is described in detail taking the eyes as an example (the mouth is treated analogously). Since the distance d between a person's two canthi is approximately equal to the eye length, and the segment joining the two center points has length l, a neighborhood centered on each center point gives a preliminary range along the x-axis; the range along the y-axis is identified from the integral projection curve, and the neighborhood is refined iteratively.
6. Contour feature extraction
After gray-value binarization of the picture, the lower-edge features are not obvious, so mainly the upper-edge features of the eyes are analyzed. Because the eye is nearly elliptical, an elliptic curve is used as the approximation: several tangent points are defined, the values of a, b, and c are searched over the positions shown in Fig. 2-1, and the accumulated pixel values in the region enclosed by the elliptic curve and the rectangular frame are computed; the smaller the accumulation, the better the parameters approximate the contour, i.e. the best-fitting [a, b] values are taken.
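The a, b, c tangent-point search is only sketched in the text; the following assumes a simplified variant that grid-searches the semi-axes (a, b) of an upper half-ellipse against the binarized upper-eyelid edge, scoring by accumulated vertical distance instead of the enclosed-region pixel sum:

```python
import math

def fit_upper_ellipse(top_edge, a_candidates, b_candidates, cx):
    """Grid search over semi-axes (a, b) of an upper half-ellipse centred at
    column cx, scoring by accumulated vertical distance to the eyelid edge.

    top_edge[j] is the row of the topmost 'on' pixel in column j.
    """
    base = max(top_edge)  # lower reference line of the eye box
    best, best_err = None, float("inf")
    for a in a_candidates:
        for b in b_candidates:
            err = 0.0
            for j, y in enumerate(top_edge):
                dx = j - cx
                if abs(dx) <= a:
                    ey = base - b * math.sqrt(max(0.0, 1 - (dx / a) ** 2))
                else:
                    ey = base  # outside the ellipse: fall back to the base line
                err += abs(y - ey)
            if err < best_err:
                best, best_err = (a, b), err
    return best, best_err
```

When the edge itself lies on an ellipse with a = 4, b = 2, the search recovers exactly those parameters with zero accumulated error.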
Compared with the eyes, the contour features of the lips are clearer, and no other parts interfere with their identification; therefore the integral projection method can be used directly, observing the peaks and troughs of the curves and determining several characteristic distances, as shown in Fig. 2-2.
For the nose, the edges are largely lost during gray-value binarization, so the nose feature is characterized only by the ratio of g, the vertical distance between the nostril line and the eye line, to h, the vertical distance between the nostril line and the mouth center point.
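Given the row coordinates of the three horizontal references (y grows downward in image coordinates), the g/h nose feature reduces to a one-line ratio; the function name is illustrative:

```python
def nose_feature(eye_line_y, nostril_line_y, mouth_center_y):
    """Ratio of g (eye line -> nostril line) to h (nostril line -> mouth centre)."""
    g = nostril_line_y - eye_line_y
    h = mouth_center_y - nostril_line_y
    if h == 0:
        raise ValueError("nostril line coincides with mouth centre")
    return g / h
```

For example, with the eye line at row 100, the nostril line at row 140, and the mouth center at row 160, the feature is g/h = 40/20 = 2.0.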
The present invention is suitable for current practical applications of face recognition, such as face-recognition identity verification in residential communities and face-recognition attendance machines in companies. The features extracted after face recognition are compared with a database to judge whether the person belongs to the community, whether an employee is absent, and so on, serving to strengthen residential security management and improve a company's personnel supervision.
The present invention is not limited to the embodiments described above. The above description of the specific embodiments is intended to describe and illustrate the technical solution of the present invention; the embodiments are only schematic, not restrictive. Without departing from the purpose of the invention and the scope of the claimed protection, those skilled in the art may, inspired by the present invention, make many specific variations, all of which fall within the protection scope of the present invention.
Claims (2)
1. A precise face recognition method based on integral projection, characterized by comprising the following steps:
(1) picture preprocessing, covering three techniques: grayscale transformation, filtering/denoising, and wavelet enhancement;
(2) skin detection and discrimination, screening the skin distribution region in the YCbCr color space;
(3) face localization, using projection curves obtained by accumulating pixels along the horizontal and vertical directions to locate the skin-color region;
(4) center localization of each organ, based on the respective features of the eyes, mouth, and nose;
(5) region division based on the center points;
(6) contour feature extraction for each organ after gray-value binarization of the picture.
2. The precise face recognition method based on integral projection according to claim 1, characterized in that, in step (1): for pictures whose contrast is lower than 120:1, the grayscale transformation first expands the gray interval of the original picture to [0, 255], i.e. enhances the contrast of the image so that it is easier to recognize, namely F' = 255 × (F − F_MIN) / (F_MAX − F_MIN), where F is the brightness of the original picture, F_MAX is the brightest value of the original picture, F_MIN is the darkest value, and F' is the brightness of the transformed picture;
the filtering/denoising uses the median filtering method to suppress noise while retaining boundaries, without affecting the face boundary recognition result: the gray value of each pixel is set to the median of the gray values of all pixels in its neighborhood;
the wavelet enhancement applies a wavelet transform to the picture, decomposing it by scale, position, and direction, and finally restores the picture by the inverse wavelet transform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811203735.9A CN109543518A (en) | 2018-10-16 | 2018-10-16 | A kind of human face precise recognition method based on integral projection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811203735.9A CN109543518A (en) | 2018-10-16 | 2018-10-16 | A kind of human face precise recognition method based on integral projection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109543518A true CN109543518A (en) | 2019-03-29 |
Family
ID=65843953
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811203735.9A Pending CN109543518A (en) | 2018-10-16 | 2018-10-16 | A kind of human face precise recognition method based on integral projection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109543518A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110175584A (en) * | 2019-05-30 | 2019-08-27 | 湖南城市学院 | A kind of facial feature extraction reconstructing method |
CN111814516A (en) * | 2019-04-11 | 2020-10-23 | 上海集森电器有限公司 | Driver fatigue detection method |
CN112329646A (en) * | 2020-11-06 | 2021-02-05 | 吉林大学 | Hand gesture motion direction identification method based on mass center coordinates of hand |
CN113408408A (en) * | 2021-06-17 | 2021-09-17 | 杭州嘉轩信息科技有限公司 | Sight tracking method combining skin color and iris characteristics |
CN114067393A (en) * | 2021-11-08 | 2022-02-18 | 上海科江电子信息技术有限公司 | Short video live broadcast host head portrait recognition method, system, equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102184401A (en) * | 2011-04-29 | 2011-09-14 | 苏州两江科技有限公司 | Facial feature extraction method |
Non-Patent Citations (1)
Title |
---|
廖艺捷,刘昊阳,崔文豪 [Liao Yijie, Liu Haoyang, Cui Wenhao]: ""基于积分投影改进的人脸识别模式"" [An improved face recognition scheme based on integral projection], 《信息与电脑》 [Information & Computer] * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190329 |