CN106991360A - Face identification method and face identification system - Google Patents
- Publication number
- CN106991360A (application CN201610039995.1A)
- Authority
- CN
- China
- Prior art keywords
- face
- central point
- pixel
- submodule
- profile
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
Abstract
The present invention provides a face recognition method and a face recognition system for detecting an occluded region in a face. The method includes: performing face detection on an image and marking the detected face; locating the marked face to obtain the centre point of the left eye and the centre point of the right eye; forming a geometric region from the centre point of the left eye, the centre point of the right eye and the two upper vertices of the marking frame; calculating the chroma component of each pixel in the geometric region; and filtering out the difference pixels whose chroma component is not equal to a preset chroma, and counting the interval corresponding to the difference pixels as the occluded region. By counting the pixels in the geometric region formed by the marking frame and the eyes that differ from the skin colour, and treating them as the occluded region, the invention requires no sample collection or parameter training and is not disturbed by user behaviour; it therefore has the advantages of high accuracy, fast computation and flexible recognition.
Description
Technical field
The invention belongs to the field of image processing, and more particularly relates to a face recognition method and a face recognition system.
Background technology
Face recognition is a biometric technology that identifies a person based on facial feature information. Images or video streams containing faces are typically captured with a video camera or a camera, and the faces are automatically detected and tracked in the picture. Face recognition is widely applied in scenarios such as identity verification, liveness detection, lip-reading recognition, smart cameras, face beautification and social platforms.
The accuracy of face recognition is affected by many factors, such as the pose of the photograph (front or side), the lighting (daytime or night) and occluders (hair, glasses, beard). Among these, the factor with the greatest influence on accuracy is long hair hanging over the glabella or the corners of the eyes.
In this regard, the common processing approach for face recognition uses a supervised machine-learning algorithm: first collect a large number of face samples, then train a model, and finally perform face recognition on the input image. Such machine-learning algorithms are time-consuming and labour-intensive when collecting face samples, and the parameters involved in training the model are complex. In particular, because users may wear hats of different styles and may dye their hair different colours, the above parameters need to be adapted, and common machine-learning algorithms are difficult to adjust to such user behaviour in real time.
Summary of the invention
In view of this, an object of the present invention is to provide a face recognition method and a face recognition system that solve the technical problems of prior-art machine-learning algorithms: sample collection is time-consuming and labour-intensive, the parameters involved in training are complex, and the algorithms are difficult to adjust as user behaviour changes, which affects the accuracy and flexibility of the recognition result.
To solve the above technical problems, an embodiment of the invention provides a face recognition method for detecting an occluded region in a face, including:
performing face detection on an image, and marking the detected face;
locating the marked face to obtain the centre point of the left eye and the centre point of the right eye;
forming a geometric region from the centre point of the left eye, the centre point of the right eye and the two upper vertices of the marking frame;
calculating the chroma component of each pixel in the geometric region; and
filtering out the difference pixels whose chroma component is not equal to a preset chroma, and counting the interval corresponding to the difference pixels as the occluded region.
To solve the above technical problems, an embodiment of the invention further provides a face recognition system for detecting an occluded region in a face, including:
a detection module, configured to perform face detection on an image and mark the detected face;
a locating module, configured to locate the marked face to obtain the centre point of the left eye and the centre point of the right eye;
a selecting module, configured to form a geometric region from the centre point of the left eye, the centre point of the right eye and the two upper vertices of the marking frame;
a pixel module, configured to calculate the chroma component of each pixel in the geometric region; and
a region module, configured to filter out the difference pixels whose chroma component is not equal to a preset chroma and count the interval corresponding to the difference pixels as the occluded region.
Compared with the prior art, the face recognition method and face recognition system provided by the invention count the pixels that differ from the skin colour within the geometric region formed by the marking frame and the eyes, and treat them as the occluded region. No sample collection or parameter training is required, and the result is not disturbed by user behaviour, so the invention has the advantages of high accuracy, fast computation and flexible recognition.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the face recognition method provided by embodiment one of the present invention;
Fig. 2 is a schematic flowchart of the face recognition method provided by embodiment two;
Fig. 3 is a module diagram of the face recognition system provided by embodiment three;
Fig. 4 is a module diagram of the face recognition system provided by embodiment four;
Fig. 5 is a schematic diagram of the geometric region used in the face recognition method and face recognition system of embodiments one to four;
Fig. 6 is a module diagram of the terminal device provided by an embodiment of the present invention.
Detailed description of the embodiments
Referring to the drawings, in which identical reference numbers represent identical components, the principles of the invention are illustrated as implemented in a suitable computing environment. The following description is based on exemplified specific embodiments of the invention and should not be construed as limiting other specific embodiments not detailed herein. The principles of the invention are illustrated here in words; this is not a limitation, and those skilled in the art will appreciate that many of the steps and operations described below may also be implemented in hardware. The principles of the invention can operate in many other general-purpose or special-purpose computing or communication environments and configurations.
The face recognition method and face recognition system provided by the invention are mainly applied in terminal devices with an image storage function, such as mobile phones, computers or cameras, to perform face recognition on images selected by the user. The invention can be applied in scenarios such as identity verification, liveness detection, smart cameras, photo-beautification software and social platforms (for example QQ space or Face Wall).
Refer to the following embodiments: embodiments one and two focus on the face recognition method, while embodiments three and four focus on the face recognition system. It should be understood that although each embodiment has a different focus, their design concept is consistent. Parts not described in detail in one embodiment may refer to the detailed description elsewhere in the specification and are not repeated.
Embodiment one
Referring to Fig. 1, which shows a basic flowchart of the face recognition method. The method is typically executed in a terminal device.

The face recognition method is used to detect the occluded region in a face. The occluded region in the present invention mainly refers to hair; occluders such as beards and glasses can also be detected in the same way, which is not repeated here.

The face recognition method includes:
In step S101, face detection is performed on an image, and the detected face is marked.

The face may be marked with a marking frame, which is typically a rectangle confined to the region of the face from the forehead down to the chin and between the two ears. This may be implemented, for example, with the open-source face detection algorithm of OpenCV (Open Source Computer Vision Library), which performs face detection on the image.
In step S102, the marked face is located to obtain the centre point of the left eye and the centre point of the right eye.

For example, the facial contour may be located with multiple contour points, covering the facial outline, eyebrows, eyes, nose and mouth; the centre point of each eye is then obtained from the contour points of that eye.
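As a sketch, the centre point of an eye can be taken as the centroid of its contour points. The helper name, point count and coordinates below are illustrative assumptions, not the patent's exact implementation:

```python
def eye_center(contour_points):
    """Approximate an eye's centre point as the centroid of its contour points.

    contour_points: list of (x, y) tuples describing one eye's contour.
    """
    n = len(contour_points)
    if n == 0:
        raise ValueError("contour is empty")
    cx = sum(x for x, _ in contour_points) / n
    cy = sum(y for _, y in contour_points) / n
    return (cx, cy)

# Hypothetical 4-point left-eye contour; a real locator would supply more points.
left_eye = [(10, 20), (14, 18), (18, 20), (14, 22)]
print(eye_center(left_eye))  # (14.0, 20.0)
```

The same helper yields L(x, y) and R(x, y) when applied to the left-eye and right-eye contours respectively.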
In step S103, a geometric region is formed from the centre point of the left eye, the centre point of the right eye and the two upper vertices of the marking frame.

As shown in Fig. 5, which is a schematic diagram of the geometric region, take a trapezoidal region R as an example. Its four vertices are: the left vertex coordinate fL(x, y), the right vertex coordinate fR(x, y), the coordinate L(x, y) of the centre point of the left eye, and the coordinate R(x, y) of the centre point of the right eye.
In step S104, the chroma component of each pixel in the geometric region is calculated.

Specifically, the calculation of the chroma component includes:
(1) obtaining the number of pixels in the geometric region;
(2) obtaining the red R value, green G value and blue B value of each pixel; and
(3) calculating the chroma component of each pixel, where the chroma component depends on the R, G and B values of the pixel and on preset constants.

Specifically, the chroma component includes a red chroma component and/or a blue chroma component, where:
the red chroma component Cr = aR - bG - cB + d, and
the blue chroma component Cb = aB - bR - cG + d, where a, b, c and d are constants.
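As a sketch, the two formulas translate directly into code. The constants below are the ones given later in embodiment two (a = 0.5, b = 0.4187, c = 0.0813, d = 128), which is only one possible parameterisation:

```python
def red_chroma(r, g, b, a=0.5, b_coef=0.4187, c=0.0813, d=128):
    """Red chroma component Cr = aR - bG - cB + d."""
    return a * r - b_coef * g - c * b + d

def blue_chroma(r, g, b, a=0.5, b_coef=0.4187, c=0.0813, d=128):
    """Blue chroma component Cb = aB - bR - cG + d with the same constants."""
    return a * b - b_coef * r - c * g + d

# For a mid-grey pixel both components sit at the neutral value d = 128
# (up to floating-point rounding), because a = b + c with these constants.
print(red_chroma(128, 128, 128))
print(blue_chroma(128, 128, 128))
```

A strongly red pixel pushes Cr well above 128, while green or blue pixels pull it below, which is what lets the next step separate skin-toned pixels from darker occluders.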
In step S105, the difference pixels whose chroma component is not equal to the preset chroma are filtered out, and the interval corresponding to the difference pixels is counted as the occluded region.

Specifically, the statistics include:
(1) setting the skin colour and generating a preset chroma interval from it, for example for yellow skin;
(2) setting the hair colour and comparing it with the skin colour to obtain a comparison result, common hair colours being, for example, black, white or coffee-coloured hair;
(3) filtering out, according to the comparison result, the difference pixels whose chroma component is not greater than or not less than the preset chroma, and counting the ratio of the difference pixels to the geometric region as the occlusion ratio; the pixels whose chroma component falls outside the preset chroma interval, i.e. the pixels that differ from the skin colour, are distinguished as the occluder target; moreover, instead of judging single pixels, an accumulated ratio is used as an area threshold, which makes the result more accurate and credible;
(4) judging whether the occlusion ratio is greater than a preset occlusion rate;
(5) if it is greater than the preset occlusion rate, taking the region corresponding to the occlusion ratio as the occluded region; and
(6) if it is not greater, ignoring the region corresponding to the occlusion ratio and regarding it as noise in the image.
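Steps (3) to (6) amount to an accumulated-ratio decision rather than a per-pixel one. A minimal sketch with made-up chroma values; the difference predicate and the preset rate of 0.75 are taken from embodiment two:

```python
def occlusion_ratio(pixels, is_difference):
    """Fraction of difference pixels (chroma differing from the preset skin
    chroma) within the geometric region."""
    if not pixels:
        return 0.0
    return sum(1 for p in pixels if is_difference(p)) / len(pixels)

def detect_occlusion(pixels, is_difference, preset_rate=0.75):
    """Accumulate a ratio over the region, then compare it with the preset
    occlusion rate; below the rate, difference pixels are treated as noise."""
    return occlusion_ratio(pixels, is_difference) > preset_rate

# Toy region of chroma values; assume skin chroma >= 3, occluder chroma < 3.
region = [1, 1, 2, 1, 2, 1, 2, 1, 120, 130]   # 8 of 10 pixels differ from skin
print(detect_occlusion(region, lambda cr: cr < 3))  # True (0.8 > 0.75)
```

The same two functions serve any skin/occluder colour pair once the predicate is generated from the comparison result of steps (1) and (2).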
The face recognition method provided by this embodiment counts the pixels that differ from the skin colour within the geometric region formed by the marking frame and the eyes, and treats them as the occluded region. No sample collection or parameter training is required, and the result is not disturbed by user behaviour, so the method has the advantages of high accuracy, fast computation and flexible recognition.
Embodiment two
Referring to Fig. 2, which shows a detailed flowchart of the face recognition method. The method is typically executed in a terminal device.

In Fig. 2, steps that differ from Fig. 1 are numbered with the prefix S2, while steps identical to those in Fig. 1 keep the S1 prefix, to highlight the differences.

The face recognition method is used to detect the occluded region in a face and includes:

In step S201, face detection is performed on an image, and the detected face is marked with a marking frame.

Specifically, forming the marking frame includes:
(1) performing face detection on the image with the open-source face detection algorithm of OpenCV;
(2) marking the detected face with the marking frame; and
(3) obtaining the coordinates of the left vertex and the right vertex of the marking frame.
In step S102, the face within the marking frame is located to obtain the centre point of the left eye and the centre point of the right eye.

Specifically, the locating step includes:
(1) presetting the number of contour points for the eyes;
(2) describing the left-eye contour and the right-eye contour with the preset number of contour points, to determine the left-eye contour and the right-eye contour;
(3) calculating the coordinates of the centre point of the left eye from the left-eye contour; and
(4) calculating the coordinates of the centre point of the right eye from the right-eye contour.
In step S202, the centre point of the left eye, the centre point of the right eye and the two upper vertices of the marking frame are connected to form a geometric region.

As shown in Fig. 5, which is a schematic diagram of the geometric region, take a trapezoidal region R as an example. Its vertices are: the left vertex coordinate fL(x, y), the right vertex coordinate fR(x, y), the coordinate L(x, y) of the centre point of the left eye, and the coordinate R(x, y) of the centre point of the right eye. It can be understood that this geometric region is the one most easily occluded by things such as a fringe, a hat or hair ornaments.
In step S203, the red chroma component of each pixel in the geometric region is calculated.

Specifically, the calculation includes:
(1) obtaining the number of pixels M in the geometric region;
(2) obtaining the R, G and B values of each pixel; and
(3) calculating the red chroma component of each pixel:
red chroma component Cr = aR - bG - cB + d, where a = 0.5, b = 0.4187, c = 0.0813 and d = 128.

In step S204, the preset chroma and the comparison sign are generated according to the skin colour and the common colour of the occluder.

Taking yellow skin as an example, its preset chroma is Cr = 3; then, taking black or coffee-coloured hair as an example, the dark pixels with Cr < 3 are selected with the above formula.
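A sketch of this dark-pixel selection, using the constants from step S203 and the preset chroma Cr = 3 from step S204; the sample pixel values are illustrative:

```python
def is_dark_pixel(r, g, b, preset_cr=3):
    """Classify a pixel as 'dark' (potential hair/occluder) when its red
    chroma falls below the preset chroma Cr = 3 of this embodiment."""
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 128  # embodiment-two constants
    return cr < preset_cr

# A saturated cyan pixel (no red) drops far below the neutral chroma of 128;
# a mid-grey pixel sits at ~128 and is kept as skin-like.
print(is_dark_pixel(0, 255, 255))    # True  (Cr ~ 0.5)
print(is_dark_pixel(128, 128, 128))  # False (Cr ~ 128)
```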
In step S105, the difference pixels whose chroma component is not equal to the preset chroma are filtered out, and the interval corresponding to the difference pixels is counted as the occluded region.

Specifically, the statistics include:
(1) counting the number m of the dark pixels;
(2) calculating the ratio m/M occupied by the dark pixels;
(3) judging whether the ratio m/M is greater than a preset occlusion rate, for example 0.75;
(4) if it is greater than the preset occlusion rate, taking the region corresponding to the occlusion ratio as the occluded region; and
(5) if it is not greater than the preset occlusion rate, ignoring the region and regarding it as noise in the image.
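The steps above can be sketched end to end on a toy region; the RGB triples are made up, while the constants and the 0.75 rate follow this embodiment:

```python
def cr(r, g, b):
    return 0.5 * r - 0.4187 * g - 0.0813 * b + 128  # embodiment-two constants

# Hypothetical forehead region: 8 dark (cyan-ish) pixels, 2 grey skin-like ones.
region = [(0, 255, 255)] * 8 + [(128, 128, 128)] * 2
M = len(region)                            # number of pixels in the region
m = sum(1 for p in region if cr(*p) < 3)   # (1) dark-pixel count, Cr < 3
ratio = m / M                              # (2) occupied ratio m/M
occluded = ratio > 0.75                    # (3)-(5) occlusion vs noise decision
print(m, M, occluded)  # 8 10 True
```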
The face recognition method provided by this embodiment counts the pixels that differ from the skin colour within the geometric region formed by the marking frame and the eyes, and treats them as the occluded region. No sample collection or parameter training is required, and the result is not disturbed by user behaviour, so the method has the advantages of high accuracy, fast computation and flexible recognition.
Embodiment three
Referring to Fig. 3, which shows a basic module diagram of the face recognition system. The system typically runs in a terminal device.

The face recognition system 300 is used to detect the occluded region in a face. The occluded region in the present invention mainly refers to hair; it should be understood that, based on the face recognition system of the invention, occluders such as beards and glasses can also be detected. Since the estimation method is similar, it is not repeated here.

The face recognition system 300 includes: a detection module 31, a locating module 32, a selecting module 33, a pixel module 34 and a region module 35.

Specifically, the detection module 31 is configured to perform face detection on an image and mark the detected face.

The face may be marked with a marking frame, which is typically a rectangle confined to the region of the face from the forehead down to the chin and between the two ears.
The locating module 32 is connected to the detection module 31 and is configured to locate the marked face to obtain the centre point of the left eye and the centre point of the right eye.

Specifically, the locating module 32 locates the facial contour with multiple contour points, in particular the eyes, and then obtains the centre point of each eye from the contour points of the eyes.

The selecting module 33 is connected to the locating module 32 and is configured to form a geometric region from the centre point of the left eye, the centre point of the right eye and the two upper vertices of the marking frame.

As shown in Fig. 5, which is a schematic diagram of the geometric region, take a trapezoidal region R as an example. Its vertices are: the left vertex coordinate fL(x, y) of the marking frame, the right vertex coordinate fR(x, y), the coordinate L(x, y) of the centre point of the left eye, and the coordinate R(x, y) of the centre point of the right eye.
The pixel module 34 is connected to the selecting module 33 and is configured to calculate the chroma component of each pixel in the geometric region.

Specifically, the pixel module 34 includes:
a quantity submodule 341, configured to obtain the number of pixels in the geometric region;
a monochrome submodule 342, connected to the quantity submodule 341 and configured to obtain the R, G and B values of each pixel; and
a component submodule 343, connected to the monochrome submodule 342 and configured to calculate the chroma component of each pixel, where the chroma component depends on the R, G and B values of the pixel and on preset constants.

Specifically, the chroma component includes a red chroma component and/or a blue chroma component, where:
the red chroma component Cr = aR - bG - cB + d, and
the blue chroma component Cb = aB - bR - cG + d, where a, b, c and d are constants.
The region module 35 is connected to the pixel module 34 and is configured to filter out the difference pixels whose chroma component is not equal to the preset chroma, and count the interval corresponding to the difference pixels as the occluded region.

Specifically, the region module 35 includes:
a skin-colour submodule 351, configured to set the skin colour and generate the preset chroma from the set skin colour;
a hair-colour submodule 352, configured to set the hair colour, compare it with the skin colour and obtain a comparison result;
a ratio submodule 353, configured to filter out, according to the comparison result, the difference pixels whose chroma component is not greater than or not less than the preset chroma, and count the ratio of the difference pixels to the geometric region as the occlusion ratio;
a judging submodule 354, configured to judge whether the occlusion ratio is greater than a preset occlusion rate; and
a result submodule 355, configured to take the region corresponding to the occlusion ratio as the occluded region when it is greater than the preset occlusion rate.

The face recognition system provided by this embodiment counts the pixels that differ from the skin colour within the geometric region formed by the marking frame and the eyes, and treats them as the occluded region. No sample collection or parameter training is required, and the result is not disturbed by user behaviour, so the system has the advantages of high accuracy, fast computation and flexible recognition.
Embodiment four
Referring to Fig. 4, which shows a detailed module diagram of the face recognition system. The system typically runs in a terminal device and is used to detect the occluded region in a face.

The face recognition system 400 includes: a detection module 41, a locating module 42, a selecting module 43, a pixel module 44 and a region module 45.

Specifically, the detection module 41 is configured to perform face detection on an image and mark the detected face with a marking frame.

The detection module 41 includes:
a face submodule 411, configured to perform face detection on the image with an open-source face detection algorithm;
a marking submodule 412, connected to the face submodule 411 and configured to mark the detected face with the marking frame; and
a first coordinate submodule 413, connected to the marking submodule 412 and configured to obtain the coordinates of the left vertex and the right vertex of the marking frame.
The locating module 42 is connected to the detection module 41 and is configured to locate the face within the marking frame to obtain the centre point of the left eye and the centre point of the right eye.

The locating module 42 includes:
a points submodule 421, configured to preset the number of contour points for the eyes;
a contour submodule 422, connected to the points submodule 421 and configured to describe the left-eye contour and the right-eye contour with the preset number of contour points, to determine the left-eye contour and the right-eye contour; and
a second coordinate submodule 423, connected to the contour submodule 422 and configured to calculate the coordinates of the centre point of the left eye from the left-eye contour and the coordinates of the centre point of the right eye from the right-eye contour.

The selecting module 43 is connected to the locating module 42 and is configured to form a geometric region from the centre points of the eyes and the two upper vertices of the marking frame.

As shown in Fig. 5, which is a schematic diagram of the geometric region, take a trapezoidal region R as an example. Its vertices are: the left vertex coordinate fL(x, y) of the marking frame, the right vertex coordinate fR(x, y), the coordinate L(x, y) of the centre point of the left eye, and the coordinate R(x, y) of the centre point of the right eye. It can be understood that this geometric region is the one most easily occluded by things such as a fringe, a hat or hair ornaments.

Furthermore, a triangular region may be formed from the nose and the two lower vertices of the marking frame, to detect a beard. The principle is the same as above and is not repeated here.
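A sketch of such a triangular region test; the sign-based point-in-triangle check and all coordinates are illustrative assumptions, not the patent's implementation:

```python
def in_triangle(p, a, b, c):
    """Return True if point p lies inside (or on) the triangle a-b-c, e.g. the
    beard region formed by the nose point and the two lower frame vertices."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # inside iff all cross products agree in sign

nose = (50, 60)                  # hypothetical nose point (image y grows downward)
bl, br = (0, 120), (100, 120)    # lower-left and lower-right frame vertices
print(in_triangle((50, 100), nose, bl, br))  # True: inside the chin region
print(in_triangle((50, 10), nose, bl, br))   # False: above the nose
```

The dark-pixel statistics of the region module would then be applied to this triangle exactly as to the forehead trapezoid.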
The pixel module 44 is connected to the selecting module 43 and is configured to calculate the red chroma component of each pixel in the geometric region.

Specifically, the pixel module 44 includes:
a quantity submodule 441, configured to obtain the number of pixels M in the geometric region;
a monochrome submodule 442, connected to the quantity submodule 441 and configured to obtain the R, G and B values of each pixel; and
a red-component submodule 443, connected to the monochrome submodule 442 and configured to calculate the red chroma component of each pixel:
red chroma component Cr = aR - bG - cB + d, where a = 0.5, b = 0.4187, c = 0.0813 and d = 128.
The region module 45 is connected to the pixel module 44 and is configured to filter out the difference pixels whose chroma component is not equal to the preset chroma, and count the interval corresponding to the difference pixels as the occluded region.

Specifically, the region module 45 includes:
a skin-colour submodule 451, configured to set the skin colour and generate the preset chroma from it; taking the yellow skin of a Chinese person as an example, the preset chroma interval Cr = 3 can be obtained;
a hair-colour submodule 452, configured to set the hair colour, compare it with the skin colour and obtain a comparison result; specifically, taking black or coffee-coloured hair as an example, it is compared with the yellow skin colour and the dark pixels with Cr < 3 are selected;
a ratio submodule 453, configured to filter out, according to the comparison result, the difference pixels whose chroma component is not greater than or not less than the preset chroma, and count the ratio of the difference pixels to the geometric region as the occlusion ratio; specifically, this includes counting the number m of the dark pixels and calculating the ratio m/M occupied by the dark pixels;
a judging submodule 454, configured to judge whether the occlusion ratio is greater than a preset occlusion rate, for example 0.75; and
a result submodule 455, configured to take the region corresponding to the occlusion ratio as the occluded region when it is greater than the preset occlusion rate, and otherwise to ignore the region corresponding to the occlusion ratio and regard it as noise in the image.
The face recognition system provided by this embodiment counts the pixels that differ from the skin colour within the geometric region formed by the marking frame and the eyes, and treats them as the occluded region. No sample collection or parameter training is required, and the result is not disturbed by user behaviour, so the system has the advantages of high accuracy, fast computation and flexible recognition.
Embodiment five
Correspondingly, an embodiment of the invention further provides a terminal device for executing the above face recognition method or applying the above face recognition system. As shown in Fig. 6, the terminal device may include a radio frequency (RF) circuit 601, a memory 602 including one or more computer-readable storage media, an input unit 603, a display unit 604, a Wireless Fidelity (WiFi) module 607, a processor 608 including one or more processing cores, and a power supply 609. Those skilled in the art will appreciate that the terminal structure shown in Fig. 6 does not limit the terminal: it may include more or fewer parts than illustrated, combine some parts, or arrange the parts differently. Wherein:
The RF circuit 601 can be used to receive and send signals during messaging or a call; in particular, after receiving downlink information from a base station, it hands the information to the one or more processors 608 for processing, and it sends the relevant uplink data to the base station.
The memory 602 can be used to store software programs and modules; the processor 608 executes various function applications and data processing by running the software programs and modules stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs needed for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the terminal (such as audio data or a phone book). In addition, the memory 602 may include high-speed random access memory, and may also include non-volatile memory, for example at least one magnetic disk memory, a flash memory device, or other solid-state memory parts. Correspondingly, the memory 602 may also include a memory controller to provide the processor 608 and the input unit 603 with access to the memory 602.
The input unit 603 can be used to receive input numbers or character information and to produce keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control. Specifically, in one embodiment, the input unit 603 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also referred to as a touch display screen or a trackpad, collects touch operations of the user on or near it (such as an operation by the user with a finger, a stylus or any other suitable object or accessory on or near the touch-sensitive surface) and drives the corresponding connecting device according to a preset formula. Optionally, the touch-sensitive surface may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 608, and receives and executes commands sent by the processor 608. Furthermore, the touch-sensitive surface may be realised in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch-sensitive surface, the input unit 603 may also include other input devices, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control buttons or a switch key), a trackball, a mouse and a joystick.
The display unit 604 may be configured to display information input by the user or information provided to the user, as well as various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, videos, and any combination thereof. The display unit 604 may include a display panel; optionally, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface may cover the display panel; after detecting a touch operation on or near it, the touch-sensitive surface transmits the operation to the processor 608 to determine the type of the touch event, and the processor 608 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in FIG. 6 the touch-sensitive surface and the display panel are implemented as two independent components to realize the input and output functions, in some embodiments the touch-sensitive surface and the display panel may be integrated to realize the input and output functions.
WiFi belongs to short-range wireless transmission technology. Through the WiFi module 607, the terminal device can help the user send and receive e-mails, browse web pages, access streaming media, and the like; it provides the user with wireless broadband Internet access. Although FIG. 6 shows the WiFi module 607, it can be understood that it is not an essential component of the terminal and may be omitted as needed within the scope that does not change the essence of the invention.
The processor 608 is the control center of the terminal device. It connects the various parts of the entire mobile phone by using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 602 and invoking the data stored in the memory 602, thereby monitoring the mobile phone as a whole. Optionally, the processor 608 may include one or more processing cores; preferably, the processor 608 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may alternatively not be integrated into the processor 608.
The terminal also includes a power supply 609 (such as a battery) for supplying power to the various components. Preferably, the power supply may be logically connected to the processor 608 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 609 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the terminal may also include a camera, a Bluetooth module, and the like, which are not described here again. Specifically, in this embodiment, the processor 608 in the terminal loads, according to the following instructions, the executable files corresponding to the processes of one or more application programs into the memory 602, and runs the application programs stored in the memory 602, thereby implementing the above face recognition functions.
The face recognition method and the face recognition system provided by the embodiments of the present invention belong to the same concept; for the specific implementation process, reference may be made to the full specification, and details are not described here again.
In summary, although the present invention has been disclosed above with preferred embodiments, the above preferred embodiments are not intended to limit the present invention. A person of ordinary skill in the art can make various changes and modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention is defined by the scope of the claims.
Claims (10)
1. A face recognition method for detecting an occlusion area in a face, characterized in that it comprises:
performing face detection on an image, and marking the detected face;
positioning the marked face, to obtain a center point of a left eye and a center point of a right eye respectively;
forming a geometric area from the center point of the left eye, the center point of the right eye, and two vertices on the upper side of a marking frame;
calculating a chroma component of each pixel in the geometric area; and
filtering out difference pixels whose chroma component is not equal to a preset chroma, and counting the interval corresponding to the difference pixels as the occlusion area.
2. The face recognition method according to claim 1, characterized in that calculating the chroma component of each pixel in the geometric area specifically comprises:
obtaining the number of pixels in the geometric area;
obtaining a red R value, a green G value, and a blue B value of each pixel; and
calculating the chroma component of each pixel, wherein the chroma component depends on the R value, the G value, the B value, and a preset constant value of each pixel.
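Claim 2 leaves the exact chroma formula open; one common choice consistent with "R, G, B and a preset constant" is the Cr component of the YCbCr color space, where the fixed offset 128 plays the role of the preset constant. This is an illustrative assumption, not the patent's stated formula:

```python
def chroma_component(r, g, b, offset=128):
    # Cr of YCbCr (ITU-R BT.601 approximation); `offset` is the
    # "preset constant value" of the claim in this illustrative reading.
    return 0.5 * r - 0.419 * g - 0.081 * b + offset

# A neutral grey pixel sits exactly at the offset; skin tones lie above it.
print(round(chroma_component(128, 128, 128)))  # 128
```

Because skin pixels cluster in a narrow Cr band, a single preset chroma (or a small interval around it) suffices to separate skin from hair, glasses, or other occluders.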
3. The face recognition method according to claim 1 or 2, characterized in that filtering out the difference pixels whose chroma component is not equal to the preset chroma, and counting the interval corresponding to the difference pixels as the occlusion area, specifically comprises:
setting a skin color, and generating the preset chroma according to the set skin color;
setting a hair color, and comparing the hair color with the skin color to obtain a comparison result;
filtering out, according to the comparison result, the difference pixels whose chroma component is not greater than or not less than the preset chroma, and counting the ratio of the difference pixels to the geometric area as an occlusion ratio;
judging whether the occlusion ratio is greater than a preset occlusion rate; and
if it is greater than the preset occlusion rate, taking the area corresponding to the occlusion ratio as the occlusion area.
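The ratio-and-threshold step of claim 3 can be sketched as follows; the tolerance band, the sample chroma values, and the 0.4 preset occlusion rate are all illustrative assumptions:

```python
def occlusion_ratio(chromas, preset_chroma, tol):
    """Share of pixels whose chroma deviates from the preset skin chroma."""
    diff = [c for c in chromas if abs(c - preset_chroma) > tol]
    return len(diff) / len(chromas)

# 7 of 10 sampled forehead pixels deviate strongly (e.g. dark hair over skin).
chromas = [152, 150, 149, 110, 108, 105, 112, 109, 111, 107]
ratio = occlusion_ratio(chromas, preset_chroma=150, tol=10)
print(ratio > 0.4)  # True: the region is flagged as an occlusion area
```

The comparison between the set hair color and skin color determines on which side of the preset chroma the difference pixels are sought (the "not greater than or not less than" branch of the claim).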
4. The face recognition method according to claim 1, characterized in that performing face detection on an image and marking the detected face specifically comprises:
performing face detection on the image by means of an open-source face detection algorithm;
marking the detected face with a marking frame; and
obtaining coordinate values of a left vertex and a right vertex of the marking frame.
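Claim 4 records the left and right (upper) vertex coordinates of the marking frame. Assuming the common `(x, y, width, height)` box convention that open-source detectors such as OpenCV's Haar cascades return (an assumption, since the claim names no concrete detector), the two vertices follow directly:

```python
def upper_vertices(box):
    """Top-left and top-right vertices of a marking frame given as (x, y, w, h)."""
    x, y, w, h = box
    return (x, y), (x + w, y)

left_top, right_top = upper_vertices((40, 30, 100, 120))
print(left_top, right_top)  # (40, 30) (140, 30)
```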
5. The face recognition method according to claim 1 or 4, characterized in that positioning the marked face, to obtain the center point of the left eye and the center point of the right eye respectively, specifically comprises:
presetting a number of contour points of the eyes;
describing a left-eye contour and a right-eye contour respectively by the preset number of contour points, so as to determine the left-eye contour and the right-eye contour;
calculating the coordinate value of the center point of the left eye according to the left-eye contour; and
calculating the coordinate value of the center point of the right eye according to the right-eye contour.
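Claim 5 derives each eye's center point from its contour. Taking the centroid of the preset number of contour points is one natural reading; the patent does not fix the formula, so this is an illustrative assumption:

```python
def eye_center(contour):
    """Center point of an eye contour, taken as the centroid of its points."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Four preset contour points around a left eye (hypothetical coordinates).
left_contour = [(10, 20), (14, 18), (18, 20), (14, 22)]
print(eye_center(left_contour))  # (14.0, 20.0)
```

Applied once to the left-eye contour and once to the right-eye contour, this yields the two center points that anchor the geometric area of claim 1.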
6. A face recognition system for detecting an occlusion area in a face, characterized in that it comprises:
a detection module, configured to perform face detection on an image and mark the detected face;
a positioning module, configured to position the marked face, to obtain a center point of a left eye and a center point of a right eye respectively;
a selection module, configured to form a geometric area from the center point of the left eye, the center point of the right eye, and two vertices on the upper side of a marking frame;
a pixel module, configured to calculate a chroma component of each pixel in the geometric area; and
a region module, configured to filter out difference pixels whose chroma component is not equal to a preset chroma, and count the interval corresponding to the difference pixels as the occlusion area.
7. The face recognition system according to claim 6, characterized in that the pixel module comprises:
a quantity submodule, configured to obtain the number of pixels in the geometric area;
a monochrome submodule, configured to obtain the R value, G value, and B value of each pixel; and
a component submodule, configured to calculate the chroma component of each pixel, wherein the chroma component depends on the R value, the G value, the B value, and a preset constant value of each pixel.
8. The face recognition system according to claim 6 or 7, characterized in that the region module comprises:
a skin-color submodule, configured to set a skin color, and generate a preset chroma interval according to the set skin color;
a hair-color submodule, configured to set a hair color, and compare the hair color with the skin color to obtain a comparison result;
a ratio submodule, configured to filter out, according to the comparison result, the difference pixels whose chroma component is not greater than or not less than the preset chroma, and count the ratio of the difference pixels to the geometric area as an occlusion ratio;
a judging submodule, configured to judge whether the occlusion ratio is greater than a preset occlusion rate; and
a result submodule, configured to take the area corresponding to the occlusion ratio as the occlusion area when the occlusion ratio is greater than the preset occlusion rate.
9. The face recognition system according to claim 6, characterized in that the detection module comprises:
a face submodule, configured to perform face detection on the image by means of an open-source face detection algorithm;
a marking submodule, configured to mark the detected face with a marking frame; and
a first coordinate submodule, configured to obtain coordinate values of a left vertex and a right vertex of the marking frame.
10. The face recognition system according to claim 6 or 9, characterized in that the positioning module comprises:
a points submodule, configured to preset a number of contour points of the eyes;
a contour submodule, configured to describe a left-eye contour and a right-eye contour respectively by the preset number of contour points, so as to determine the left-eye contour and the right-eye contour; and
a second coordinate submodule, configured to calculate the coordinate value of the center point of the left eye according to the left-eye contour, and to calculate the coordinate value of the center point of the right eye according to the right-eye contour.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610039995.1A CN106991360B (en) | 2016-01-20 | 2016-01-20 | Face identification method and face identification system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610039995.1A CN106991360B (en) | 2016-01-20 | 2016-01-20 | Face identification method and face identification system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106991360A true CN106991360A (en) | 2017-07-28 |
CN106991360B CN106991360B (en) | 2019-05-07 |
Family
ID=59413688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610039995.1A Active CN106991360B (en) | 2016-01-20 | 2016-01-20 | Face identification method and face identification system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106991360B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1763765A (en) * | 2004-10-21 | 2006-04-26 | 佳能株式会社 | Method, device and storage medium for detecting face complexion area in image |
JP4076777B2 (en) * | 2002-03-06 | 2008-04-16 | 三菱電機株式会社 | Face area extraction device |
CN102147852A (en) * | 2010-02-04 | 2011-08-10 | 三星电子株式会社 | Method for detecting hair area |
CN103577838A (en) * | 2013-11-25 | 2014-02-12 | 苏州大学 | Face recognition method and device |
CN103996203A (en) * | 2014-06-13 | 2014-08-20 | 北京锐安科技有限公司 | Method and device for detecting whether face in image is sheltered |
CN105095829A (en) * | 2014-04-29 | 2015-11-25 | 华为技术有限公司 | Face recognition method and system |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113691857A (en) * | 2021-08-27 | 2021-11-23 | 贵州东冠科技有限公司 | Lip language shielding system and method based on augmented reality |
CN114708543A (en) * | 2022-06-06 | 2022-07-05 | 成都信息工程大学 | Examination student positioning method in examination room monitoring video image |
CN114708543B (en) * | 2022-06-06 | 2022-08-30 | 成都信息工程大学 | Examination student positioning method in examination room monitoring video image |
Also Published As
Publication number | Publication date |
---|---|
CN106991360B (en) | 2019-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109191410A (en) | A kind of facial image fusion method, device and storage medium | |
US10783353B2 (en) | Method for detecting skin region and apparatus for detecting skin region | |
CN107231529A (en) | Image processing method, mobile terminal and storage medium | |
CN108875451B (en) | Method, device, storage medium and program product for positioning image | |
CN107256555A (en) | A kind of image processing method, device and storage medium | |
US20150049924A1 (en) | Method, terminal device and storage medium for processing image | |
CN106296617B (en) | The processing method and processing device of facial image | |
CN110163806A (en) | A kind of image processing method, device and storage medium | |
CN108307125A (en) | A kind of image-pickup method, device and storage medium | |
CN103400108A (en) | Face identification method and device as well as mobile terminal | |
CN107451979A (en) | A kind of image processing method, device and storage medium | |
CN107613202A (en) | A kind of image pickup method and mobile terminal | |
CN104463105B (en) | Guideboard recognition methods and device | |
CN108259746A (en) | A kind of image color detection method and mobile terminal | |
CN106469443A (en) | Machine vision feature tracking systems | |
CN108701365A (en) | Luminous point recognition methods, device and system | |
CN103325107A (en) | Method, device and terminal device for processing image | |
CN107111882A (en) | Striped set lookup method, device and system | |
CN105892612A (en) | Method and apparatus for powering off terminal | |
CN108038431A (en) | Image processing method, device, computer equipment and computer-readable recording medium | |
CN107895352A (en) | A kind of image processing method and mobile terminal | |
CN108416337A (en) | User is reminded to clean the method and device of camera lens | |
CN109238460A (en) | A kind of method and terminal device obtaining ambient light intensity | |
CN106713696A (en) | Image processing method and device | |
CN103616954A (en) | Virtual keyboard system, implementation method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
TR01 | Transfer of patent right | |
Effective date of registration: 2021-09-24
Address after: 35th floor, Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 518057
Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.
Address before: Room 403, East Block 2, SEG Science and Technology Park, Zhenxing Road, Futian District, Shenzhen, Guangdong, 518000
Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.