CN103366162A - Method and device for determining states of eyes - Google Patents


Info

Publication number
CN103366162A
CN103366162A
Authority
CN
China
Prior art keywords
image
eye
target object
head image
sclera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013102939039A
Other languages
Chinese (zh)
Inventor
方奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN2013102939039A priority Critical patent/CN103366162A/en
Publication of CN103366162A publication Critical patent/CN103366162A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a method and a device for determining an eye state, and relate to the field of image processing; they are used to achieve accurate recognition of the eye state of a target object. The method for determining an eye state comprises: (1) obtaining, by the eye-state determining device, a first eye image of the target object according to a first head image of the target object, and determining a template image according to the first eye image; (2) determining a second eye image in a second head image of the target object, and obtaining the Euclidean distance between the second eye image and the template image; and (3) when the Euclidean distance is determined to be greater than or equal to a first preset threshold, deleting the template image, so that a template image is determined again after the next frame of head image is obtained, and determining the eye state of the target object according to the re-determined template image. The method and device are used for determining eye states.

Description

Method and apparatus for determining an eye state
Technical field
The present invention relates to the field of image processing, and in particular to a method and an apparatus for determining an eye state.
Background art
Digital camera equipment is now in widespread use. When a target object is photographed with a digital camera, the resulting photo may show the target object with closed or blinking eyes because of poor timing by the photographer or the influence of the environment. To avoid this, blink detection technology has been introduced: during shooting, the eye image of the target object is analysed to determine the eye state of the target object, and when the target object is in a closed-eye state, the shot is delayed or the photographer is reminded to shoot again.
Template matching is a common blink detection technique. The method locates the human eye, determines an open-eye template image from a head image of the target object, matches the current eye image against this template, and determines the eye state of the target object from the matching result. However, when the head image of the target object changes greatly (for example, when the target object bows or turns the head), matching the current eye image against the open-eye template produces a large error, which leads to misidentification of the eye state of the target object.
Summary of the invention
Embodiments of the present invention provide a method and an apparatus for determining an eye state, so as to achieve accurate identification of the eye state of a target object.
To achieve the above objective, the embodiments of the present invention adopt the following technical solutions:
According to a first aspect, a method for determining an eye state is provided, comprising:
obtaining a first eye image of a target object according to a first head image of the target object, and determining a template image according to the first eye image;
determining a second eye image in a second head image of the target object, and obtaining the Euclidean distance between the second eye image and the template image; and
when the Euclidean distance is determined to be greater than or equal to a first preset threshold, deleting the template image, so as to re-determine a template image after the next frame of head image is obtained, and determining the eye state of the target object according to the re-determined template image.
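The three method steps amount to a per-frame template-maintenance loop. The following is a minimal sketch in plain Python under stated assumptions: images are small grayscale arrays given as nested lists, the function names and the threshold value are hypothetical, and the distance follows the patent's sum-of-squared-differences formula (a conventional Euclidean distance would add a square root; for thresholding the two are equivalent up to squaring the threshold):

```python
def euclidean_distance(a, b):
    """Distance between two equal-sized grayscale images, following the
    formula in the text: T = sum over pixels of (A(i,j) - B(i,j))^2.
    (A conventional Euclidean distance would take a square root; for
    thresholding, the two forms are equivalent up to squaring the threshold.)"""
    return sum((pa - pb) ** 2
               for row_a, row_b in zip(a, b)
               for pa, pb in zip(row_a, row_b))

def process_frame(eye_image, state):
    """One iteration of the template-maintenance loop (hypothetical names).

    `state` holds the current template (or None) and the first preset
    threshold. Returns "redetect" when the template was just (re)set or
    deleted, else "match"."""
    if state["template"] is None:
        state["template"] = eye_image          # (re)determine the template
        return "redetect"
    if euclidean_distance(eye_image, state["template"]) >= state["threshold"]:
        state["template"] = None               # delete: re-determine next frame
        return "redetect"
    return "match"
```

A caller would feed the eye crop of each captured frame to `process_frame` and re-run template selection whenever it returns "redetect".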
In a first possible implementation, the obtaining a first eye image of the target object according to the first head image of the target object comprises:
obtaining a generalized horizontal projection function and a generalized vertical projection function of the first head image; and
determining the first eye image according to the generalized horizontal projection function and the generalized vertical projection function.
With reference to the first aspect or the first possible implementation, in a second possible implementation, the determining a template image according to the first eye image comprises:
when the proportion of a first sclera region presenting the sclera in the first eye image is greater than or equal to a second preset threshold, determining that the first eye image is the template image.
With reference to the first aspect or any one of the foregoing possible implementations, in a third possible implementation, the obtaining the Euclidean distance between the second eye image and the template image comprises:
obtaining the Euclidean distance between the second eye image and the template image by the formula

T = \sum_{i=0}^{m} \sum_{j=0}^{n} \left[ A(i,j) - B(i,j) \right]^2

where T is the Euclidean distance between the second eye image and the template image, A(i,j) is the pixel value of the template image at the pixel coordinates (i,j) in the first head image, B(i,j) is the pixel value of the second eye image at the pixel coordinates (i,j) in the second head image, m is the bound of the abscissa i, and n is the bound of the ordinate j.
With reference to the second or the third possible implementation, in a fourth possible implementation, after the second eye image is determined in the second head image of the target object according to the template image, the method further comprises:
determining the area of a second sclera region presenting the sclera in the second eye image; and
determining the eye state of the target object according to the area of the second sclera region and the area of the first sclera region.
With reference to the fourth possible implementation, in a fifth possible implementation, the determining the eye state of the target object according to the area of the second sclera region and the area of the first sclera region comprises:
when the ratio of the area of the second sclera region to the area of the first sclera region is greater than or equal to a third preset threshold, determining that the eye state of the target object is an open-eye state; and
when the ratio of the area of the second sclera region to the area of the first sclera region is less than the third preset threshold, determining that the eye state of the target object is a closed-eye state.
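The fifth implementation reduces to a single area-ratio comparison. A minimal sketch, assuming the sclera regions are available as binary masks (nested 0/1 lists) and taking 0.5 as a purely illustrative value for the third preset threshold:

```python
def sclera_area(mask):
    """Area of a binary sclera mask = number of pixels labelled sclera."""
    return sum(sum(row) for row in mask)

def eye_state(second_sclera_mask, first_sclera_area, third_threshold=0.5):
    """Open/closed decision of the fifth implementation: compare the ratio
    of the second (current) sclera area to the first (template) sclera
    area against the third preset threshold (0.5 is an assumed value)."""
    ratio = sclera_area(second_sclera_mask) / first_sclera_area
    return "open" if ratio >= third_threshold else "closed"
```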
According to a second aspect, an eye-state determining device is provided, comprising:
an acquiring unit, configured to obtain a first eye image of a target object according to a first head image of the target object, and to determine a template image according to the first eye image; and
a processing unit, configured to determine a second eye image in a second head image of the target object, obtain the Euclidean distance between the second eye image and the template image obtained by the acquiring unit, and, when the Euclidean distance is determined to be greater than or equal to a first preset threshold, delete the template image, so as to re-determine a template image after the next frame of head image is obtained, and determine the eye state of the target object according to the re-determined template image.
In a first possible implementation of the second aspect, the acquiring unit is specifically configured to obtain a generalized horizontal projection function and a generalized vertical projection function of the head image, and to determine the first eye image according to the generalized horizontal projection function and the generalized vertical projection function.
With reference to the second aspect or the first possible implementation, in a second possible implementation, the acquiring unit is specifically configured to determine that the first eye image is the template image when the proportion of a first sclera region presenting the sclera in the first eye image is greater than or equal to a second preset threshold.
With reference to the second aspect or any one of the foregoing possible implementations, in a third possible implementation, the processing unit is specifically configured to obtain the Euclidean distance between the second eye image and the template image by the formula

T = \sum_{i=0}^{m} \sum_{j=0}^{n} \left[ A(i,j) - B(i,j) \right]^2

where T is the Euclidean distance between the second eye image and the template image, A(i,j) is the pixel value of the template image at the pixel coordinates (i,j) in the first head image, B(i,j) is the pixel value of the second eye image at the pixel coordinates (i,j) in the second head image, m is the bound of the abscissa i, and n is the bound of the ordinate j.
With reference to the second or the third possible implementation, in a fourth possible implementation, the processing unit is further configured to, after the second eye image is determined in the second head image of the target object according to the template image, determine the area of a second sclera region presenting the sclera in the second eye image, and determine the eye state of the target object according to the area of the second sclera region and the area of the first sclera region.
With reference to the fourth possible implementation, in a fifth possible implementation, the processing unit is specifically configured to determine that the eye state of the target object is an open-eye state when the ratio of the area of the second sclera region to the area of the first sclera region is greater than or equal to a third preset threshold, and to determine that the eye state of the target object is a closed-eye state when the ratio is less than the third preset threshold.
By adopting the above solution, the eye-state determining device obtains a first eye image of a target object according to a first head image of the target object, determines a template image according to the first eye image, determines a second eye image in a second head image of the target object according to the template image, and obtains the Euclidean distance between the second eye image and the template image; when the Euclidean distance is determined to be greater than or equal to a first preset threshold, the device deletes the template image, re-determines a template image after obtaining the next frame of head image, and determines the eye state of the target object according to the re-determined template image. In this way, when the device determines that the Euclidean distance is greater than or equal to the first preset threshold, it determines that the current eye image differs greatly from the template image. By deleting the template image so that a template image is obtained anew from the next frame of head image, the device avoids the misidentification of the eye state of the current target object that such a large difference would otherwise cause, and thereby achieves accurate identification of the eye state of the target object.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and persons of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 is a schematic diagram of a method for determining an eye state according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method for determining an eye state according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an eye-state determining device according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another eye-state determining device according to an embodiment of the present invention.
Detailed description of the embodiments
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
It should be particularly noted that a head image mentioned in the embodiments of the present invention refers to an image that includes a person's head region or a part of the head region; the image is not limited to containing only the head region or a part of it (for example, the head image may also include the person's body region). Similarly, an eye image mentioned in the embodiments of the present invention refers to an image that includes a person's eye region or a part of the eye region, and is not limited to containing only the eye region or a part of it.
An embodiment of the present invention provides a method for determining an eye state. As shown in Fig. 1, the method is executed by an eye-state determining device and comprises:
S101. The eye-state determining device obtains a first eye image of a target object according to a first head image of the target object, and determines a template image according to the first eye image.
Specifically, the eye-state determining device obtains the first head image of the target object. The first head image may be a face image obtained by the device through face detection on the target object. The device preprocesses the first head image; the preprocessing may include graying, denoising and similar operations, so that the first head image is converted from a color image into a grayscale head image.
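The graying step can be illustrated with a standard luma conversion. The BT.601 weights below are a common choice assumed for the example; the patent does not specify the conversion:

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to a
    grayscale image using the BT.601 luma weights -- an assumed choice
    for the graying step; the patent does not fix the conversion."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]
```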
Optionally, the eye-state determining device obtains a generalized horizontal projection function and a generalized vertical projection function of the first head image, and determines the first eye image according to them. Specifically, the value of the generalized horizontal projection function may be obtained by the formula hGPF(x) = (1 - a)·hIPF(x) + a·hVPF(x), and the upper and lower boundaries of the first eye image are obtained from the values of this function, where hGPF(x) is the value of the generalized horizontal projection function of the first head image, a is a constant, hIPF(x) is the mean gray value of the pixels of the first head image in the horizontal direction, and hVPF(x) is the variance projection function value of the pixels' gray values. In addition, hIPF(x) may be determined by the formula

hIPF(x) = \frac{1}{y_2 - y_1} \int_{y_1}^{y_2} I(x, y) \, dy

and hVPF(x) may be determined by the formula

hVPF(x) = \frac{1}{y_2 - y_1} \sum_{y_i = y_1}^{y_2} \left[ I(x, y_i) - hIPF(x) \right]^2

where I(x, y) is the pixel value at the coordinates (x, y) in the first head image, y_2 is the upper bound of the ordinate y, and y_1 is the lower bound of the ordinate y;
the value of the generalized vertical projection function may be obtained by the formula vGPF(y) = (1 - a)·vIPF(y) + a·vVPF(y), and the left and right boundaries of the first eye image are obtained from the values of this function, where vGPF(y) is the value of the generalized vertical projection function of the first head image, a is a constant, vIPF(y) is the mean gray value of the pixels of the first head image in the vertical direction, and vVPF(y) is the variance projection function value of the pixels' gray values. In addition, vIPF(y) may be determined by the formula

vIPF(y) = \frac{1}{x_2 - x_1} \int_{x_1}^{x_2} I(x, y) \, dx

and vVPF(y) may be determined by the formula

vVPF(y) = \frac{1}{x_2 - x_1} \sum_{x_i = x_1}^{x_2} \left[ I(x_i, y) - vIPF(y) \right]^2

where I(x, y) is the pixel value at the coordinates (x, y) in the first head image, x_2 is the upper bound of the abscissa x, and x_1 is the lower bound of the abscissa x.
In this way, the upper and lower boundaries and the left and right boundaries of the first eye image are determined by the generalized horizontal projection function and the generalized vertical projection function respectively, and the first eye image of the target object is thereby obtained.
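On a discrete image, the integrals above become averages over pixel rows. The sketch below implements the horizontal projection functions (the vertical ones are symmetric, with rows and columns exchanged); the mixing constant a is illustrative, not a value from the patent:

```python
def h_ipf(image, x):
    """Integral projection: mean gray value along row x (average over y)."""
    row = image[x]
    return sum(row) / len(row)

def h_vpf(image, x):
    """Variance projection: spread of row x's gray values about their mean."""
    row = image[x]
    mean = h_ipf(image, x)
    return sum((v - mean) ** 2 for v in row) / len(row)

def h_gpf(image, x, a=0.6):
    """Generalized horizontal projection hGPF(x) = (1-a)*hIPF(x) + a*hVPF(x).
    The eye band is located where this profile changes sharply; a = 0.6 is
    an illustrative constant, not a value from the patent."""
    return (1 - a) * h_ipf(image, x) + a * h_vpf(image, x)
```

Scanning `h_gpf` over all rows and looking for its sharp transitions gives the upper and lower eye boundaries; the analogous column scan gives the left and right boundaries.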
Further, when the proportion of the first sclera region presenting the sclera in the first eye image is greater than or equal to the second preset threshold, the eye-state determining device determines that the first eye image is the template image.
The sclera is the white region surrounding the eyeball. Because the color of the sclera is clearly distinguishable from the pupil and the skin, the first sclera region can be determined by training a color model of the sclera. Specifically, sclera samples are collected and a Gaussian color model of the sclera is established; in the embodiments of the present invention, the Gaussian color model is established in the same way as in the prior art, so the details are not repeated here. The first sclera region is then determined from the first eye image by the above Gaussian color model, and when the proportion of the first sclera region is greater than or equal to the second preset threshold, the first eye image is determined to be the template image. For example, when the ratio of the area of the first sclera region to the area of the first eye image is greater than or equal to the second preset threshold, the first eye image is determined to be the template image.
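As an illustration of this template-selection test, the sketch below labels sclera pixels with a one-dimensional Gaussian gray-value model and applies the second preset threshold to the resulting proportion. The model mean, standard deviation, cutoff and the 0.2 threshold are all assumed placeholder values, not trained parameters from the patent:

```python
def sclera_mask(gray_eye_image, mean=230.0, std=15.0, cutoff=2.0):
    """Label a pixel as sclera when its gray value falls within `cutoff`
    standard deviations of an assumed Gaussian sclera color model; the
    mean/std here are placeholders, not trained values."""
    return [[1 if abs(v - mean) <= cutoff * std else 0 for v in row]
            for row in gray_eye_image]

def is_template(gray_eye_image, second_threshold=0.2):
    """Template-selection test: accept the eye image when the sclera
    proportion reaches the (assumed) second preset threshold."""
    mask = sclera_mask(gray_eye_image)
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total >= second_threshold
```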
S102. The eye-state determining device determines a second eye image in a second head image of the target object, and obtains the Euclidean distance between the second eye image and the template image.
The second eye image may be the current eye image of the target object.
Specifically, the eye-state determining device may determine the second eye image by the generalized horizontal projection function and the generalized vertical projection function; the specific implementation is as described above for obtaining the first eye image of the target object and is not repeated here.
The eye-state determining device may also determine the second eye image in the second head image of the target object according to the template image. Specifically, the device matches the template image against the second head image to obtain matching values, and selects the image corresponding to the region with the largest matching value as the second eye image.
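The template search just described can be sketched as an exhaustive sliding-window scan. The matching value below is the negated sum of squared differences, so the region with the largest matching value is the best-fitting window; this particular scoring function is an assumption for illustration:

```python
def best_match(head_image, template):
    """Exhaustive template search: slide `template` over `head_image` and
    return the top-left corner of the window with the largest matching
    value. The matching value here is the negated sum of squared
    differences (an assumed scoring choice), so larger means better."""
    th, tw = len(template), len(template[0])
    best_score, best_pos = None, None
    for top in range(len(head_image) - th + 1):
        for left in range(len(head_image[0]) - tw + 1):
            score = -sum((head_image[top + i][left + j] - template[i][j]) ** 2
                         for i in range(th) for j in range(tw))
            if best_score is None or score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos
```

The window at the returned position would then serve as the second eye image.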
Further, the eye-state determining device obtains the Euclidean distance between the second eye image and the template image by the formula

T = \sum_{i=0}^{m} \sum_{j=0}^{n} \left[ A(i,j) - B(i,j) \right]^2

where T is the Euclidean distance between the second eye image and the template image, A(i,j) is the pixel value of the template image at the pixel coordinates (i,j) in the first head image, B(i,j) is the pixel value of the second eye image at the pixel coordinates (i,j) in the second head image, m is the bound of the abscissa i, and n is the bound of the ordinate j.
In a possible implementation of this embodiment of the present invention, after the eye-state determining device determines the second eye image in the second head image of the target object according to the template image, it determines the area of a second sclera region presenting the sclera in the second eye image, and determines the eye state of the target object according to the area of the second sclera region and the area of the first sclera region.
The second sclera region may likewise be determined by the trained color model of the sclera, which is not repeated here.
Specifically, when the ratio of the area of the second sclera region to the area of the first sclera region is greater than or equal to a third preset threshold, the eye-state determining device determines that the eye state of the target object is an open-eye state; when the ratio is less than the third preset threshold, it determines that the eye state of the target object is a closed-eye state.
S103. When the eye-state determining device determines that the Euclidean distance is greater than or equal to the first preset threshold, it deletes the template image, so as to re-determine a template image after obtaining the next frame of head image, and determines the eye state of the target object according to the re-determined template image.
It should be noted that when the device determines that the Euclidean distance is greater than or equal to the first preset threshold, this indicates that the template image matches the second eye image poorly (for example, because the user has bowed or turned the head). If the device kept using this template image when determining the eye state in the next frame of head image, it would easily misidentify the second eye image and misjudge the eye state of the target object. The device therefore deletes the template image, obtains a template image anew from the next frame of head image, and determines the eye state of the target object according to the newly obtained template image, thereby improving the accuracy of the eye-state identification of the target object.
In addition, the manner in which the eye-state determining device determines the eye state of the target object according to the re-determined template image may refer to steps S101 and S102 above and is not repeated here.
By adopting the above solution, the eye-state determining device obtains a first eye image of a target object according to a first head image of the target object, determines a template image according to the first eye image, determines a second eye image in a second head image of the target object according to the template image, and obtains the Euclidean distance between the second eye image and the template image; when the Euclidean distance is determined to be greater than or equal to a first preset threshold, the device deletes the template image, re-determines a template image after obtaining the next frame of head image, and determines the eye state of the target object according to the re-determined template image. In this way, when the device determines that the Euclidean distance is greater than or equal to the first preset threshold, it determines that the current eye image differs greatly from the template image. By deleting the template image so that a template image is obtained anew from the next frame of head image, the device avoids the misidentification of the eye state of the current target object that such a large difference would otherwise cause, and thereby achieves accurate identification of the eye state of the target object.
An embodiment of the present invention provides a method for determining an eye state. As shown in Fig. 2, the method comprises:
S201. The eye-state determining device obtains a first head image of a target object.
Specifically, the first head image may be a face image obtained by the device through face detection on the target object. The device preprocesses the first head image; the preprocessing may include graying, denoising and similar operations, so that the first head image is converted from a color image into a grayscale head image.
S202. The eye-state determining device obtains a generalized horizontal projection function and a generalized vertical projection function of the first head image, and determines the first eye image according to them.
Specifically, the value of the generalized horizontal projection function may be obtained by the formula hGPF(x) = (1 - a)·hIPF(x) + a·hVPF(x), and the upper and lower boundaries of the first eye image are obtained from the values of this function, where hGPF(x) is the value of the generalized horizontal projection function of the first head image, a is a constant, hIPF(x) is the mean gray value of the pixels of the first head image in the horizontal direction, and hVPF(x) is the variance projection function value of the pixels' gray values. In addition, hIPF(x) may be determined by the formula

hIPF(x) = \frac{1}{y_2 - y_1} \int_{y_1}^{y_2} I(x, y) \, dy

and hVPF(x) may be determined by the formula

hVPF(x) = \frac{1}{y_2 - y_1} \sum_{y_i = y_1}^{y_2} \left[ I(x, y_i) - hIPF(x) \right]^2

where I(x, y) is the pixel value at the coordinates (x, y) in the first head image, y_2 is the upper bound of the ordinate y, and y_1 is the lower bound of the ordinate y;
the value of the generalized vertical projection function may be obtained by the formula vGPF(y) = (1 - a)·vIPF(y) + a·vVPF(y), and the left and right boundaries of the first eye image are obtained from the values of this function, where vGPF(y) is the value of the generalized vertical projection function of the first head image, a is a constant, vIPF(y) is the mean gray value of the pixels of the first head image in the vertical direction, and vVPF(y) is the variance projection function value of the pixels' gray values. In addition, vIPF(y) may be determined by the formula

vIPF(y) = \frac{1}{x_2 - x_1} \int_{x_1}^{x_2} I(x, y) \, dx

and vVPF(y) may be determined by the formula

vVPF(y) = \frac{1}{x_2 - x_1} \sum_{x_i = x_1}^{x_2} \left[ I(x_i, y) - vIPF(y) \right]^2

where I(x, y) is the pixel value at the coordinates (x, y) in the first head image, x_2 is the upper bound of the abscissa x, and x_1 is the lower bound of the abscissa x.
In this way, the upper and lower boundaries and the left and right boundaries of the first eye image are determined by the generalized horizontal projection function and the generalized vertical projection function respectively, and the first eye image of the target object is thereby obtained.
S203. When the proportion of the first sclera region presenting the sclera in the first eye image is greater than or equal to the second preset threshold, the eye-state determining device determines that the first eye image is the template image.
The sclera is the white region surrounding the eyeball. Because the color of the sclera is clearly distinguishable from the pupil and the skin, the first sclera region can be determined by training a color model of the sclera. Specifically, sclera samples are collected and a Gaussian color model of the sclera is established; in the embodiments of the present invention, the Gaussian color model is established in the same way as in the prior art, so the details are not repeated here. The first sclera region is then determined from the first eye image by the above Gaussian color model, and when the proportion of the first sclera region is greater than or equal to the second preset threshold, the first eye image is determined to be the template image. For example, when the ratio of the area of the first sclera region to the area of the first eye image is greater than or equal to the second preset threshold, the first eye image is determined to be the template image.
S204, eye state are determined equipment definite second eye image in the second head portrait image of this destination object.
Wherein, this second eye image can be the current eye image of destination object.
Particularly, eye state determines that equipment can determine the second eye image by broad sense horizontal projection function and broad sense vertical projection function, and its concrete implementation repeats no more with reference to the description of above-mentioned steps S102 herein.
S205: the eye-state determining device determines the area of the second sclera region, in which the sclera appears in the second eye image, and determines the eye state of the target object according to the area of the second sclera region and the area of the first sclera region.
The second sclera region can likewise be determined with the trained sclera color model, which is not repeated here.
Specifically, when the ratio of the area of the second sclera region to the area of the first sclera region is greater than or equal to a third preset threshold, the eye-state determining device determines that the eye state of the target object is the open-eye state; when that ratio is less than the third preset threshold, it determines that the eye state of the target object is the closed-eye state.
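The open/closed decision of step S205 reduces to a single ratio test. A minimal sketch follows; the third preset threshold value of 0.4 is an arbitrary illustrative choice, since the patent does not fix it:

```python
def eye_state(second_sclera_area, first_sclera_area, third_threshold=0.4):
    """Decide the eye state from the ratio of the current (second) sclera
    area to the template's (first) sclera area, per step S205.

    `third_threshold` is an assumed value; the patent leaves it unspecified.
    """
    if first_sclera_area <= 0:
        raise ValueError("template sclera area must be positive")
    ratio = second_sclera_area / first_sclera_area
    return "open" if ratio >= third_threshold else "closed"

print(eye_state(90, 100))   # ratio 0.9 >= 0.4 -> open
print(eye_state(10, 100))   # ratio 0.1 <  0.4 -> closed
```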
S206: the eye-state determining device obtains the Euclidean distance between the second eye image and the template image.
Specifically, the eye-state determining device uses the formula:
T = Σ_{i=0}^{m} Σ_{j=0}^{n} [A(i, j) − B(i, j)]²
to obtain the Euclidean distance between the second eye image and the template image, where T is that Euclidean distance, A(i, j) is the pixel value of the template image at pixel coordinate (i, j) of the first head image, B(i, j) is the pixel value of the second eye image at pixel coordinate (i, j) of the second head image, m is the upper bound of the abscissa i, and n is the upper bound of the ordinate j.
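The distance of step S206 can be sketched directly from the formula; `template_distance` and the toy patches are illustrative names, and note that the formula accumulates squared differences without taking a final square root:

```python
def template_distance(template, candidate):
    """Distance T between two equally sized grayscale patches,
    T = sum over i, j of (A(i, j) - B(i, j))^2, as in step S206."""
    assert len(template) == len(candidate)
    return sum(
        (a - b) ** 2
        for row_a, row_b in zip(template, candidate)
        for a, b in zip(row_a, row_b)
    )

# Toy 2x2 patches: A is the template, B the candidate eye image.
A = [[10, 20], [30, 40]]
B = [[12, 20], [30, 37]]
print(template_distance(A, B))  # 2^2 + 0 + 0 + 3^2 = 13
```

In step S207 this value would be compared against the first preset threshold to decide whether the template image should be deleted.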
S207: when the eye-state determining device determines that the Euclidean distance is greater than or equal to a first preset threshold, it deletes the template image.
It should be noted that when the Euclidean distance is greater than or equal to the first preset threshold, the template image matches the second eye image poorly (for example, the user has lowered or turned their head). If this template image continued to be used when determining the eye state in the next frame of the head image, the device could easily misidentify the second eye image and thus misjudge the eye state of the target object. The eye-state determining device therefore deletes the template image, so that it can obtain a new template image from the next frame of the head image and determine the eye state of the target object from the newly obtained template image, thereby improving the accuracy of eye-state recognition for the target object.
S208: the eye-state determining device obtains the next frame of the head image and redetermines the template image.
S209: the eye-state determining device determines the eye state of the target object according to the redetermined template image.
For determining the eye state of the target object from the redetermined template image, refer to steps S101 and S102 above; the details are not repeated here.
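The control flow of steps S204 through S209 can be sketched as a per-frame loop in which the template is discarded and rebuilt whenever the distance test fails. All callables, and the toy integer "frames" in the example run, are placeholders standing in for the image-processing steps described above:

```python
def track_eye_state(frames, first_threshold, get_eye_image, build_template,
                    distance, classify):
    """Per-frame loop of S204-S209: keep a template while it matches,
    drop it and rebuild it from the next frame when the distance grows
    too large (e.g. the user turned or lowered their head)."""
    template = None
    states = []
    for frame in frames:
        if template is None:
            template = build_template(frame)    # S201-S203 on this frame
            continue
        eye = get_eye_image(frame, template)    # S204: locate second eye image
        states.append(classify(eye, template))  # S205: open/closed decision
        if distance(eye, template) >= first_threshold:  # S206-S207
            template = None                     # deleted; rebuilt next frame
    return states

# Toy run: frames are ints; "distance" is |frame - template|.
states = track_eye_state(
    frames=[5, 6, 7, 30, 31, 32],
    first_threshold=10,
    get_eye_image=lambda frame, tpl: frame,
    build_template=lambda frame: frame,
    distance=lambda eye, tpl: abs(eye - tpl),
    classify=lambda eye, tpl: "open" if abs(eye - tpl) < 10 else "unknown",
)
print(states)  # -> ['open', 'open', 'unknown', 'open']
```

The jump from frame 7 to frame 30 plays the role of a sudden head turn: the distance test fails, the template is deleted, and frame 31 becomes the new template.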
With the above scheme, the eye-state determining device obtains a first eye image of the target object from a first head image of the target object and determines a template image from the first eye image; it determines a second eye image in a second head image of the target object according to the template image and obtains the Euclidean distance between the second eye image and the template image; when the Euclidean distance is determined to be greater than or equal to the first preset threshold, it deletes the template image, redetermines the template image after obtaining the next frame of the head image, and determines the eye state of the target object according to the redetermined template image. Thus, when the device finds the Euclidean distance to be greater than or equal to the first preset threshold, it concludes that the current eye image and the template image differ substantially; by deleting the template image and obtaining a new one from the next frame of the head image, it avoids the misidentification of the eye state of the current target object that such a large difference would otherwise cause, and achieves accurate recognition of the eye state of the target object.
It should be noted that, for brevity of description, the above method embodiments are expressed as a series of action combinations; however, those skilled in the art will appreciate that the present invention is not limited by the described order of actions, since according to the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
An embodiment of the present invention provides an eye-state determining device 30, which, as shown in Figure 3, comprises:
an acquiring unit 30, configured to obtain a first eye image of a target object from a first head image of the target object, and to determine a template image from the first eye image.
Specifically, the eye-state determining device obtains the first head image of the target object; this first head image may be a face image obtained by detecting the face of the target object, and the device preprocesses it. The preprocessing may include graying and denoising, converting the first head image from a color image into a grayscale head image.
a processing unit 31, configured to determine a second eye image in a second head image of the target object, obtain the Euclidean distance between the second eye image and the template image obtained by the acquiring unit 30, delete the template image when the Euclidean distance is determined to be greater than or equal to a first preset threshold, redetermine the template image after obtaining the next frame of the head image, and determine the eye state of the target object according to the redetermined template image.
Optionally, the acquiring unit 30 is specifically configured to obtain a generalized horizontal projection function and a generalized vertical projection function of the head image, and to determine the first eye image according to the generalized horizontal projection function and the generalized vertical projection function.
Specifically, the value of the generalized horizontal projection function can be obtained by the formula hGPF(x) = (1 − a)·hIPF(x) + a·hVPF(x), and the upper and lower boundaries of the first eye image are obtained from this value, where hGPF(x) is the value of the generalized horizontal projection function of the first head image, a is a constant, hIPF(x) is the mean gray value of the pixels of the first head image along the horizontal direction, and hVPF(x) is the variance projection function value of the pixel gray values. Here hIPF(x) can be determined by the formula:
hIPF(x) = (1 / (y₂ − y₁)) ∫_{y₁}^{y₂} I(x, y) dy
and hVPF(x) can be determined by the formula:
hVPF(x) = (1 / (y₂ − y₁)) Σ_{yᵢ=y₁}^{y₂} [I(x, yᵢ) − hIPF(x)]²
where I(x, y) is the pixel value at coordinate (x, y) of the first head image, y₂ is the upper bound of the ordinate y, and y₁ is its lower bound.
The value of the generalized vertical projection function can be obtained by the formula vGPF(y) = (1 − a)·vIPF(y) + a·vVPF(y), and the left and right boundaries of the first eye image are obtained from this value, where vGPF(y) is the value of the generalized vertical projection function of the first head image, a is a constant, vIPF(y) is the mean gray value of the pixels of the first head image along the vertical direction, and vVPF(y) is the variance projection function value of the pixel gray values. Here vIPF(y) can be determined by the formula:
vIPF(y) = (1 / (x₂ − x₁)) ∫_{x₁}^{x₂} I(x, y) dx
and vVPF(y) can be determined by the formula:
vVPF(y) = (1 / (x₂ − x₁)) Σ_{xᵢ=x₁}^{x₂} [I(xᵢ, y) − vIPF(y)]²
where I(x, y) is the pixel value at coordinate (x, y) of the first head image, x₂ is the upper bound of the abscissa x, and x₁ is its lower bound.
In this way, the upper and lower boundaries and the left and right boundaries of the first eye image are determined by the generalized horizontal projection function and the generalized vertical projection function respectively, thereby obtaining the first eye image of the target object.
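Under the assumption that the image is stored as a list of rows and that x indexes rows (so that hGPF highlights the eye band through its variance term), the projection functions above can be sketched as follows. The constant a = 0.6 and the toy image are illustrative choices, not values fixed by the patent:

```python
def generalized_projections(img, a=0.6):
    """Generalized horizontal/vertical projection functions of a grayscale
    image `img` (list of rows), per hGPF(x) = (1-a)*hIPF(x) + a*hVPF(x)
    and vGPF(y) = (1-a)*vIPF(y) + a*vVPF(y)."""
    rows, cols = len(img), len(img[0])
    # hIPF/hVPF: mean and variance of each row (x fixed, y varying).
    h_ipf = [sum(row) / cols for row in img]
    h_vpf = [sum((v - h_ipf[x]) ** 2 for v in row) / cols
             for x, row in enumerate(img)]
    h_gpf = [(1 - a) * h_ipf[x] + a * h_vpf[x] for x in range(rows)]
    # vIPF/vVPF: mean and variance of each column (y fixed, x varying).
    v_ipf = [sum(img[x][y] for x in range(rows)) / rows for y in range(cols)]
    v_vpf = [sum((img[x][y] - v_ipf[y]) ** 2 for x in range(rows)) / rows
             for y in range(cols)]
    v_gpf = [(1 - a) * v_ipf[y] + a * v_vpf[y] for y in range(cols)]
    return h_gpf, v_gpf

# Toy 4x4 image: rows 1-2 alternate dark iris and bright sclera pixels,
# so they stand out through the variance term of the GPF.
img = [
    [200, 200, 200, 200],
    [ 40, 230,  40, 230],
    [ 40, 230,  40, 230],
    [200, 200, 200, 200],
]
h_gpf, v_gpf = generalized_projections(img)
print(h_gpf.index(max(h_gpf)))  # -> 1
```

Thresholding h_gpf and v_gpf (for instance at a fraction of their maxima) would yield the upper/lower and left/right boundaries described above.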
Further, the acquiring unit 30 is specifically configured to determine that the first eye image is the template image when the proportion occupied by the first sclera region, in which the sclera appears in the first eye image, is greater than or equal to the second preset threshold.
Here, the sclera refers to the white portion surrounding the eyeball. Because the sclera is clearly distinguishable in color from both the pupil and the skin, the first sclera region can be determined by training a sclera color model. Specifically, sclera samples are collected and a Gaussian color model of the sclera is built; in the embodiments of the present invention the Gaussian color model is built in the same way as in the prior art, so the details are not repeated. The first sclera region is then extracted from the first eye image with this Gaussian color model, and when the proportion occupied by the first sclera region is greater than or equal to the second preset threshold, the first eye image is determined to be the template image. For example, when the ratio of the area of the first sclera region to the area of the first eye image is greater than or equal to the second preset threshold, the first eye image is determined to be the template image.
Further, the processing unit 31 may match the template image against the second head image to obtain matching values, and select the image corresponding to the region with the largest matching value as the second eye image.
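One plausible reading of this matching step is an exhaustive sliding-window search in which the matching value is the negated sum of squared differences, so that the largest matching value corresponds to the best-aligned region; the function name and toy data are assumptions for illustration:

```python
def best_match(image, template):
    """Slide `template` over `image` and return the top-left corner of the
    region with the largest matching value (here: the negated sum of
    squared differences, so a perfect match scores 0)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for top in range(ih - th + 1):
        for left in range(iw - tw + 1):
            ssd = sum(
                (image[top + i][left + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            score = -ssd  # higher score = better match
            if best is None or score > best:
                best, best_pos = score, (top, left)
    return best_pos

# Toy 4x4 "head image" containing the 2x2 "eye template" at (1, 1).
image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
print(best_match(image, template))  # -> (1, 1)
```

A real implementation would typically restrict the search to a neighborhood of the previous eye position rather than scanning the whole head image.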
Optionally, the processing unit 31 is specifically configured to use the formula:
T = Σ_{i=0}^{m} Σ_{j=0}^{n} [A(i, j) − B(i, j)]²
to obtain the Euclidean distance between the second eye image and the template image, where T is that Euclidean distance, A(i, j) is the pixel value of the template image at pixel coordinate (i, j) of the first head image, B(i, j) is the pixel value of the second eye image at pixel coordinate (i, j) of the second head image, m is the upper bound of the abscissa i, and n is the upper bound of the ordinate j.
Further, the processing unit 31 is also configured to, after the second eye image is determined in the second head image of the target object according to the template image, determine the area of the second sclera region in which the sclera appears in the second eye image, and determine the eye state of the target object according to the area of the second sclera region and the area of the first sclera region.
The second sclera region can likewise be determined with the trained sclera color model, which is not repeated here.
Optionally, the processing unit 31 is specifically configured to determine that the eye state of the target object is the open-eye state when the ratio of the area of the second sclera region to the area of the first sclera region is greater than or equal to the third preset threshold, and to determine that the eye state of the target object is the closed-eye state when that ratio is less than the third preset threshold.
With the above eye-state determining device, the device obtains a first eye image of the target object from a first head image of the target object and determines a template image from the first eye image; it determines a second eye image in a second head image of the target object according to the template image and obtains the Euclidean distance between the second eye image and the template image; when the Euclidean distance is determined to be greater than or equal to the first preset threshold, it deletes the template image, redetermines the template image after obtaining the next frame of the head image, and determines the eye state of the target object according to the redetermined template image. Thus, when the device finds the Euclidean distance to be greater than or equal to the first preset threshold, it concludes that the current eye image and the template image differ substantially; by deleting the template image and obtaining a new one from the next frame of the head image, it avoids the misidentification of the eye state of the current target object that such a large difference would otherwise cause, and achieves accurate recognition of the eye state of the target object.
Those skilled in the art will clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the eye-state determining device described above, which are not repeated here.
An embodiment of the present invention provides an eye-state determining device 40. As shown in Figure 4, the device 40 comprises:
a processor 41, a communication interface (Communications Interface) 42, a memory 43 and a communication bus 44, where the processor 41, the communication interface 42 and the memory 43 communicate with one another through the communication bus 44.
The processor 41 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 43 is configured to store program code, the program code comprising computer operation instructions. The memory 43 may comprise a high-speed RAM memory, and may also comprise a non-volatile memory, for example at least one disk memory.
The communication interface 42 is configured to implement the connection and communication between these components.
The processor 41 executes the program code to obtain a first eye image of a target object from a first head image of the target object, determine a template image from the first eye image, determine a second eye image in a second head image of the target object, obtain the Euclidean distance between the second eye image and the template image, delete the template image when the Euclidean distance is determined to be greater than or equal to a first preset threshold, redetermine the template image after obtaining the next frame of the head image, and determine the eye state of the target object according to the redetermined template image.
Optionally, the processor 41 is specifically configured to obtain a generalized horizontal projection function and a generalized vertical projection function of the first head image, and to determine the first eye image according to the generalized horizontal projection function and the generalized vertical projection function.
Optionally, the processor 41 is specifically configured to determine that the first eye image is the template image when the proportion occupied by the first sclera region, in which the sclera appears in the first eye image, is greater than or equal to the second preset threshold.
Optionally, the processor 41 is specifically configured to use the formula:
T = Σ_{i=0}^{m} Σ_{j=0}^{n} [A(i, j) − B(i, j)]²
to obtain the Euclidean distance between the second eye image and the template image, where T is that Euclidean distance, A(i, j) is the pixel value of the template image at pixel coordinate (i, j) of the first head image, B(i, j) is the pixel value of the second eye image at pixel coordinate (i, j) of the second head image, m is the upper bound of the abscissa i, and n is the upper bound of the ordinate j.
Optionally, the processor 41 is also configured to, after the second eye image is determined in the second head image of the target object according to the template image, determine the area of the second sclera region in which the sclera appears in the second eye image, and determine the eye state of the target object according to the area of the second sclera region and the area of the first sclera region.
Optionally, the processor 41 is specifically configured to determine that the eye state of the target object is the open-eye state when the ratio of the area of the second sclera region to the area of the first sclera region is greater than or equal to the third preset threshold, and to determine that the eye state of the target object is the closed-eye state when that ratio is less than the third preset threshold.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A method for determining an eye state, characterized by comprising:
obtaining a first eye image of a target object from a first head image of the target object, and determining a template image from the first eye image;
determining a second eye image in a second head image of the target object, and obtaining a Euclidean distance between the second eye image and the template image; and
when the Euclidean distance is determined to be greater than or equal to a first preset threshold, deleting the template image, redetermining the template image after obtaining a next frame of the head image, and determining the eye state of the target object according to the redetermined template image.
2. The method according to claim 1, characterized in that the obtaining a first eye image of the target object from the first head image of the target object comprises:
obtaining a generalized horizontal projection function and a generalized vertical projection function of the first head image; and
determining the first eye image according to the generalized horizontal projection function and the generalized vertical projection function.
3. The method according to claim 1 or 2, characterized in that the determining a template image from the first eye image comprises:
when the proportion occupied by a first sclera region, in which a sclera appears in the first eye image, is greater than or equal to a second preset threshold, determining that the first eye image is the template image.
4. The method according to any one of claims 1 to 3, characterized in that the obtaining a Euclidean distance between the second eye image and the template image comprises:
using the formula:
T = Σ_{i=0}^{m} Σ_{j=0}^{n} [A(i, j) − B(i, j)]²
to obtain the Euclidean distance between the second eye image and the template image, where T is the Euclidean distance between the second eye image and the template image, A(i, j) is the pixel value of the template image at pixel coordinate (i, j) of the first head image, B(i, j) is the pixel value of the second eye image at pixel coordinate (i, j) of the second head image, m is the upper bound of the abscissa i, and n is the upper bound of the ordinate j.
5. The method according to claim 3 or 4, characterized in that, after the second eye image is determined in the second head image of the target object according to the template image, the method further comprises:
determining an area of a second sclera region in which the sclera appears in the second eye image; and
determining the eye state of the target object according to the area of the second sclera region and the area of the first sclera region.
6. The method according to claim 5, characterized in that the determining the eye state of the target object according to the area of the second sclera region and the area of the first sclera region comprises:
when a ratio of the area of the second sclera region to the area of the first sclera region is greater than or equal to a third preset threshold, determining that the eye state of the target object is an open-eye state; and
when the ratio of the area of the second sclera region to the area of the first sclera region is less than the third preset threshold, determining that the eye state of the target object is a closed-eye state.
7. An eye-state determining device, characterized by comprising:
an acquiring unit, configured to obtain a first eye image of a target object from a first head image of the target object, and to determine a template image from the first eye image; and
a processing unit, configured to determine a second eye image in a second head image of the target object, obtain a Euclidean distance between the second eye image and the template image obtained by the acquiring unit, delete the template image when the Euclidean distance is determined to be greater than or equal to a first preset threshold, redetermine the template image after obtaining a next frame of the head image, and determine the eye state of the target object according to the redetermined template image.
8. The device according to claim 7, characterized in that the acquiring unit is specifically configured to obtain a generalized horizontal projection function and a generalized vertical projection function of the head image, and to determine the first eye image according to the generalized horizontal projection function and the generalized vertical projection function.
9. The device according to claim 7 or 8, characterized in that the acquiring unit is specifically configured to determine that the first eye image is the template image when the proportion occupied by a first sclera region, in which a sclera appears in the first eye image, is greater than or equal to a second preset threshold.
10. The device according to any one of claims 7 to 9, characterized in that the processing unit is specifically configured to use the formula:
T = Σ_{i=0}^{m} Σ_{j=0}^{n} [A(i, j) − B(i, j)]²
to obtain the Euclidean distance between the second eye image and the template image, where T is the Euclidean distance between the second eye image and the template image, A(i, j) is the pixel value of the template image at pixel coordinate (i, j) of the first head image, B(i, j) is the pixel value of the second eye image at pixel coordinate (i, j) of the second head image, m is the upper bound of the abscissa i, and n is the upper bound of the ordinate j.
11. The device according to claim 9 or 10, characterized in that the processing unit is further configured to, after the second eye image is determined in the second head image of the target object according to the template image, determine an area of a second sclera region in which the sclera appears in the second eye image, and determine the eye state of the target object according to the area of the second sclera region and the area of the first sclera region.
12. The device according to claim 11, characterized in that the processing unit is specifically configured to determine that the eye state of the target object is an open-eye state when a ratio of the area of the second sclera region to the area of the first sclera region is greater than or equal to a third preset threshold, and to determine that the eye state of the target object is a closed-eye state when the ratio of the area of the second sclera region to the area of the first sclera region is less than the third preset threshold.
CN2013102939039A 2013-07-12 2013-07-12 Method and device for determining states of eyes Pending CN103366162A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013102939039A CN103366162A (en) 2013-07-12 2013-07-12 Method and device for determining states of eyes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013102939039A CN103366162A (en) 2013-07-12 2013-07-12 Method and device for determining states of eyes

Publications (1)

Publication Number Publication Date
CN103366162A true CN103366162A (en) 2013-10-23

Family

ID=49367468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013102939039A Pending CN103366162A (en) 2013-07-12 2013-07-12 Method and device for determining states of eyes

Country Status (1)

Country Link
CN (1) CN103366162A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731418A (en) * 2005-08-19 2006-02-08 清华大学 Method of robust accurate eye positioning in complicated background image
CN102034114A (en) * 2010-12-03 2011-04-27 天津工业大学 Characteristic point detection-based template matching tracing method
CN102834837A (en) * 2010-05-13 2012-12-19 虹膜技术公司 Apparatus and method for iris recognition using multiple iris templates
CN102831399A (en) * 2012-07-30 2012-12-19 华为技术有限公司 Method and device for determining eye state

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Huiming: "Research on the application of image Euclidean distance in face recognition", 《计算机工程与设计》 (Computer Engineering and Design), vol. 29, no. 14, 31 July 2008 (2008-07-31) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106250426A (en) * 2016-07-25 2016-12-21 深圳天珑无线科技有限公司 A kind of photo processing method and terminal
CN107292261A (en) * 2017-06-16 2017-10-24 深圳天珑无线科技有限公司 A kind of photographic method and its mobile terminal
CN107292261B (en) * 2017-06-16 2021-07-13 深圳天珑无线科技有限公司 Photographing method and mobile terminal thereof
CN110070040A (en) * 2019-04-22 2019-07-30 成都品果科技有限公司 A kind of group photo screening technique and device


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20131023