CN105447466A - Kinect sensor based identity comprehensive identification method
- Publication number: CN105447466A (application CN201510862672.8A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a Kinect-sensor-based comprehensive identity recognition method comprising the following steps: 1, acquiring multiple groups of body features of a person being registered with a Kinect sensor, each group comprising face image information, skin/hair color information, and body height information; 2, extracting Haar-like features from the face image information and obtaining the person's face recognition classifier through an SVM algorithm; 3, building the person's skin/hair color mixture-of-Gaussians model from the skin/hair color information; 4, computing the person's mean height and standard deviation from the body height information; 5, storing the results of steps 2, 3 and 4 in a database to complete the person's registration; and 6, capturing the body features of the current person with the Kinect sensor and comparing them with the registered information in the database to determine the current person's identity.
Description
Technical field
The present invention relates to a personal identification method, and in particular to a comprehensive identity recognition method that fuses body height information, skin/hair color information, and face image information on the basis of a Kinect sensor.
Background art
In recent years, with breakthroughs in computing, the Internet, artificial intelligence, and related technologies, robots have been gradually entering people's everyday lives and providing services of many kinds. For a service robot, exchanging information of various kinds with people is a prerequisite for good service, and the ability to identify a person's identity quickly and correctly (as owner, customer, stranger, and so on) and to treat each accordingly is the basic guarantee of that exchange.
In the field of artificial intelligence, identity authentication can be realized through many techniques, such as password entry, ID-card swiping, fingerprint authentication, and iris recognition. Although these techniques are widely used and offer good uniqueness and confidentiality, they are ill suited to a service robot. The reason is simple: people would rather interact with a robot naturally, as with another person, than gain control of it through cumbersome actions such as typing a password or swiping a card. Such authentication means are therefore appropriate only for obtaining control as a robot administrator or maintainer.
At present, the authentication means most widely applied in the robotics field is image-based face recognition, which has the advantages of being simple, natural, and contact-free. Face recognition, however, has inherent shortcomings. On the one hand, the identification process requires the cooperation of the person being identified, who must present a frontal face, and recognition performs poorly under bad lighting, with low accuracy; on the other hand, a robot is easily deceived by an intruder holding up a photograph. In household scenarios in particular, where the robot mainly serves family members, identifying family members by face recognition alone inevitably means frequently asking the owner to present a frontal face, which is inconvenient, inflexible, and uncomfortable to use.
Summary of the invention
The object of the present invention is to provide a Kinect-sensor-based comprehensive identity recognition method that is easy to implement, fast to recognize, and highly accurate. Under a household application scenario, a service robot adopting the present invention can in most cases detect and identify a person's identity without frequently requiring the owner's cooperation, and is therefore highly practical.
To solve the technical problems that the authentication modes conventional in the prior art are unsuitable for service robots, and that single-modality image-based face recognition requires the owner's frequent cooperation, performs poorly in bad light, and is easily deceived by an intruder with a photograph, the present invention provides a Kinect-sensor-based comprehensive identity recognition method comprising a registration process and an identification process, and specifically comprising the following steps:
1. Have the person being registered rotate his or her face through multiple angles in front of the Kinect sensor and perform different body movements at different positions, so as to acquire multiple groups of body features, each group comprising face image information, skin/hair color information, and body height information;
2. From the person's groups of face images, extract Haar-like features and train a dedicated face recognition classifier with an SVM algorithm, obtaining that person's face classifier;
3. From the person's groups of skin/hair color information, accumulate the samples to obtain that person's skin/hair color mixture-of-Gaussians model;
4. From the person's groups of body height information, compute that person's mean height and its standard deviation;
5. Store the results of steps 2, 3 and 4 in a database to complete the person's registration, and register all persons to be registered in the same way;
6. After registration is complete, use the Kinect sensor to capture the body features of the current person, compare them with the registered information in the database, and determine the current person's identity from the comparison result.
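The registration steps above can be sketched as a minimal record kept per registered person. The structure and names below are illustrative, not from the patent; the classifier and color model are left as opaque fields to be filled in by steps 2 and 3, and step 4's reduction of the height samples is shown concretely.

```python
from dataclasses import dataclass, field
import statistics

@dataclass
class Registration:
    """One registered person's record, per steps 2-5 (illustrative structure)."""
    name: str
    face_classifier: object = None        # SVM over Haar-like features (step 2)
    color_model: tuple = None             # (mu1, sigma1, mu2, sigma2) (step 3)
    heights: list = field(default_factory=list)

    def add_height_sample(self, h):
        """Collect one body-height measurement from a feature group."""
        self.heights.append(h)

    def finalize(self):
        """Step 4: reduce the height samples to a mean and standard deviation."""
        return statistics.mean(self.heights), statistics.stdev(self.heights)

reg = Registration('owner')
for h in (169.8, 170.2, 170.0, 169.6, 170.4):
    reg.add_height_sample(h)
print(tuple(round(v, 2) for v in reg.finalize()))  # (170.0, 0.32)
```

In step 5 such records would simply be serialized into the database, one per registered person.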
Further, in the Kinect-sensor-based comprehensive identity recognition method of the present invention, the face image information of the person being registered in step 1 is acquired in the following way:
(1) Use the Kinect sensor to collect depth and color images containing the person being registered, and recover the person's skeleton joint information from the depth data in the depth image, wherein:
the torso comprises the crown of the head, the jaw, the chest, the abdomen, and the hip, denoted C1, C2, C3, C4, C5 in that order;
the left arm comprises the left fingertip, left wrist, left elbow, and left shoulder, denoted L1, L2, L3, L4 in that order;
the right arm comprises the right fingertip, right wrist, right elbow, and right shoulder, denoted R1, R2, R3, R4 in that order;
the left leg comprises the left toe, left ankle, left knee, and left hip, denoted E1, E2, E3, E4 in that order;
the right leg comprises the right toe, right ankle, right knee, and right hip, denoted F1, F2, F3, F4 in that order;
(2) Using the line through joints C1 and C2 of the person's skeleton as an axis, apply a body segmentation method to extract the head region from the color image as the head image;
(3) Apply a face detection algorithm to judge whether the head image contains a face; if it does, capture the face image as the person's face image information; otherwise treat the head image as containing no face.
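Sub-step (2) can be sketched as cropping an axis-aligned box around the crown-to-jaw segment, assuming C1 (crown) and C2 (jaw) have already been projected into color-image pixel coordinates. The margin factor and all names below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def head_bbox(c1, c2, img_shape, margin=0.75):
    """Axis-aligned head box around the crown-to-jaw segment.

    c1, c2    : (row, col) pixel coordinates of the crown and jaw joints
    img_shape : (height, width) of the color image
    margin    : extra box half-extent as a fraction of the crown-jaw distance
    """
    c1 = np.asarray(c1, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    head_len = np.linalg.norm(c2 - c1)            # crown-to-jaw length in pixels
    center = (c1 + c2) / 2.0
    half = np.array([head_len / 2.0 + margin * head_len,   # vertical half-extent
                     margin * head_len])                    # horizontal half-extent
    top_left = np.maximum(center - half, 0).astype(int)
    bot_right = np.minimum(center + half, np.array(img_shape) - 1).astype(int)
    return tuple(top_left), tuple(bot_right)

# Crown at row 40, jaw at row 80, both at column 100, in a 480x640 image.
tl, br = head_bbox((40, 100), (80, 100), (480, 640))
print(tl, br)  # (10, 70) (110, 130)
```

The returned corners can then be used to slice the color image and hand the crop to the face detector of sub-step (3).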
Further, in the Kinect-sensor-based comprehensive identity recognition method of the present invention, the skin/hair color information of the person being registered in step 1 is acquired in the following way:
(1) Convert the person's head image from the RGB color space to the YCbCr color space and, for each pixel of the head image, judge whether its CbCr chrominance components fall within a basic skin color distribution U(Cb, Cr); label the pixel 1 if they do and 0 if they do not;
(2) According to the judgement and labels of step (1), take all pixels labeled 1 as a set and compute the mean of their CbCr components and the corresponding covariance matrix, giving a single-Gaussian skin color model, where the CbCr mean is denoted μ1, the covariance matrix σ1², and the model N1(μ1, σ1²);
(3) According to the judgement and labels of step (1), take all pixels labeled 0 as a set and compute the mean of their CbCr components and the corresponding covariance matrix, giving a single-Gaussian hair color model, where the CbCr mean is denoted μ2, the covariance matrix σ2², and the model N2(μ2, σ2²).
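Sub-steps (2) and (3) amount to fitting one Gaussian to the CbCr values of the skin-labeled pixels and another to the hair-labeled pixels. A minimal sketch follows; since the patent does not specify the skin distribution U(Cb, Cr), it is stood in for here by a simple box rule over CbCr, and all names and range values are illustrative assumptions.

```python
import numpy as np

def fit_single_gaussian(cbcr):
    """Mean vector and 2x2 covariance matrix of an (N, 2) array of CbCr samples."""
    mu = cbcr.mean(axis=0)
    sigma = np.cov(cbcr, rowvar=False)
    return mu, sigma

def skin_hair_models(cbcr_pixels, cb_range=(77, 127), cr_range=(133, 173)):
    """Split head pixels into skin (label 1) and hair (label 0) by a box rule
    over CbCr (stand-in for U(Cb, Cr)), then fit one Gaussian to each set."""
    cb, cr = cbcr_pixels[:, 0], cbcr_pixels[:, 1]
    skin = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
    n1 = fit_single_gaussian(cbcr_pixels[skin])    # skin model N1(mu1, sigma1^2)
    n2 = fit_single_gaussian(cbcr_pixels[~skin])   # hair model N2(mu2, sigma2^2)
    return n1, n2

rng = np.random.default_rng(0)
skin_px = rng.normal([100, 150], 3, size=(500, 2))   # synthetic skin-like pixels
hair_px = rng.normal([115, 120], 3, size=(300, 2))   # synthetic hair-like pixels
(mu1, s1), (mu2, s2) = skin_hair_models(np.vstack([skin_px, hair_px]))
print(np.round(mu1), np.round(mu2))
```

The two fitted pairs together form the mixture model N = (μ1, σ1², μ2, σ2²) accumulated over a person's feature groups.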
Further, in the Kinect-sensor-based comprehensive identity recognition method of the present invention, the skin/hair color mixture model of step 3 is N = (μ1, σ1², μ2, σ2²).
Further, in the Kinect-sensor-based comprehensive identity recognition method of the present invention, the body height information of the person being registered in step 1 is acquired in the following way:
(1) Divide the skeleton joints into five groups: group 1 is (C1, C2, C3, C4, C5), group 2 is (L1, L2, L3, L4), group 3 is (R1, R2, R3, R4), group 4 is (E1, E2, E3, E4), and group 5 is (F1, F2, F3, F4);
(2) Fit a three-dimensional straight line to each joint group by least squares and compute each group's line-fitting error, denoted Δ1, Δ2, Δ3, Δ4, Δ5 respectively;
(3) When the errors Δ1, Δ2, Δ3, Δ4, Δ5 are all below a preset threshold T, the body is considered straight at every joint, and the body height, denoted H, is computed as follows:
α = (Δ4 + Δ5) / (Δ1 + Δ2 + Δ3 + Δ4 + Δ5)    (6)
H = α(H1 + max(H2, H3)) + 2(1 - α)max(A1, A2)    (7)
In the above formulas (1) to (5):
d(C1, C2), d(C2, C3), d(C3, C4) and d(C4, C5) denote the three-dimensional distances between successive torso joints;
d(E1, E2), d(E2, E3) and d(E3, E4) denote the three-dimensional distances between successive left-leg joints;
d(F1, F2), d(F2, F3) and d(F3, F4) denote the three-dimensional distances between successive right-leg joints;
d(L1, L2), d(L2, L3), d(L3, L4) and d(L4, C3) denote the three-dimensional distances along the left arm and from the left shoulder to the chest;
d(R1, R2), d(R2, R3), d(R3, R4) and d(R4, C3) denote the three-dimensional distances along the right arm and from the right shoulder to the chest.
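The straightness test and formulas (6) and (7) can be sketched as follows. Two assumptions are flagged: the text does not reproduce formulas (1) to (5), so H1, H2, H3, A1 and A2 are taken here to be the chained joint-to-joint distances of the torso, left leg, right leg, left arm and right arm respectively, and the fitting error is taken as the RMS point-to-line distance. All names are illustrative.

```python
import numpy as np

def line_fit_error(points):
    """RMS distance of 3-D points to their least-squares line (via SVD/PCA)."""
    p = np.asarray(points, dtype=float)
    centered = p - p.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]                      # coordinates along the line
    residual = centered - np.outer(proj, vt[0])  # perpendicular components
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))

def chain_length(points):
    """Sum of distances between consecutive joints in a group."""
    p = np.asarray(points, dtype=float)
    return float(np.linalg.norm(np.diff(p, axis=0), axis=1).sum())

def body_height(groups, threshold=0.05):
    """groups = [torso, left_arm, right_arm, left_leg, right_leg] joint arrays.
    Returns H per formulas (6)-(7), or None if any group is not straight."""
    deltas = [line_fit_error(g) for g in groups]
    if any(d >= threshold for d in deltas):
        return None                              # posture not straight enough
    torso, larm, rarm, lleg, rleg = groups
    h1 = chain_length(torso)                            # assumed meaning of H1
    h2, h3 = chain_length(lleg), chain_length(rleg)     # assumed H2, H3
    a1, a2 = chain_length(larm), chain_length(rarm)     # assumed A1, A2
    alpha = (deltas[3] + deltas[4]) / sum(deltas)       # formula (6)
    return alpha * (h1 + max(h2, h3)) + 2 * (1 - alpha) * max(a1, a2)  # (7)
```

Over many feature groups, the resulting H samples are then reduced to the mean height h and standard deviation Δh used during identification.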
Further, in the Kinect-sensor-based comprehensive identity recognition method of the present invention, in step 6 the body features of the current person captured with the Kinect sensor are compared with the registered information in the database to determine the current person's identity, specifically through the following steps:
(1) Use the Kinect sensor to acquire the current person's body height information and skin/hair color information;
(2) Compute the current person's height from the body height information, query the registered records in the database, and scan the interval [h - 3Δh, h + 3Δh] of each registered person, where h is that person's mean height and Δh the standard deviation, to judge whether any registered person's height matches the current person's; if exactly one match exists, identify the current person directly as that registered person, end the identification, and output the result; if matches exist but are not unique, go to step (3); if no match exists, go to step (4);
(3) From the current person's skin/hair color information, obtain the current person's skin/hair color mixture model; within the candidate set of registered persons whose heights matched but were not unique in step (2), judge whether exactly one registered person's skin/hair color model matches the current person's; if a unique match exists, identify the current person as that registered person, end the identification, and output the result; if no unique match exists, go to step (4);
(4) Issue a voice instruction asking the current person to face the Kinect sensor frontally, and acquire the current person's face image information;
(5) From the current person's face image information, query the registered persons' information in the database and judge whether a matching registered person exists; if so, identify the current person as that registered person, end the identification, and output the result; if not, classify the current person as a stranger or ask the current person to register;
The current person's body height information, skin/hair color information, and face image information are acquired with the Kinect sensor in the same way as during registration.
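The decision cascade of steps (2), (3) and (5) can be sketched as follows. The color-model and face comparisons are abstracted behind placeholder predicates, since their exact metrics are left to the component implementations; all names are illustrative.

```python
def identify(current, registry, color_match, face_match, get_face):
    """Cascade: height gate -> skin/hair color model -> face recognition.

    current   : dict with keys 'height' and 'color_model'
    registry  : dict name -> record with 'h' (mean height), 'dh' (std dev),
                plus whatever color_match / face_match need
    color_match(current_model, record) -> bool
    face_match(face_image, record) -> bool
    get_face() -> face image (obtained after a voice prompt in the method)
    """
    # Step (2): keep registrants whose [h - 3*dh, h + 3*dh] interval
    # contains the current person's height.
    cands = [n for n, r in registry.items()
             if r['h'] - 3 * r['dh'] <= current['height'] <= r['h'] + 3 * r['dh']]
    if len(cands) == 1:
        return cands[0]
    if len(cands) > 1:
        # Step (3): narrow by skin/hair color model, requiring a unique match.
        by_color = [n for n in cands if color_match(current['color_model'], registry[n])]
        if len(by_color) == 1:
            return by_color[0]
    # Steps (4)-(5): fall back to face recognition over all registrants.
    face = get_face()
    for n, r in registry.items():
        if face_match(face, r):
            return n
    return 'stranger'

registry = {'alice': {'h': 165.0, 'dh': 1.0}, 'bob': {'h': 180.0, 'dh': 1.0}}
print(identify({'height': 179.5, 'color_model': None}, registry,
               lambda m, r: False, lambda f, r: False, lambda: None))  # bob
```

Note how the cheap height gate resolves most cases alone, and the face prompt is issued only when the passive cues are ambiguous, which is the source of the method's convenience claim.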
Compared with the prior art, the Kinect-sensor-based comprehensive identity recognition method of the present invention has the following advantages. The method is built on Microsoft's Kinect sensor, which performs depth measurement with active infrared light and therefore largely avoids the influence of lighting conditions and occluders, acquiring the color image and depth image of the photographed scene in real time. From the color and depth images taken by the Kinect sensor, the present invention extracts a person's face image information, skin/hair color information, and height information, and fuses these multiple cues in both the registration process and the identification process, which effectively raises the accuracy of identity recognition. The method is easy to implement, uncomplicated, non-contact, and requires no wearable device. Under household application scenarios in particular, a robot adopting the method of the present invention can identify different family members quickly and conveniently without frequently requiring the owner's cooperation, and is therefore highly practical.
A Kinect-sensor-based comprehensive identity recognition method of the present invention is described in further detail below in conjunction with the embodiments illustrated in the accompanying drawings:
Brief description of the drawings
Fig. 1 is a schematic diagram of the human skeleton in a Kinect-sensor-based comprehensive identity recognition method of the present invention;
Fig. 2 is a flow chart of the identification process of a Kinect-sensor-based comprehensive identity recognition method of the present invention.
Embodiment
The application of a robot carrying a Kinect sensor to identify different family members in a home environment is described below as a concrete embodiment of a Kinect-sensor-based comprehensive identity recognition method of the present invention. It should first be noted that Microsoft's Kinect sensor can collect depth images and color images in real time and, through the SDK supplied with Kinect, can effectively acquire human targets within its field of view and recognize their skeletons.
A Kinect-sensor-based comprehensive identity recognition method of the present invention comprises a registration process and an identification process, and specifically comprises the following steps:
1. Have the person being registered rotate his or her face through multiple angles in front of the Kinect sensor and perform different body movements at different positions, so as to acquire multiple groups of body features, each group comprising face image information, skin/hair color information, and body height information;
2. From the person's groups of face images, extract Haar-like features and train a dedicated face recognition classifier with an SVM algorithm, obtaining that person's face classifier;
3. From the person's groups of skin/hair color information, accumulate the samples to obtain that person's skin/hair color mixture-of-Gaussians model;
4. From the person's groups of body height information, compute that person's mean height and its standard deviation;
5. Store the results of steps 2, 3 and 4 in a database to complete the person's registration, and register all persons to be registered in the same way;
6. After registration is complete, use the Kinect sensor to capture the body features of the current person, compare them with the registered information in the database, and determine the current person's identity from the comparison result.
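Step 2's feature extraction can be illustrated with a short sketch. The snippet below is a minimal illustration, not the patented implementation: it evaluates one two-rectangle Haar-like feature on a grayscale face crop via an integral image; in the full method a large bank of such features would be fed to an SVM trained separately for each registered person. All function names are illustrative.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row and column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in the h-by-w rectangle with top-left corner (top, left)."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

def haar_two_rect_vertical(ii, top, left, h, w):
    """Two-rectangle Haar-like feature: upper half minus lower half (h even)."""
    half = h // 2
    return rect_sum(ii, top, left, half, w) - rect_sum(ii, top + half, left, half, w)

# Toy crop: bright upper half, dark lower half -> strongly positive response.
face = np.vstack([np.full((4, 8), 200), np.full((4, 8), 50)])
ii = integral_image(face)
print(haar_two_rect_vertical(ii, 0, 0, 8, 8))  # 200*32 - 50*32 = 4800
```

The integral image makes every rectangle sum, and hence every Haar-like feature, a constant-time lookup regardless of rectangle size.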
In step 1 above, the face image information of the person being registered is acquired in the following way:
(1) Use the Kinect sensor to collect depth and color images containing the person being registered, and recover the person's skeleton joint information from the depth data in the depth image, as shown in Fig. 1, wherein:
the torso comprises the crown of the head, the jaw, the chest, the abdomen, and the hip, denoted C1, C2, C3, C4, C5 in that order;
the left arm comprises the left fingertip, left wrist, left elbow, and left shoulder, denoted L1, L2, L3, L4 in that order;
the right arm comprises the right fingertip, right wrist, right elbow, and right shoulder, denoted R1, R2, R3, R4 in that order;
the left leg comprises the left toe, left ankle, left knee, and left hip, denoted E1, E2, E3, E4 in that order;
the right leg comprises the right toe, right ankle, right knee, and right hip, denoted F1, F2, F3, F4 in that order;
(2) Using the line through joints C1 and C2 of the person's skeleton as an axis, apply a body segmentation method to extract the head region from the color image as the head image;
(3) Apply a face detection algorithm to judge whether the head image contains a face; if it does, capture the face image as the person's face image information; otherwise treat the head image as containing no face.
In step 1 above, the skin/hair color information of the person being registered is acquired in the following way:
(1) Convert the person's head image from the RGB color space to the YCbCr color space and, for each pixel of the head image, judge whether its CbCr chrominance components fall within a basic skin color distribution U(Cb, Cr); label the pixel 1 if they do and 0 if they do not;
(2) According to the judgement and labels of step (1), take all pixels labeled 1 as a set and compute the mean of their CbCr components and the corresponding covariance matrix, giving a single-Gaussian skin color model, where the CbCr mean is denoted μ1, the covariance matrix σ1², and the model N1(μ1, σ1²);
(3) According to the judgement and labels of step (1), take all pixels labeled 0 as a set and compute the mean of their CbCr components and the corresponding covariance matrix, giving a single-Gaussian hair color model, where the CbCr mean is denoted μ2, the covariance matrix σ2², and the model N2(μ2, σ2²).
In step 3 above, the skin/hair color mixture model is N = (μ1, σ1², μ2, σ2²).
In step 1 above, the body height information of the person being registered is acquired in the following way:
(1) Divide the skeleton joints into five groups: group 1 is (C1, C2, C3, C4, C5), group 2 is (L1, L2, L3, L4), group 3 is (R1, R2, R3, R4), group 4 is (E1, E2, E3, E4), and group 5 is (F1, F2, F3, F4);
(2) Fit a three-dimensional straight line to each joint group by least squares and compute each group's line-fitting error, denoted Δ1, Δ2, Δ3, Δ4, Δ5 respectively;
(3) When the errors Δ1, Δ2, Δ3, Δ4, Δ5 are all below a preset threshold T, the body is considered straight at every joint, and the body height, denoted H, is computed as follows:
α = (Δ4 + Δ5) / (Δ1 + Δ2 + Δ3 + Δ4 + Δ5)    (6)
H = α(H1 + max(H2, H3)) + 2(1 - α)max(A1, A2)    (7)
In formulas (1) to (5):
d(C1, C2), d(C2, C3), d(C3, C4) and d(C4, C5) denote the three-dimensional distances between successive torso joints;
d(E1, E2), d(E2, E3) and d(E3, E4) denote the three-dimensional distances between successive left-leg joints;
d(F1, F2), d(F2, F3) and d(F3, F4) denote the three-dimensional distances between successive right-leg joints;
d(L1, L2), d(L2, L3), d(L3, L4) and d(L4, C3) denote the three-dimensional distances along the left arm and from the left shoulder to the chest;
d(R1, R2), d(R2, R3), d(R3, R4) and d(R4, C3) denote the three-dimensional distances along the right arm and from the right shoulder to the chest.
In step 6 above, the body features of the current person are captured with the Kinect sensor and compared with the registered information in the database to determine the current person's identity; the flow is shown in Fig. 2 and specifically comprises the following steps:
(1) Use the Kinect sensor to acquire the current person's body height information and skin/hair color information;
(2) Compute the current person's height from the body height information, query the registered records in the database, and scan the interval [h - 3Δh, h + 3Δh] of each registered person, where h is that person's mean height and Δh the standard deviation, to judge whether any registered person's height matches the current person's; if exactly one match exists, identify the current person directly as that registered person, end the identification, and output the result; if matches exist but are not unique, go to step (3); if no match exists, go to step (4);
(3) From the current person's skin/hair color information, obtain the current person's skin/hair color mixture model; within the candidate set of registered persons whose heights matched but were not unique in step (2), judge whether exactly one registered person's skin/hair color model matches the current person's; if a unique match exists, identify the current person as that registered person, end the identification, and output the result; if no unique match exists, go to step (4);
(4) Issue a voice instruction asking the current person to face the Kinect sensor frontally, and acquire the current person's face image information;
(5) From the current person's face image information, query the registered persons' information in the database and judge whether a matching registered person exists; if so, identify the current person as that registered person, end the identification, and output the result; if not, classify the current person as a stranger or ask the current person to register;
It should be noted that the Kinect sensor acquires the current person's body height information, skin/hair color information, and face image information in the same way as it acquires the corresponding information of a person being registered during registration.
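Step (3) requires comparing the current person's skin/hair mixture model N = (μ1, σ1², μ2, σ2²) against each registered model. The comparison metric is not spelled out in the text; one reasonable, purely illustrative choice is a symmetric Mahalanobis-style distance summed over the skin and hair components, thresholded to decide a match. The threshold value and all names below are assumptions.

```python
import numpy as np

def gaussian_distance(mu_a, cov_a, mu_b, cov_b):
    """Symmetric Mahalanobis-style distance between two 2-D Gaussians:
    the squared mean difference weighted by the averaged covariance."""
    d = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    avg_cov = (np.asarray(cov_a, float) + np.asarray(cov_b, float)) / 2.0
    return float(d @ np.linalg.inv(avg_cov) @ d)

def color_models_match(model_a, model_b, threshold=9.0):
    """model = (mu_skin, cov_skin, mu_hair, cov_hair); match if the summed
    skin + hair distance is below the (illustrative) threshold."""
    d_skin = gaussian_distance(model_a[0], model_a[1], model_b[0], model_b[1])
    d_hair = gaussian_distance(model_a[2], model_a[3], model_b[2], model_b[3])
    return d_skin + d_hair < threshold

eye = np.eye(2) * 4.0
a = ([100, 150], eye, [115, 120], eye)
b = ([101, 150], eye, [115, 121], eye)   # very similar coloring
c = ([110, 160], eye, [100, 100], eye)   # clearly different coloring
print(color_models_match(a, b), color_models_match(a, c))  # True False
```

Whatever metric is used, the uniqueness requirement of step (3) means the match predicate must hold for exactly one candidate before the face-recognition fallback is skipped.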
The embodiments above merely describe preferred implementations of the present invention and do not limit the claimed scope of protection; all variations of various forms that engineers in this field derive from the technical solution of the present invention, without departing from its design concept and spirit, shall fall within the scope of protection determined by the claims of the present invention.
Claims (6)
1. A Kinect-sensor-based comprehensive identity recognition method, characterized in that it comprises a registration process and an identification process, and specifically comprises the following steps:
1. Have the person being registered rotate his or her face through multiple angles in front of the Kinect sensor and perform different body movements at different positions, so as to acquire multiple groups of body features, each group comprising face image information, skin/hair color information, and body height information;
2. From the person's groups of face images, extract Haar-like features and train a dedicated face recognition classifier with an SVM algorithm, obtaining that person's face classifier;
3. From the person's groups of skin/hair color information, accumulate the samples to obtain that person's skin/hair color mixture-of-Gaussians model;
4. From the person's groups of body height information, compute that person's mean height and its standard deviation;
5. Store the results of steps 2, 3 and 4 in a database to complete the person's registration, and register all persons to be registered in the same way;
6. After registration is complete, use the Kinect sensor to capture the body features of the current person, compare them with the registered information in the database, and determine the current person's identity from the comparison result.
2. The Kinect-sensor-based identity comprehensive identification method according to claim 1, characterized in that in step one the face-image information of the registered person is acquired as follows:
(1) use the Kinect sensor to collect a depth image and a color image containing the registered person, and recover the person's skeleton joint-point information from the depth data in the depth image, wherein:
the torso part comprises the crown, jaw, chest, belly and hip, denoted C1, C2, C3, C4, C5 in order;
the left-arm part comprises the left fingertip, left wrist, left elbow joint and left shoulder joint, denoted L1, L2, L3, L4 in order;
the right-arm part comprises the right fingertip, right wrist, right elbow joint and right shoulder joint, denoted R1, R2, R3, R4 in order;
the left-leg part comprises the left toe tip, left ankle, left knee joint and left hip joint, denoted E1, E2, E3, E4 in order;
the right-leg part comprises the right toe tip, right ankle, right knee joint and right hip joint, denoted F1, F2, F3, F4 in order;
(2) taking the line through joint points C1 and C2 of the person's skeleton as an axis, apply a human-body segmentation method to extract the head region from the color image as the head image;
(3) apply a face-detection algorithm to judge whether the head image contains a face; if it does, capture the face image as the registered person's face-image information; otherwise the image is considered not to contain a face.
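Step (2)'s head extraction can be approximated by a simple crop around the crown (C1) and jaw (C2) joints once they have been projected to pixel coordinates. The patent does not disclose its segmentation method, so this margin-based box is only an assumed stand-in:

```python
import numpy as np

def crop_head_region(color_img, c1_px, c2_px, margin=0.5):
    """Crop an axis-aligned head box around the crown (C1) and jaw (C2)
    pixel coordinates -- a stand-in for the patent's unspecified
    human-body segmentation step, not the claimed method itself."""
    (x1, y1), (x2, y2) = c1_px, c2_px
    h = abs(y2 - y1)                           # crown-to-jaw height in pixels
    half_w = int((0.5 + margin) * h) or 1      # assumed width relative to that height
    cx = int((x1 + x2) / 2)
    top = max(0, int(min(y1, y2) - margin * h))
    bot = min(color_img.shape[0], int(max(y1, y2) + margin * h))
    left = max(0, cx - half_w)
    right = min(color_img.shape[1], cx + half_w)
    return color_img[top:bot, left:right]
```

The crop is then handed to the face-detection step (3); only crops in which a face is found contribute face-image information.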
3. The Kinect-sensor-based identity comprehensive identification method according to claim 2, characterized in that in step one the skin/hair color information of the registered person is acquired as follows:
(1) convert the registered person's head image from the RGB gamut to the YCbCr gamut, and for each pixel in the head image judge whether its CbCr chrominance component belongs to the basic skin distribution U(Cb, Cr); label the pixel 1 if it does and 0 if it does not;
(2) according to the judgement and labelling result of step (1), take all pixels labeled 1 as one set and compute the mean of their CbCr components and the corresponding covariance matrix as the skin single-Gaussian model, where the CbCr mean is denoted μ1, the covariance matrix σ1^2, and the skin single-Gaussian model N1(μ1, σ1^2);
(3) according to the judgement and labelling result of step (1), take all pixels labeled 0 as one set and compute the mean of their CbCr components and the corresponding covariance matrix as the hair single-Gaussian model, where the CbCr mean is denoted μ2, the covariance matrix σ2^2, and the hair single-Gaussian model N2(μ2, σ2^2).
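Steps (1) to (3) can be sketched as follows. The patent does not specify the base distribution U(Cb, Cr), so the classic Chai–Ngan CbCr box (Cb in [77, 127], Cr in [133, 173]) is assumed here for the labelling step:

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """BT.601 full-range RGB -> (Cb, Cr) per pixel; rgb is (N, 3) float."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([cb, cr], axis=1)

def skin_hair_gaussians(head_pixels_rgb):
    """Label each head pixel skin (1) or non-skin (0) with an assumed
    CbCr box, then fit one 2-D Gaussian (mean + covariance) per label,
    as in claim 3 steps (2) and (3)."""
    cbcr = rgb_to_cbcr(head_pixels_rgb)
    skin = (cbcr[:, 0] >= 77) & (cbcr[:, 0] <= 127) & \
           (cbcr[:, 1] >= 133) & (cbcr[:, 1] <= 173)
    models = {}
    for label, mask in ((1, skin), (0, ~skin)):
        pts = cbcr[mask]
        mu = pts.mean(axis=0)                  # μ1 or μ2
        sigma = np.cov(pts, rowvar=False)      # σ1^2 or σ2^2 (2x2 covariance)
        models[label] = (mu, sigma)
    return models  # {1: skin Gaussian N1, 0: hair Gaussian N2}
```

Accumulating these per-capture parameters over all groups of step one yields the mixed Gaussian model of claim 4.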
4. The Kinect-sensor-based identity comprehensive identification method according to claim 3, characterized in that in step three the skin/hair mixed Gaussian model is N = (μ1, σ1^2, μ2, σ2^2).
5. The Kinect-sensor-based identity comprehensive identification method according to claim 4, characterized in that in step one the body-height information of the registered person is acquired as follows:
(1) divide the joint points of the skeleton into five groups: group 1 is (C1, C2, C3, C4, C5), group 2 is (L1, L2, L3, L4), group 3 is (R1, R2, R3, R4), group 4 is (E1, E2, E3, E4) and group 5 is (F1, F2, F3, F4);
(2) fit a three-dimensional straight line to each joint group by least squares and compute each group's fitting error, denoted Δ1, Δ2, Δ3, Δ4 and Δ5 respectively;
(3) when the errors Δ1, Δ2, Δ3, Δ4 and Δ5 are all below a set threshold T1, the body is considered to be in a straight posture with every joint extended, and the height H is computed as follows:

α = (Δ4 + Δ5) / (Δ1 + Δ2 + Δ3 + Δ4 + Δ5)    (6)

H = α(H1 + max(H2, H3)) + 2(1 − α)max(A1, A2)    (7)

In formulas (1) to (5) (rendered as images in the original and not reproduced here), the elided symbols denote the three-dimensional distances between the joint pairs C1–C2, C2–C3, C3–C4 and C4–C5 of the torso; E1–E2, E2–E3 and E3–E4 of the left leg; F1–F2, F2–F3 and F3–F4 of the right leg; L1–L2, L2–L3, L3–L4 and L4–C3 of the left arm; and R1–R2, R2–R3, R3–R4 and R4–C3 of the right arm.
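The line-fitting test and formulas (6) and (7) can be sketched as follows. Because formulas (1) to (5) are not reproduced in the text, the definitions of H1, H2, H3, A1 and A2 as summed chain lengths of the torso, leg and arm joint distances are an assumption, as is the threshold value:

```python
import numpy as np

def line_fit_error(points):
    """RMS residual of the best least-squares 3-D line through `points`
    (the principal-component line through the centroid)."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                          # dominant direction of the point cloud
    proj = centered @ direction
    residual = centered - np.outer(proj, direction)   # orthogonal component
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))

def body_height(groups, T1=0.02):
    """Height per claim 5: fit a line to each of the five joint groups
    (torso, left arm, right arm, left leg, right leg, in that order);
    if every fitting error is below T1 the posture counts as straight and
    H follows formulas (6)-(7).  H1/H2/H3/A1/A2 are assumed chain lengths."""
    errs = [line_fit_error(g) for g in groups]
    if max(errs) >= T1:
        return None                            # posture not straight enough
    def chain_len(g):
        g = np.asarray(g, float)
        return float(np.linalg.norm(np.diff(g, axis=0), axis=1).sum())
    H1 = chain_len(groups[0])                             # torso chain C1..C5
    H2, H3 = chain_len(groups[3]), chain_len(groups[4])   # leg chains
    A1, A2 = chain_len(groups[1]), chain_len(groups[2])   # arm chains
    alpha = (errs[3] + errs[4]) / (sum(errs) or 1.0)      # formula (6)
    return alpha * (H1 + max(H2, H3)) + 2 * (1 - alpha) * max(A1, A2)  # formula (7)
```

Weighting by the leg-fit errors in formula (6) lets the arm-based estimate take over when the legs are not tracked as cleanly as the torso.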
6. The Kinect-sensor-based identity comprehensive identification method according to claim 5, characterized in that in step six, capturing the human-body features of the current person with the Kinect sensor and comparing them with the registration records of the registered persons in the database to determine the identity of the current person specifically comprises the following steps:
(1) use the Kinect sensor to obtain the body-height information and skin/hair color information of the current person;
(2) compute the height of the current person from the body-height information, query the registration records in the database and traverse the interval [h − 3Δh, h + 3Δh] of each registered person, where h is that person's height mean and Δh the standard deviation, and judge whether any registered person matches the current person's height: if exactly one matches, identify the current person directly as that registered person, end the identification and output the result; if several match, proceed to step (3); if none matches, proceed to step (4);
(3) build the skin/hair mixed Gaussian model of the current person from the skin/hair color information and, within the candidate set of registered persons whose height matched in step (2), judge whether one registered person's skin/hair model uniquely matches the current person's: if a unique match exists, identify the current person as that registered person, end the identification and output the result; otherwise proceed to step (4);
(4) issue a voice instruction requiring the current person to face the Kinect sensor frontally, and acquire the face-image information of the current person;
(5) query the registered persons' information in the database with the face-image information of the current person and judge whether a matching registered person exists: if so, identify the current person as that registered person, end the identification and output the result; if not, identify the current person as a stranger or require him or her to re-register;
wherein the body-height information, skin/hair color information and face-image information of the current person are obtained with the Kinect sensor in the same way as during registration.
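The three-stage comparison of steps (1) to (5) can be sketched as the following cascade. The model distance, the tolerance `model_tol` and the `face_match` callback are illustrative assumptions, not details from the claims:

```python
def identify(person, db, face_match=None, model_tol=10.0):
    """Cascade of claim 6.  `db` maps name -> {"h", "dh", "model"};
    `person` carries the live measurements; `face_match` stands in for the
    registered face classifiers (placeholder); `model_tol` is an assumed
    similarity threshold for the skin/hair model comparison."""
    # Step (2): height gate -- candidates whose [h-3Δh, h+3Δh] covers the person.
    cands = [n for n, r in db.items()
             if r["h"] - 3 * r["dh"] <= person["height"] <= r["h"] + 3 * r["dh"]]
    if len(cands) == 1:
        return cands[0]                        # unique height match
    # Step (3): skin/hair mixed-Gaussian parameters; a unique close match wins.
    if cands:
        def dist(n):                           # crude L1 distance between model tuples
            return sum(abs(a - b) for a, b in zip(db[n]["model"], person["model"]))
        close = [n for n in cands if dist(n) < model_tol]
        if len(close) == 1:
            return close[0]
    # Steps (4)-(5): request a frontal face and query the face classifier.
    if face_match is not None:
        hit = face_match(person.get("face"))
        if hit is not None:
            return hit
    return "unknown"                           # stranger, or re-registration required
```

Cheap cues (height, then color) resolve most cases without asking the person to stop and face the sensor; the voice prompt and face lookup run only when both fail.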
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510862672.8A CN105447466B (en) | 2015-12-01 | 2015-12-01 | A kind of identity integrated recognition method based on Kinect sensor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105447466A true CN105447466A (en) | 2016-03-30 |
CN105447466B CN105447466B (en) | 2019-07-23 |
Family
ID=55557626
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510862672.8A Active CN105447466B (en) | 2015-12-01 | 2015-12-01 | A kind of identity integrated recognition method based on Kinect sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105447466B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103529944A (en) * | 2013-10-17 | 2014-01-22 | 合肥金诺数码科技股份有限公司 | Human body movement identification method based on Kinect |
CN103606093A (en) * | 2013-10-28 | 2014-02-26 | 燕山大学 | Intelligent chain VIP customer service system based on human characteristics |
CN104167016A (en) * | 2014-06-16 | 2014-11-26 | 西安工业大学 | Three-dimensional motion reconstruction method based on RGB color and depth image |
CN104766230A (en) * | 2015-04-21 | 2015-07-08 | 东华大学 | Advertising effect evaluation method based on human skeletal tracking |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105773633A (en) * | 2016-04-14 | 2016-07-20 | 中南大学 | Mobile robot man-machine control system based on face location and flexibility parameters |
CN107392083B (en) * | 2016-04-28 | 2022-05-10 | 松下知识产权经营株式会社 | Identification device, identification method, and recording medium |
CN107392083A (en) * | 2016-04-28 | 2017-11-24 | 松下知识产权经营株式会社 | Identification device, recognition methods, recognizer and recording medium |
CN106599785A (en) * | 2016-11-14 | 2017-04-26 | 深圳奥比中光科技有限公司 | Method and device for building human body 3D feature identity information database |
CN106599785B (en) * | 2016-11-14 | 2020-06-30 | 深圳奥比中光科技有限公司 | Method and equipment for establishing human body 3D characteristic identity information base |
CN106652291A (en) * | 2016-12-09 | 2017-05-10 | 华南理工大学 | Indoor simple monitoring and alarming system and method based on Kinect |
CN106778615A (en) * | 2016-12-16 | 2017-05-31 | 中新智擎有限公司 | A kind of method of identifying user identity, device and service for infrastructure robot |
CN106778615B (en) * | 2016-12-16 | 2019-10-18 | 中新智擎科技有限公司 | A kind of method, apparatus and service for infrastructure robot identifying user identity |
CN106934377A (en) * | 2017-03-14 | 2017-07-07 | 深圳大图科创技术开发有限公司 | A kind of improved face detection system |
CN106934377B (en) * | 2017-03-14 | 2020-03-17 | 新疆智辰天林信息科技有限公司 | Improved human face detection system |
CN108629261A (en) * | 2017-03-24 | 2018-10-09 | 纬创资通股份有限公司 | Remote identity recognition method and system and computer readable recording medium |
CN107192342A (en) * | 2017-05-11 | 2017-09-22 | 广州帕克西软件开发有限公司 | A kind of measuring method and system of contactless build data |
CN107292252A (en) * | 2017-06-09 | 2017-10-24 | 南京华捷艾米软件科技有限公司 | A kind of personal identification method of autonomous learning |
US11157720B2 (en) | 2017-08-31 | 2021-10-26 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method and device for determining path of human target |
US11126828B2 (en) | 2017-08-31 | 2021-09-21 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method and device for recognizing identity of human target |
CN109426785A (en) * | 2017-08-31 | 2019-03-05 | 杭州海康威视数字技术股份有限公司 | A kind of human body target personal identification method and device |
CN109426787A (en) * | 2017-08-31 | 2019-03-05 | 杭州海康威视数字技术股份有限公司 | A kind of human body target track determines method and device |
CN109426785B (en) * | 2017-08-31 | 2021-09-10 | 杭州海康威视数字技术股份有限公司 | Human body target identity recognition method and device |
CN108451534A (en) * | 2018-01-26 | 2018-08-28 | 仰人杰 | A kind of human motion detecting method based on dielectric elastomeric body sensor |
CN108451534B (en) * | 2018-01-26 | 2021-08-27 | 仰人杰 | Human body motion detection method based on dielectric elastomer sensor |
CN108734083A (en) * | 2018-03-21 | 2018-11-02 | 北京猎户星空科技有限公司 | Control method, device, equipment and the storage medium of smart machine |
CN108960078A (en) * | 2018-06-12 | 2018-12-07 | 温州大学 | A method of based on monocular vision, from action recognition identity |
CN109272347A (en) * | 2018-08-16 | 2019-01-25 | 苏宁易购集团股份有限公司 | A kind of statistical analysis technique and system of shops's volume of the flow of passengers |
CN110594856A (en) * | 2019-08-12 | 2019-12-20 | 青岛经济技术开发区海尔热水器有限公司 | Hot water circulation control method and hot water system |
CN110503022A (en) * | 2019-08-19 | 2019-11-26 | 北京积加科技有限公司 | A kind of personal identification method, apparatus and system |
CN110524559A (en) * | 2019-08-30 | 2019-12-03 | 成都未至科技有限公司 | Intelligent human-machine interaction system and method based on human behavior data |
CN110524559B (en) * | 2019-08-30 | 2022-06-10 | 成都未至科技有限公司 | Intelligent man-machine interaction system and method based on personnel behavior data |
CN111064925A (en) * | 2019-12-04 | 2020-04-24 | 常州工业职业技术学院 | Subway passenger ticket evasion behavior detection method and system |
CN111079644A (en) * | 2019-12-13 | 2020-04-28 | 四川新网银行股份有限公司 | Method for recognizing external force to assist photographing based on distance and joint point and storage medium |
CN111199198A (en) * | 2019-12-27 | 2020-05-26 | 深圳市优必选科技股份有限公司 | Image target positioning method, image target positioning device and mobile robot |
CN111199198B (en) * | 2019-12-27 | 2023-08-04 | 深圳市优必选科技股份有限公司 | Image target positioning method, image target positioning device and mobile robot |
CN111292087A (en) * | 2020-01-20 | 2020-06-16 | 北京沃东天骏信息技术有限公司 | Identity verification method and device, computer readable medium and electronic equipment |
CN114582003A (en) * | 2022-04-24 | 2022-06-03 | 慕思健康睡眠股份有限公司 | Sleep health management system based on cloud computing service |
CN115637901A (en) * | 2022-10-09 | 2023-01-24 | 东风汽车集团股份有限公司 | Child lock control system, method and equipment based on OMS |
Also Published As
Publication number | Publication date |
---|---|
CN105447466B (en) | 2019-07-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||