CN106599785B - Method and equipment for establishing human body 3D characteristic identity information base - Google Patents

Method and equipment for establishing human body 3D characteristic identity information base

Info

Publication number
CN106599785B
CN106599785B (application CN201611002201.0A)
Authority
CN
China
Prior art keywords
human body
information
characteristic
identity information
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611002201.0A
Other languages
Chinese (zh)
Other versions
CN106599785A (en)
Inventor
黄源浩
肖振中
许宏淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orbbec Inc
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201611002201.0A priority Critical patent/CN106599785B/en
Publication of CN106599785A publication Critical patent/CN106599785A/en
Application granted granted Critical
Publication of CN106599785B publication Critical patent/CN106599785B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/94: Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95: Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a method and equipment for establishing a human body 3D characteristic identity information base. The method comprises the following steps: collecting an RGBD human body atlas of an individual whose identity information is known; acquiring 3D spatial distribution feature information of the person's human body feature points from the RGBD human body atlas; and tagging the person's identity information to the corresponding 3D spatial distribution feature information to obtain personal information, which is stored to form a human body 3D characteristic identity information base. The equipment comprises a human body acquisition module, a human body information acquisition module and an information base module. Because the 3D spatial distribution feature information of the human body feature points comprises both color information and depth information, recognition is not affected by seasonal changes, the person's clothing, ambient illumination changes and the like, and the accuracy of human body recognition is improved.

Description

Method and equipment for establishing human body 3D characteristic identity information base
Technical Field
The invention relates to the field of identity recognition, and in particular to a method and equipment for establishing a human body 3D characteristic identity information base.
Background
Information security has attracted widespread attention throughout society. The main approach to ensuring information security is to accurately identify the user of the information and then judge, from the identification result, whether the user is authorized to obtain it, thereby keeping the information from being leaked and protecting the user's legitimate rights and interests. Reliable identity recognition is therefore essential.
Face recognition is a biometric technology that identifies a person based on facial feature information, and it is receiving more and more attention as a safer and more convenient personal identification technology. Traditional face recognition is 2D: it carries no depth information and is easily affected by non-geometric appearance changes such as pose, expression, illumination and facial makeup, which makes accurate recognition difficult.
Disclosure of Invention
The invention provides a method and equipment for establishing a human body 3D characteristic identity information base, which can solve the problem of low human body identification accuracy in the prior art.
In order to solve the above technical problem, the invention adopts the following technical scheme: a method for establishing a human body 3D characteristic identity information base, comprising the steps of: collecting an RGBD human body atlas of an individual whose identity information is known; acquiring 3D spatial distribution feature information of the person's human body feature points from the RGBD human body atlas; and tagging the person's identity information to the corresponding 3D spatial distribution feature information of the human body feature points to obtain personal information, which is stored to form a human body 3D characteristic identity information base.
And the human body 3D characteristic identity information base carries out hierarchical classification management on the identity information.
Wherein the hierarchy includes a personal attribute hierarchy and a group attribute hierarchy.
Wherein the RGBD human body atlas is an RGBD human body image sequence. The step of acquiring the 3D spatial distribution feature information of the person's human body feature points from the RGBD human body atlas further comprises acquiring human body dynamic feature information from the RGBD human body image sequence. The storing step then becomes: tagging the person's identity information to both the corresponding human body 3D feature information and the human body dynamic feature information to obtain personal information, which is stored to form the human body 3D characteristic identity information base.
Wherein, after the step of identifying the identity information of the individual to the human body 3D feature information corresponding to the individual to obtain the personal information and storing the personal information to form the human body 3D feature identity information base, the method further comprises: and carrying out human body recognition training on the human body 3D characteristic identity information base.
Wherein, the step of performing human body recognition training on the human body 3D characteristic identity information base comprises: collecting an RGBD human body atlas of a tester with known identity information; acquiring 3D space distribution characteristic information of the human body characteristic points of the test person from the RGBD human body atlas of the test person; comparing the acquired 3D space distribution characteristic information of the human body characteristic points of the test person with the 3D space distribution characteristic information of the human body characteristic points in the human body 3D characteristic identity information base; and if the comparison result is correct, storing the RGBD human body atlas of the test person, the corresponding 3D space distribution characteristic information of the human body characteristic points and the identity information into the human body 3D characteristic identity information base.
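The recognition-training loop above (collect a tester's atlas, extract features, compare against the base, store on a correct comparison) can be sketched as follows. The library layout, the Euclidean distance metric and the threshold are illustrative assumptions, not specified by the patent.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_step(library, person_id, features, atlas, threshold=1.0):
    """One recognition-training pass: compare the tester's 3D feature
    vector against every stored person; if the closest entry is the
    tester's known identity (a correct comparison result), store the
    new atlas and feature vector under that identity."""
    best_id, best_dist = None, float("inf")
    for pid, entry in library.items():
        for vec in entry["features"]:
            d = euclidean(vec, features)
            if d < best_dist:
                best_id, best_dist = pid, d
    if best_id == person_id and best_dist <= threshold:
        library[person_id]["features"].append(features)
        library[person_id]["atlases"].append(atlas)
        return True
    return False
```

On a wrong match the sample is simply rejected here; the patent leaves the failure branch open.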
The test person comprises a person with personal information stored in the human body 3D characteristic identity information base and a person without personal information stored in the human body 3D characteristic identity information base.
Wherein the step of collecting the individual RGBD body atlas further comprises: collecting the RGBD face atlas of the person; the step of obtaining the 3D spatial distribution characteristic information of the human body feature points of the person through the RGBD human body atlas further includes: acquiring 3D space distribution characteristic information of the human face characteristic points of the person through the RGBD human face atlas; the step of identifying the identity information of the individual to the 3D space distribution feature information of the human body feature point corresponding to the individual to obtain the individual information, and storing the individual information to form a human body 3D feature identity information base comprises the following steps: and identifying the identity information to the 3D space distribution characteristic information of the human face characteristic points and the 3D space distribution characteristic information of the human body characteristic points to obtain personal information, and storing the personal information to form a human body 3D characteristic identity information base.
In order to solve the technical problem, the invention adopts another technical scheme that: the equipment for establishing the human body 3D characteristic identity information base comprises a human body acquisition module, a human body information acquisition module and an information base module; the human body acquisition module is used for acquiring an individual RGBD human body atlas, wherein the identity information of the individual is known; the human body information acquisition module is connected with the human body acquisition module and used for acquiring the 3D space distribution characteristic information of the human body characteristic points of the person through the RGBD human body atlas; the information base module comprises a storage module, wherein the storage module is connected with the human body information acquisition module and is used for identifying the identity information of the individual to the 3D space distribution feature information of the human body feature points corresponding to the individual to obtain the individual information and storing the individual information to form a human body 3D feature identity information base.
The information base module further comprises a management module, the management module is connected with the storage module, and the management module is used for carrying out hierarchical classification management on the identity information.
Wherein the hierarchy includes a personal attribute hierarchy and a group attribute hierarchy.
The RGBD human body atlas collected by the human body collecting module is an RGBD human body image sequence; the device also comprises a dynamic information acquisition module which is connected with the human body acquisition module and used for acquiring human body dynamic characteristic information according to the RGBD human body image sequence; the storage module is further connected with the dynamic information acquisition module and is used for identifying the identity information of the individual to the human body 3D characteristic information and the human body dynamic characteristic information corresponding to the individual to acquire individual information and storing the individual information to form a human body 3D characteristic identity information base.
The device further comprises a training module, wherein the training module is connected with the human body acquisition module, the human body information acquisition module and the information base module and used for carrying out human body identification training on the human body 3D characteristic identity information base.
The training module comprises a control module and a comparison module; the control module is connected with the human body acquisition module and the human body information acquisition module, and is used for controlling the human body acquisition module to acquire an RGBD human body atlas of a tester with known identity information, and controlling the human body information acquisition module to acquire 3D space distribution characteristic information of human body characteristic points of the tester from the RGBD human body atlas of the tester; the comparison module is connected with the control module and is used for comparing the acquired 3D space distribution characteristic information of the human body characteristic points of the test person with the 3D space distribution characteristic information of the human body characteristic points in the human body 3D characteristic identity information base; the storage module is further configured to store the RGBD human body atlas of the test person, the corresponding 3D spatial distribution feature information of the human body feature points, and the identity information into the human body 3D feature identity information base when the comparison result is correct.
The test person comprises a person with personal information stored in the human body 3D characteristic identity information base and a person without personal information stored in the human body 3D characteristic identity information base.
The equipment also comprises a face acquisition module and a face information acquisition module; the face acquisition module is used for acquiring the RGBD face atlas of the person; the face information acquisition module is connected with the face acquisition module and used for acquiring the 3D space distribution characteristic information of the face characteristic points of the person through the RGBD face atlas; the storage module is further connected with the face information acquisition module and is used for identifying the identity information to the 3D space distribution characteristic information of the face characteristic points and the 3D space distribution characteristic information of the human body characteristic points to obtain personal information, and storing the personal information to form a human body 3D characteristic identity information base.
The invention has the following beneficial effects. Different from the prior art, the invention acquires the 3D spatial distribution feature information of the human body feature points from an RGBD human body atlas, tags the personal identity information to that feature information and stores it, forming a human body 3D characteristic identity information base for human body recognition. Because the 3D spatial distribution feature information of the human body feature points includes both color information and depth information, a human skeleton can be established, so the human body information in the base is more comprehensive and recognition is more accurate. And because the stored human body information is 3D information, recognition is not affected by seasonal changes, the person's clothing, ambient illumination changes and the like, which improves the accuracy of human body recognition.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for establishing a human body 3D characteristic identity information base according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a human body 3D characteristic identity information base according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a method for establishing a human body 3D characteristic identity information base according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of identity information hierarchical classification management of a human body 3D characteristic identity information base according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating a method for establishing a human body 3D characteristic identity information base according to a third embodiment of the present invention;
FIG. 6 is a schematic flow chart of step S34 in FIG. 5;
fig. 7 is a schematic diagram of a human body 3D feature identity information base according to an embodiment of the present invention during recognition training;
fig. 8 is a schematic diagram of another human body 3D feature identity information base according to an embodiment of the present invention during recognition training;
fig. 9 is a schematic diagram of a human body 3D feature identity information base according to another embodiment of the present invention during recognition training;
fig. 10 is a flowchart illustrating a method for establishing a human body 3D characteristic identity information base according to a fourth embodiment of the present invention;
fig. 11 is a schematic structural diagram of an apparatus for creating a human body 3D feature identity information base according to a first embodiment of the present invention;
fig. 12 is a schematic structural diagram of an apparatus for creating a human body 3D feature identity information base according to a second embodiment of the present invention;
fig. 13 is a schematic structural diagram of an apparatus for creating a human body 3D feature identity information base according to a third embodiment of the present invention;
fig. 14 is a schematic structural diagram of an apparatus for creating a human body 3D feature identity information base according to a fourth embodiment of the present invention;
fig. 15 is a schematic structural diagram of an entity apparatus of an apparatus for creating a human body 3D feature identity information base according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for establishing a human body 3D feature identity information base according to a first embodiment of the present invention.
The method for establishing the human body 3D characteristic identity information base comprises the following steps:
S11: An RGBD human body atlas of an individual is acquired, where the identity information of the individual is known.
In step S11, the RGBD human body atlas may be acquired with a Kinect sensor. The atlas contains both color information (RGB) and depth information (Depth) of the human body, adding depth on top of a conventional 2D image.
The personal identity information may include personal basic information such as the name, sex, age, nationality, native place, contact address, work unit, department, unit address, etc. of the person.
In some embodiments, when multiple persons are present in the camera's field of view, RGBD body images of all of them are acquired.
S12: Acquire the 3D spatial distribution feature information of the person's human body feature points from the RGBD human body atlas.
Specifically, step S12 includes:
s121: and collecting human body characteristic points through the RGBD human body image.
Specifically, the present embodiment performs the collection of the human body feature points by collecting human body parts, wherein the human body parts include: one or more of a torso, limbs, and a head.
The feature points may be acquired by various methods, for example, by manually marking feature points of the face, such as the eyes, nose, and other five sense organs, the cheek, the mandible, and the edge thereof, or by determining the feature points of the face by a face feature point marking method compatible with RGB (2D), or by automatically marking the feature points.
For example, automatically marking feature points requires three steps:
(I) Segmenting the human body. This embodiment segments the moving human body with a method combining inter-frame difference and background difference. One frame of the RGBD sequence is selected in advance as the background frame, and a Gaussian model is established for each of its pixels. An inter-frame difference is then computed between two adjacent frames to separate background points from changed regions (the changed region of the current frame contains both the newly exposed background and the moving object). Model fitting between the changed region and the corresponding region of the background frame distinguishes the exposed area from the moving object, and finally the shadow is removed from the moving object, so that a shadow-free moving object is segmented. When updating the background, points classified as background by the inter-frame difference are updated according to a fixed rule; points classified as exposed area by the background difference are updated at a higher rate; and the region corresponding to the moving object is not updated. This method yields a fairly clean segmentation target.
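A minimal sketch of the combined inter-frame / background difference classification, assuming grayscale frames as NumPy arrays. The single fixed threshold and boolean combination stand in for the per-pixel Gaussian model and the model-fitting step described above.

```python
import numpy as np

def segment_moving_body(prev_frame, curr_frame, background, thresh=15):
    """Classify pixels of the current frame. The inter-frame difference
    flags changed pixels; the background difference then splits the
    changed region into the moving object (also differs from the
    background frame) and the newly exposed area (matches it)."""
    inter = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > thresh
    back = np.abs(curr_frame.astype(int) - background.astype(int)) > thresh
    moving = inter & back    # changed and unlike the background frame
    exposed = inter & ~back  # changed but matching the background frame
    return moving, exposed
```

Shadow removal, which the text applies as a final step, would operate on the `moving` mask afterwards.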
(II) Extracting and analyzing the contour. After the binarized image is acquired, the contour is extracted with a classical edge detection algorithm, for example the Canny algorithm. The Canny edge detection operator reflects the mathematical characteristics of an optimal edge detector: it has a good signal-to-noise ratio, locates different types of edges well, rarely produces multiple responses to a single edge, and strongly suppresses false edge responses. Once the segmentation field has been obtained by the segmentation algorithm, all moving objects of interest are contained in the segmented regions; extracting edges with the Canny operator only inside those regions both greatly limits background interference and effectively improves running speed.
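As an illustration of the edge-extraction step, here is a bare gradient-magnitude detector. It is only the first stage of the Canny operator named above, which additionally applies Gaussian smoothing, non-maximum suppression and hysteresis thresholding.

```python
import numpy as np

def gradient_edges(img, thresh=50.0):
    """Central-difference gradient magnitude, thresholded into a binary
    edge map: the gradient stage of a Canny-style detector."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[1:-1, 1:-1] = img[1:-1, 2:] - img[1:-1, :-2]   # d/dx
    gy[1:-1, 1:-1] = img[2:, 1:-1] - img[:-2, 1:-1]   # d/dy
    return np.hypot(gx, gy) > thresh
```

In practice this would be applied only inside the segmented body region, as the text suggests, by masking `img` first.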
(III) Automatically marking the joints. After the moving target is obtained by the difference method and its contour extracted by the Canny operator, the human body target is further analyzed with the 2D ribbon model of Maylor K. Leung and Yee-Hong Yang. The model divides the front of the body into different regions; for example, the body is described with 5 U-shaped regions representing the head and the four limbs.
Thus, by finding the 5 U-shaped body end points, the approximate location of the body can be determined. Based on the extracted contour, the required information is obtained by vector contour compression, which preserves the most prominent extremity features and compresses the human contour into a fixed shape, for example one with 8 fixed end points: 5 U-shaped points and 3 inverted-U points, a form whose obvious features simplify further computation on the contour. The contour can be compressed with a distance algorithm over adjacent end points on the contour, iterating until only 8 end points remain.
After the compressed contour is obtained, the feature points can be automatically labeled by adopting the following algorithm:
(1) Determine the U-shaped body end points. Given a reference length M, vectors longer than M are treated as parts of the body contour, and shorter vectors are ignored. Starting from some point on the vectorized contour, find a vector longer than M, call it Mi, and then the next such vector, Mj. If the included angle from Mi to Mj lies within a certain range (0 to 90 degrees; a positive angle indicates a convex corner), the corner is taken as a U end point and the two vectors are recorded. This continues until 5 U end points are found.
(2) Determine the end points of the three inverted U shapes. Same as step (1), except that the sign condition on the included angle is changed from positive to negative.
(3) The positions of the head, hands and feet are easily obtained from the U and inverted-U end points. Each joint point can then be determined from the physiological shape of the body: the width and length of the trunk are determined from where the arms meet the body and where the head and legs meet; the neck and waist lie at 0.75 and 0.3 of the trunk respectively; the elbows are the midpoints of the shoulders and hands, and the knees the midpoints of the waist and feet. The approximate position of each feature point is thereby defined.
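The proportional joint placement in step (3) can be sketched as below. The elbow and knee midpoints follow the text directly; which end of the trunk the 0.75 / 0.3 ratios are measured from is our assumption (the bottom here).

```python
def lerp(a, b, t):
    """Point a fraction t of the way from 2-D point a to point b."""
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def estimate_joints(trunk_bottom, trunk_top, shoulder, hand, foot):
    """Derive neck, waist, elbow and knee from end-point detections:
    neck and waist at 0.75 and 0.3 of the trunk, elbow the midpoint of
    shoulder and hand, knee the midpoint of waist and foot."""
    neck = lerp(trunk_bottom, trunk_top, 0.75)
    waist = lerp(trunk_bottom, trunk_top, 0.3)
    elbow = lerp(shoulder, hand, 0.5)
    knee = lerp(waist, foot, 0.5)
    return {"neck": neck, "waist": waist, "elbow": elbow, "knee": knee}
```

The same interpolation applies symmetrically to the left and right limbs.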
S122: and establishing a human body 3D grid according to the human body feature points.
S123: and measuring the characteristic value of the human body characteristic points according to the human body 3D grid and calculating the 3D space distribution characteristic information of the human body characteristic points.
The feature values in step S123 include one or more of height, arm length, shoulder width, palm size, and head size. The spatial position of each human body feature point can be computed from the human body 3D grid, from which the topological relations among the feature points and the three-dimensional body shape information, i.e. the 3D spatial distribution feature information of the human body feature points, are obtained. In later human body recognition, the body can be identified through this 3D spatial distribution feature information.
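Once the feature points carry 3-D coordinates from the body grid, the scalar feature values named above reduce to distances between points. A sketch, with illustrative point names that are not from the patent:

```python
import math

def dist3d(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def body_feature_values(pts):
    """Height, arm length and shoulder width measured on the 3D grid's
    feature points (palm and head size would be computed similarly)."""
    return {
        "height": dist3d(pts["head_top"], pts["foot"]),
        "arm_length": dist3d(pts["shoulder"], pts["hand"]),
        "shoulder_width": dist3d(pts["l_shoulder"], pts["r_shoulder"]),
    }
```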
S13: and identifying the identity information of the person to the 3D space distribution characteristic information of the human body characteristic point corresponding to the person to obtain personal information, and storing the personal information to form a human body 3D characteristic identity information base.
The personal information in this embodiment includes personal identity information and 3D spatial distribution characteristic information of the human body characteristic points, and after the identity information is identified to the 3D spatial distribution characteristic information of the human body characteristic points, when face recognition is performed in a later period, people with the same 3D spatial distribution characteristic information of the human body characteristic points are recognized, that is, the identity information corresponding to the 3D spatial distribution characteristic information of the human body characteristic points can be obtained. In some embodiments, the personal information includes identity information and 3D spatial distribution feature information of human body feature points and a corresponding RGBD atlas, as shown in fig. 2, fig. 2 is a schematic diagram of a human body 3D feature identity information library provided in an embodiment of the present invention.
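One way to picture an entry of the information base described above (identity information tagged to feature information, optionally with the RGBD atlas) is as a keyed record; the field names are ours, not the patent's:

```python
from dataclasses import dataclass, field

@dataclass
class PersonalRecord:
    """One entry of the human body 3D feature identity information
    base: identity information tagged to the person's 3D spatial
    distribution feature information, plus (optionally) the raw
    RGBD atlas it was derived from."""
    identity: dict            # name, sex, age, nationality, ...
    feature_info: list        # 3D spatial distribution feature values
    rgbd_atlas: list = field(default_factory=list)

def enroll(library, person_id, identity, feature_info, atlas=None):
    """Store one person, keyed by a unique id, forming the base."""
    library[person_id] = PersonalRecord(identity, feature_info, atlas or [])
    return library[person_id]
```

A recognition query then matches an unknown feature vector against `feature_info` and returns the record's `identity`.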
Different from the prior art, the invention acquires the 3D spatial distribution feature information of the human body feature points from an RGBD human body atlas, tags the personal identity information to that feature information and stores it, forming a human body 3D characteristic identity information base for human body recognition. Because the 3D spatial distribution feature information of the human body feature points includes both color information and depth information, a human skeleton can be established, so the human body information in the base is more comprehensive and recognition is more accurate. And because the stored human body information is 3D information, recognition is not affected by seasonal changes, the person's clothing, ambient illumination changes and the like, which improves the accuracy of human body recognition.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for establishing a human body 3D feature identity information base according to a second embodiment of the present invention.
S21: acquiring an RGBD body image sequence of a person, wherein identity information of the person is known
S22: and acquiring 3D space distribution characteristic information of the personal human body characteristic points through the RGBD human body image and acquiring human body dynamic characteristic information according to the RGBD human body image sequence.
The step S22 is executed according to the dynamic characteristics of the human body' S standing, walking, running, etc. and the process of the specific dynamic behavior, such as the finger-palm crossing process and result, the two-arm crossing process and result, etc.
Specifically, in the present embodiment, by using a dynamically continuous RGBD image sequence, the motion posture of the human body can be detected and attribute items for feature recognition can be added. For example: if the target is a rigid article such as a cup or an automobile, it continuously appears as a rigid body in the successive RGBD images and is identified as a rigid body; if the target is an animal such as a human, cat or dog, the target is tracked through the continuous dynamic RGBD images, the non-rigid body is detected, and accurate human body recognition is further performed using techniques such as human body feature recognition.
In some embodiments, the identification authentication can be performed by collecting animal or human characteristics such as voice, body temperature and the like, so that the authentication identification system is prevented from being cracked by images, sound recordings and the like, and the identification accuracy is improved.
To acquire the dynamic feature information of the human body, human body motion detection is required first, that is, the process of determining the position, size and posture of the moving human body in the acquired image sequence. There are various methods for detecting human body motion, for example, the Orthogonal Gaussian-Hermite Moments (OGHMs) detection method, whose basic principle is: judge whether a pixel belongs to the foreground motion region by comparing the degree of change of the corresponding pixel value between temporally consecutive image frames.
An input image sequence is represented by {f(x, y, t) | t = 0, 1, 2, …}, where f(x, y, t) is the image at time t and x, y are the coordinates of a pixel in the image. Let the Gaussian function be g(t, σ) and let B_n(t) be the product of g(t, σ) and the n-th order Hermite polynomial; the n-order OGHM can then be represented as:

$$M_n(x, y, t) = B_n(t) * f(x, y, t) = \sum_{i=0}^{n} a_i \left[ \frac{\partial^i f(x, y, t)}{\partial t^i} * g(t, \sigma) \right] \tag{1}$$
where the coefficients a_i are determined by the standard deviation σ of the Gaussian function. By the properties of the convolution operation, the n-order OGHM can be viewed as the convolution of a weighted sum of the temporal derivatives of the image sequence function with a Gaussian function. The larger the derivative value at a point, the larger the change of the pixel value at that position over time, and the point is assumed to belong to the motion region; this provides the theoretical basis for the OGHMs method to detect moving objects. In addition, from equation (1), the basis functions of OGHMs are

$$B_n(t) = \sum_{i=0}^{n} a_i \, g^{(i)}(t, \sigma)$$

which is a linear combination of derivatives of the Gaussian function of different orders. Because the Gaussian function itself has the ability to smooth noise, OGHMs can also effectively filter out various types of noise.
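The core idea, filtering each pixel's time series with a derivative-of-Gaussian kernel so that large responses flag motion, can be sketched in a few lines of numpy. This uses only the first-order basis term rather than the full OGHM expansion, and all names and parameter values are illustrative:

```python
import numpy as np

def gaussian_derivative_kernel(sigma, radius):
    """First-order derivative-of-Gaussian kernel: a single basis term of
    the linear combination described above."""
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return -t / sigma**2 * g

def temporal_response(pixel_series, sigma=1.0, radius=3):
    """Convolve one pixel's time series with the kernel; a large |response|
    suggests the pixel belongs to a foreground motion region."""
    k = gaussian_derivative_kernel(sigma, radius)
    return np.convolve(pixel_series, k, mode="same")

static = np.full(20, 100.0)                                        # no motion
moving = np.concatenate([np.full(10, 100.0), np.full(10, 200.0)])  # step change
# Trim the boundary samples, where np.convolve's zero-padding distorts values.
static_peak = np.abs(temporal_response(static)[3:-3]).max()
moving_peak = np.abs(temporal_response(moving)[3:-3]).max()
```

Because the kernel is odd-symmetric, a constant (static) pixel yields a near-zero response, while the step change of the "moving" pixel yields a large one.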
For example, the Temporal Difference method extracts the motion region in an image by thresholding the pixel-wise difference between several temporally adjacent frames of a continuous image sequence. Early methods used the difference between two adjacent frames to obtain moving objects. Let F_k be the gray-level data of the k-th frame in the image sequence and F_{k+1} the gray-level data of the (k+1)-th frame; the difference image of two temporally adjacent frames is then defined as:

$$D_k(x, y) = \begin{cases} 1, & |F_{k+1}(x, y) - F_k(x, y)| > T \\ 0, & \text{otherwise} \end{cases}$$

where T is the threshold. If the difference is larger than T, the gray level of the region changes significantly, i.e. it is the moving target region to be detected.
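The thresholded frame difference above translates directly into numpy; the frames and threshold below are toy values for illustration:

```python
import numpy as np

def frame_difference(f_k, f_k1, threshold):
    """Binary motion mask D_k: 1 where |F_{k+1}(x, y) - F_k(x, y)| > T."""
    diff = np.abs(f_k1.astype(int) - f_k.astype(int))
    return (diff > threshold).astype(np.uint8)

f_k  = np.array([[10, 10], [10, 10]], dtype=np.uint8)  # frame k
f_k1 = np.array([[10, 90], [10, 10]], dtype=np.uint8)  # frame k+1: one pixel changed
mask = frame_difference(f_k, f_k1, threshold=30)
```

Casting to a signed type before subtracting avoids the unsigned-integer wraparound that `uint8` arithmetic would otherwise cause.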
As another example, the Optical Flow method is based on the assumption that changes in image gray level are due solely to the motion of the object or background; that is, the gray levels of the object and the background themselves do not change with time. Motion detection based on the optical flow method uses the velocity field that a moving object exhibits in the image over time, and estimates the optical flow corresponding to the motion according to certain constraint conditions.
For another example, the Background Subtraction method first constructs a background model image, then takes the difference between the current frame image and the background frame image, and detects moving objects by thresholding the difference result. Suppose the background frame image at time t is F_0 and the corresponding current frame image is F_t; the difference between the current frame and the background frame can then be expressed as:

$$D_t(x, y) = \begin{cases} 1, & |F_t(x, y) - F_0(x, y)| > T \\ 0, & \text{otherwise} \end{cases}$$

If the gray-value difference of corresponding pixels between the current frame image and the background frame image is greater than the threshold, the corresponding value in the resulting binary image is 1, and the region is determined to belong to the moving target.
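A compact numpy version of the same rule, paired with a running-average background update. The running average is one common, simple choice of background model, assumed here for illustration; the patent does not prescribe it:

```python
import numpy as np

def background_subtract(frame, background, threshold):
    """Binary mask D_t: 1 where the frame differs from the background by more than T."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    return (diff > threshold).astype(np.uint8)

def update_background(background, frame, alpha=0.05):
    """Running-average background update: slowly absorb scene changes."""
    return (1 - alpha) * background + alpha * frame

bg = np.full((2, 2), 50.0)                       # background model F_0
frame = np.array([[50.0, 200.0], [50.0, 50.0]])  # current frame F_t
mask = background_subtract(frame, bg, threshold=40)
bg_updated = update_background(bg, frame, alpha=0.05)
```

With alpha = 0.05 the foreground pixel only nudges the background model, so a briefly passing object does not get absorbed into it.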
After the human motion posture is detected, it is represented by a Motion History Image (MHI) and a Motion Energy Image (MEI). The MEI reflects the area and intensity of the human motion gesture, while the MHI reflects, to a certain extent, how the motion gesture occurs and how it changes over time.
The binary image MEI is generated as follows:

$$E_\tau(x, y, n) = \bigcup_{i=0}^{\tau-1} B(x, y, n-i)$$

where B(x, y, n) is a binary image sequence representing the region where the human motion gesture occurs, and the parameter τ represents the duration of the motion gesture. Thus, the MEI describes the area where the whole motion gesture occurs.
The MHI is generated as follows:

$$H_\tau(x, y, n) = \begin{cases} \tau, & B(x, y, n) = 1 \\ \max(0,\; H_\tau(x, y, n-1) - 1), & \text{otherwise} \end{cases}$$

The motion history image MHI reflects not only the shape but also the brightness distribution and the direction in which the human motion gesture occurs. In the MHI, the luminance value of each pixel is proportional to the duration of motion at that position: the pixels of the most recently occurring motion gesture have the largest luminance values, and the change in gray level reflects the direction in which the motion gesture occurs.
The action posture model template is statistically described using invariant moments. The invariant moments are: M'_k = lg|M_k|, where k = 1, 2, …, 7. Denote the feature vector as F = [M'_1, M'_2, …, M'_7]. Let F_1, F_2, …, F_M represent the M human motion posture images in the image library, and denote the feature vector corresponding to F_i as F_i = [M'_{i1}, M'_{i2}, …, M'_{i7}]. In this way, an M × 7 feature matrix F = {M'_{ij}} of the motion postures can be obtained from the human motion posture image library, where M'_{ij} is the j-th invariant moment of F_i. The mean vector and covariance matrix of the feature vector set of the M human motion posture images can thus be obtained, establishing the statistical description of the motion posture template.
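The template statistics can be sketched as follows. The raw moment values are randomly generated stand-ins for the 7 invariant moments of M = 50 posture images (computing actual image moments is out of scope here), so only the shapes and the log transform M'_k = lg|M_k| mirror the description above:

```python
import numpy as np

def moment_features(raw_moments):
    """M'_k = lg|M_k| for the 7 invariant moments of one posture image."""
    return np.log10(np.abs(raw_moments))

# Stand-in "library": 50 posture images x 7 raw invariant moments each.
rng = np.random.default_rng(0)
raw = rng.uniform(0.01, 10.0, size=(50, 7))

F = np.array([moment_features(m) for m in raw])  # M x 7 feature matrix
mu = F.mean(axis=0)                               # mean vector of the template
C = np.cov(F, rowvar=False)                       # 7 x 7 covariance matrix
```

The pair (mu, C) is the statistical template that the Mahalanobis-distance matching described later consumes.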
S23: and identifying the identity information of the person to the human body 3D characteristic information and the human body dynamic characteristic information corresponding to the person to obtain personal information, and storing the personal information to form a human body 3D characteristic identity information base.
The human body 3D characteristic information acquired from the human body RGBD image sequence in the embodiment not only comprises the 3D space distribution characteristic information of the human body characteristic points, but also comprises the human body dynamic characteristic information, and the attribute items of characteristic identification are increased.
In one embodiment, the identity information is subjected to hierarchical classification management by a human body 3D feature identity information base, wherein the hierarchy comprises a personal attribute hierarchy and a group attribute hierarchy.
For example, the human body 3D feature identity information base extracts 3D spatial distribution feature information of human body feature points and common features of human body dynamic feature information from 3D feature information of population with population attributes.
Referring to fig. 4, fig. 4 is a schematic diagram of identity information hierarchical classification management of a human body 3D feature identity information base according to an embodiment of the present invention.
For example, the personal attribute hierarchy includes a collection of unique information such as a person's name, gender, age and identification number. The group attribute hierarchy includes non-unique group-level information such as the elderly, children and young people; or staff of the same company and staff of the same office building; or men and women; or Asians, Europeans and Africans.
As shown in fig. 4, person a and person B are both children, person C, person D, and person E are all young people, and person C is a worker of office building "1" and person D and person E are workers of office building "2".
It is understood that the above described hierarchical division of the identity information is only one hierarchical division in this embodiment, and other embodiments may also have other divisions.
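One way the two levels could be stored and queried is sketched below; the dictionary layout, names and group keys are all hypothetical, chosen to mirror the fig. 4 example:

```python
# Hypothetical two-level store: unique personal attributes plus
# non-unique group attributes per person (mirroring fig. 4).
people = {
    "A": {"personal": {"name": "A"}, "groups": {"age": "child"}},
    "B": {"personal": {"name": "B"}, "groups": {"age": "child"}},
    "C": {"personal": {"name": "C"}, "groups": {"age": "young", "building": "1"}},
    "D": {"personal": {"name": "D"}, "groups": {"age": "young", "building": "2"}},
    "E": {"personal": {"name": "E"}, "groups": {"age": "young", "building": "2"}},
}

def members_of(group_key, group_value):
    """Query a group-attribute level without touching personal attributes."""
    return sorted(p for p, rec in people.items()
                  if rec["groups"].get(group_key) == group_value)

building_2_staff = members_of("building", "2")
children = members_of("age", "child")
```

A query like `members_of("building", "2")` answers the access-control question in the later examples without ever reading the personal-attribute level.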
The hierarchical classification management can make recognition more convenient and faster.
For example, according to the hierarchy division, a common feature of children is extracted as a, and a common feature of adults is extracted as b. Suppose some rides in an amusement park are open only to children; identity recognition is needed at the entrance, and whether the person to be checked is a child can be judged by acquiring information such as the individual's skeleton and gait. When identifying person A, the recognition system acquires the RGBD human body image sequence of person A, thereby obtaining the 3D spatial distribution feature information of person A's human body feature points and the dynamic feature information of person A's walking posture. Feature a can be extracted from this information, so person A is determined to be a child and is allowed to pass. When identifying person C, the recognition system acquires the RGBD human body image sequence of person C; feature a cannot be extracted from the 3D spatial distribution feature information of person C's human body feature points and the dynamic feature information of the walking posture, so person C is judged not to be a child and is not allowed to pass. In this recognition process, only partial information is needed to determine whether the person to be checked is a child; the identity information of the person's specific personal attributes does not need to be identified.
For example, an identity recognition system used when turning a television on or off only needs to identify whether the person to be checked is an adult or a child, so that different permissions can be granted to different people to watch different television programs. When the system obtains only the RGBD image sequence of the lower-limb walking posture of person B, it can acquire the 3D spatial distribution feature information of the lower-limb feature points of person B and the dynamic feature information of the walking posture from that sequence. Feature a can be extracted from this information, so person B is judged to be a child and the television automatically opens the child permissions. When the system obtains the RGBD image sequence of the lower-limb walking posture of person D, it likewise acquires the 3D spatial distribution feature information of the lower-limb feature points of person D and the dynamic feature information of the walking posture; feature b can be extracted from this information, so person D is judged to be an adult and the television automatically opens the adult permissions.
The identification process can judge whether the person to be detected is an adult or a child through less 3D space distribution characteristic information or human dynamic characteristic information of partial human characteristic points, and does not need to acquire more information to judge who the person to be detected is.
For another example, in the access control system of an office building, the system of office building "1" only needs to determine whether the identified person has the group attribute of staff of office building "1"; it does not need to determine personal attributes such as the person's name or age. Therefore, the RGBD human body images acquired by the access control system may not be very fine; for example, only the RGBD image sequence of person C's moving lower limbs is acquired, or only a whole-body RGBD image of person D is acquired. The access control system can then invoke the group-attribute-level identity information in the human body 3D feature identity information base. For example, the system acquires the 3D spatial distribution feature information of the lower-limb feature points of person C from the image sequence of person C's lower limbs, together with the dynamic feature information of person C's walking posture. Although it cannot determine who person C is, it can tell from the information stored in the base that among the staff of office building "1" there is an individual with the same lower-limb feature point 3D spatial distribution information and the same walking posture dynamic feature information, so person C is allowed to enter office building "1".
Alternatively, the access control system of office building "2" can acquire the whole-body 3D spatial distribution feature information of person D's feature points from a whole-body image of person D, thereby obtaining person D's human skeleton. From the information stored in the human body 3D feature identity information base it can be determined that an individual with the same whole-body feature point 3D spatial distribution information (human skeleton) works in office building "2"; without needing to determine who the person is, the system can allow the person to enter office building "2".
If person D tries to enter office building "1", the access control system of office building "1" acquires the whole-body image sequence of person D, thereby obtaining the 3D spatial distribution feature information of person D's whole-body feature points and the dynamic feature information of the walking posture. Because, according to the information stored in the human body 3D feature identity information base, no one in office building "1" has the same whole-body feature point 3D spatial distribution feature information and walking posture dynamic feature information, person D is not allowed to enter office building "1".
Referring to fig. 5, fig. 5 is a flowchart illustrating a method for establishing a human body 3D feature identity information base according to a third embodiment of the present invention.
S31: an RGBD body atlas of an individual is acquired in which identity information of the individual is known.
S32: and acquiring the 3D space distribution characteristic information of the human body characteristic points of the person through the RGBD human body atlas.
S33: and identifying the identity information of the person to the 3D space distribution characteristic information of the human body characteristic point corresponding to the person to obtain personal information, and storing the personal information to form a human body 3D characteristic identity information base.
S34: and carrying out human body recognition training on the human body 3D characteristic identity information base.
The difference between this embodiment and the first embodiment is that, by adding step S34, performing human body recognition training on the human body 3D feature identity information base can improve the richness of information resources in the human body 3D feature identity information base, thereby improving the accuracy of human body recognition.
As shown in fig. 6, fig. 6 is a schematic flowchart of step S34 in fig. 5. Specifically, step S34 includes:
s341: and collecting an RGBD human body atlas of the test person with known identity information.
In step S341, the test person includes a person whose personal information is already stored in the human body 3D characteristic identity information base and a person whose personal information is not already stored in the human body 3D characteristic identity information base. Wherein, the RGBD human body atlas may be a collection of a plurality of discontinuous RGBD human body images.
The identity information of the test person being known may mean that it is partially or completely known. For example, the personal-attribute-level identity information of the test person is completely or partially known while the group-attribute-level identity information is unknown; or the group-attribute-level identity information is partially or completely known while the personal-attribute-level identity information is unknown; or both the personal-attribute-level and the group-attribute-level identity information of the test person are known.
S342: and acquiring 3D space distribution characteristic information of the human body characteristic points of the test person from the RGBD human body atlas of the test person.
The method for acquiring the 3D spatial distribution feature information of the human body feature points in step S342 is the same as the method in step S12 of the first embodiment, and is not described herein again.
S343: and comparing the acquired 3D space distribution characteristic information of the human body characteristic points of the test person with the 3D space distribution characteristic information of the human body characteristic points in the human body 3D characteristic identity information base.
For example, the acquired 3D spatial distribution feature information of the test person's human body feature points is compared with that of a certain person in the human body 3D feature identity information base. If the similarity between the test person's feature information and the feature information of person X in the base reaches a predetermined threshold, it can be determined that the test person is the person X stored in the base; if the similarity does not reach the predetermined threshold, it is determined that the test person is not a person stored in the base.
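The threshold decision can be sketched as below. The similarity metric here (inverse of mean point-wise distance, mapped into (0, 1]) is a design assumption for illustration; the patent does not fix a particular metric or threshold:

```python
import numpy as np

def feature_similarity(a, b):
    """Illustrative similarity between two (N, 3) feature-point sets:
    inverse of the mean point-wise distance, mapped into (0, 1]."""
    d = np.linalg.norm(a - b, axis=1).mean()
    return 1.0 / (1.0 + d)

def same_person(a, b, threshold=0.9):
    """Decide identity by comparing the similarity to a predetermined threshold."""
    return bool(feature_similarity(a, b) >= threshold)

a = np.array([[0.0, 0.0, 1.0], [0.1, 0.2, 1.1]])  # feature points in the base
b = a + 0.01   # nearly identical measurement of the same person
c = a + 1.0    # clearly different person
is_same = same_person(a, b)
is_diff_rejected = same_person(a, c)
```

In practice the two point sets would need to be put into correspondence and aligned before such a comparison is meaningful.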
For example, when testing a person who has personal information stored in the human body 3D characteristic identity information base, if the comparison result is that the test person corresponds to the personal information stored in the human body 3D characteristic identity information base, it indicates that the comparison result is correct, and the process proceeds to step S344; if the comparison result indicates that the personal information of the test person is not stored in the human body 3D characteristic identity information base, the comparison result is wrong, so that the information stored in the human body 3D characteristic identity information base by the test person needs to be corrected, and the personal information is further enriched.
For another example, when the test person is a person who does not store personal information in the human body 3D characteristic identity information base, if the comparison result is that the information of the test person does not exist in the human body 3D characteristic identity information base, it indicates that the comparison result is correct, and step S344 is performed to collect the personal information of the test person; if the comparison result is that the testing person is a person in the human body 3D characteristic identity information base, the comparison result is wrong, the information of the person in the human body 3D characteristic identity information base needs to be corrected, the information of the person is further enriched, and meanwhile, the personal information of the testing person is also stored in the human body 3D characteristic identity information base, so that the enrichment degree of the information resources of the human body 3D characteristic identity information base is improved.
Specifically, the matching of the human body dynamic feature information can be realized as follows: the similarity between the newly input action gesture and the stored known action gesture templates is measured by the Mahalanobis distance. As long as the calculated Mahalanobis distance is within a specified threshold range, the match is considered successful; if more than one action gesture matches, the one with the minimum distance is selected as the successful match. The Mahalanobis distance is calculated as follows:

$$\gamma^2 = (f - \mu_x)^T C^{-1} (f - \mu_x)$$

where γ is the Mahalanobis distance, f is the invariant moment feature vector of the improved human motion posture image, μ_x is the mean vector of the trained feature vector set, and C is the covariance matrix of the trained feature vector set.
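The matching rule, threshold check plus minimum-distance tie-breaking, can be sketched as follows; the 2-dimensional templates and the threshold value are toy stand-ins for the 7-dimensional (mu, C) posture templates:

```python
import numpy as np

def mahalanobis_sq(f, mu, C):
    """Squared Mahalanobis distance gamma^2 = (f - mu)^T C^{-1} (f - mu)."""
    d = f - mu
    return float(d @ np.linalg.inv(C) @ d)

def match_template(f, templates, threshold):
    """Return the name of the template with the smallest Mahalanobis
    distance within the threshold, or None if no template matches."""
    best, best_d = None, threshold
    for name, (mu, C) in templates.items():
        d = mahalanobis_sq(f, mu, C)
        if d <= best_d:
            best, best_d = name, d
    return best

templates = {
    "walk": (np.array([0.0, 0.0]), np.eye(2)),
    "run":  (np.array([5.0, 5.0]), np.eye(2)),
}
result = match_template(np.array([0.3, -0.2]), templates, threshold=4.0)
```

With identity covariance the Mahalanobis distance reduces to the Euclidean distance; a real posture covariance would weight correlated moment features accordingly.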
S344: and storing the RGBD human body atlas of the test person, the 3D space distribution characteristic information and the identity information of the corresponding human body characteristic points into a human body 3D characteristic identity information base.
The RGBD human body atlas, the 3D space distribution characteristic information and the identity information of the human body characteristic points, collected in the recognition training, of the tester are stored in the corresponding personal information in the human body 3D characteristic identity information base, so that the information resources in the human body 3D characteristic identity information base are richer, and the accuracy of later-stage face recognition is improved.
For example, in a human body 3D feature identity information base which is initially established, part of identity information of 500 persons is identified to an RGBD human body atlas and 3D spatial distribution feature information of human body feature points corresponding to the 500 persons by a manual method, and is stored in the human body 3D feature identity information base. In the process of recognition training, collecting RGBD human body atlas of 5000 or 50000 or even more people for recognition training, marking at least part of personal identity information on the RGBD human body atlas of a tested person, storing a large amount of personal information of the tested person again, and if the tested person is originally in the human body 3D characteristic identity information base, continuously supplementing the RGBD atlas of the person, the 3D space distribution characteristic information of the human body characteristic points and the identity information.
As shown in fig. 7, fig. 7 is a schematic diagram of a human body 3D feature identity information base provided in an embodiment of the present invention during recognition training. The RGBD human body atlas of person G, the 3D spatial distribution feature information of the human body feature points acquired from that atlas, and identity information on personal attributes are stored in the base. During recognition training, the newly acquired RGBD atlas of person G may include human body images from more angles, from which the 3D spatial distribution feature information of more feature points can be acquired. In addition, during recognition training, group-attribute identity information such as the work unit and the building where the work unit is located is attached to the RGBD atlas of person G. The comparison identifies the test person and person G in the base as the same person, so the RGBD atlas, the 3D spatial distribution feature information of the feature points, and the attached group-attribute identity information collected during recognition training are all stored into the personal information of person G in the base, making the personal information of person G in the human body 3D feature identity information base richer.
For example, the RGBD human body atlas of person F and the acquired 3D spatial distribution feature information of the feature points are stored in the base, and information such as person F's name, work unit and office building is manually attached, but the gender and age level of person F cannot be judged from the limited RGBD human body images collected. During recognition training, the RGBD human body image sequence of person F is acquired, and the 3D spatial distribution feature information of more feature points together with human dynamic feature information is acquired from that sequence. The common features of males or females and the common features of young and elderly people stored in the base can then be matched against the newly added feature information; in this embodiment, the common features of women and the common features of young people are extracted, so the gender and age-level identity information of person F is obtained through recognition training and stored into the personal information of person F in the base. As shown in fig. 8, fig. 8 is a schematic diagram of another human body 3D feature identity information base provided in an embodiment of the present invention during recognition training.
For another example, when the human body 3D feature identity information base does not store any personal information of the person H, during the recognition training, an RGBD human body atlas of the person H is acquired, and 3D spatial distribution feature information of the human body feature points is acquired from the RGBD human body atlas, and at least part of the identity information of the person H is identified in the RGBD human body atlas, as shown in fig. 9, fig. 9 is a schematic diagram of the human body 3D feature identity information base provided in the embodiment of the present invention during the recognition training. In the identification training process, the comparison result indicates that the person H is not stored in the human body 3D characteristic identity information base, so that the personal information of the person H, including the RGBD human body atlas, the 3D space distribution characteristic information of the human body characteristic points and the identity information, is stored in the human body 3D characteristic identity information base, and the file of the person H is established in the human body 3D characteristic identity information base.
In other embodiments, the step of performing human body recognition training on the human body 3D feature identity information base in this embodiment may also be added to the second embodiment. During recognition training, the RGBD human body image sequence of the person to be tested is acquired; the 3D spatial distribution feature information of the human body feature points and the human dynamic feature information are acquired from that sequence; this information is compared with the corresponding 3D spatial distribution feature information and human dynamic feature information in the human body 3D feature identity information base; and the RGBD image sequence of the test person, the corresponding 3D spatial distribution feature information of the feature points, the human dynamic feature information and the identity information are stored into the base.
As shown in fig. 10, fig. 10 is a flowchart illustrating a method for establishing a human body 3D feature identity information base according to a fourth embodiment of the present invention.
S41: an RGBD body atlas and an RGBD face atlas of an individual are acquired, wherein identity information of the individual is known.
S42: acquiring 3D space distribution characteristic information of the human body characteristic points of the person through an RGBD human body atlas; and acquiring the 3D space distribution characteristic information of the human face characteristic points of the individual through the RGBD human face atlas.
In step S42, the method for obtaining the 3D spatial distribution feature information of the human body feature points may be the same as any of the embodiments described above, and is not described herein again.
The method for acquiring the 3D space distribution characteristic information of the human face characteristic points comprises the following steps:
(1) Face feature points are collected from the RGBD face image.
After the RGBD face image is obtained, feature points are collected from the face elements on the image, where the face elements include one or more of the eyebrows, eyes, nose, mouth, cheeks, and chin.
The feature points may be obtained in various ways, for example by manually marking feature points of the facial features such as the eyes, nose, cheeks, and mandible and their edges, or by determining the face feature points with a face feature point marking method compatible with RGB (2D) images.
For example, one method for locating the key face feature points selects 9 feature points whose distribution is angle-invariant: the 2 eyeball center points, the 4 eye corner points, the midpoint between the two nostrils, and the 2 mouth corner points. From these, the organ features of the face and the positions of further feature points can easily be derived for use in subsequent recognition algorithms.
When extracting face features, traditional edge detection operators cannot reliably extract features such as the contours of the eyes or mouth, because they cannot effectively organize local edge information. Starting instead from the characteristics of human vision and making full use of both edge and corner features to locate the key face feature points greatly improves the reliability of face feature extraction.
The SUSAN operator is selected to extract the edge and corner features of local regions. By its nature, the SUSAN operator can both detect edges and extract corners. Compared with edge detection operators such as Sobel and Canny, it is therefore better suited to extracting features such as eyes and mouths from a face, especially for automatically locating the eye corner points and mouth corner points.
The following is a brief introduction to the SUSAN operator.
The image is traversed with a circular template. If the difference between the gray value of any other pixel in the template and that of the pixel at the template center (the nucleus) is less than a given threshold, that pixel is considered to have the same (or a similar) gray value as the nucleus; the region composed of pixels satisfying this condition is called the univalue segment assimilating nucleus (USAN) region. Associating each pixel of the image with a local region of similar gray values is the basis of the SUSAN criterion.
During detection, the circular template scans the whole image; the gray value of each pixel in the template is compared with that of the central pixel, and a threshold is used to judge whether the pixel belongs to the USAN region, as in the following formula:
c(r, r0) = 1, if |I(r) − I(r0)| ≤ t;    c(r, r0) = 0, otherwise
In the formula, c(r, r0) is the discriminant function for pixels in the template belonging to the USAN region, I(r0) is the gray value of the template's central pixel (the nucleus), I(r) is the gray value of any other pixel in the template, and t is the gray-difference threshold, which affects the number of detected corner points. Reducing t captures more subtle changes in the image and yields relatively more detections; the threshold t must therefore be determined from factors such as the contrast and noise of the image. The size of the USAN region at a point in the image can be expressed by the following formula:
n(r0) = Σ_r c(r, r0)
Here g is a geometric threshold, which affects the shape of the detected corner points: the smaller g is, the sharper the detected corners. The two thresholds play different roles. The geometric threshold g fixes the maximum USAN area for an output corner point, i.e. a point is judged to be a corner as long as its USAN region is smaller than g; the size of g therefore determines both how many corners can be extracted and, as noted above, how sharp they are, so g can be set to a constant once the desired corner quality (sharpness) is decided. The gray-difference threshold t represents the minimum contrast of detectable corner points and also the maximum tolerable noise. It mainly determines the number of features that can be extracted: the smaller t is, the lower the contrast from which features can still be extracted and the more features are obtained, so different values of t should be used for images with different contrast and noise conditions. An outstanding advantage of the SUSAN operator is its insensitivity to local noise and its strong noise immunity. This is because it does not rely on the results of earlier image segmentation and avoids gradient computation; in addition, the USAN region is accumulated from template pixels whose gray values are similar to that of the template center, which is in effect an integration process with good suppression of Gaussian noise.
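The USAN computation and thresholding just described can be sketched as follows; the template radius, the default values of t and g, and the vectorized np.roll implementation are illustrative assumptions.

```python
import numpy as np

def susan_response(img, t=27, g=None, radius=3):
    """USAN area at every pixel using a circular template (SUSAN corner cue).
    t: gray-difference threshold; g: geometric threshold (default 3/4 of max)."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (ys**2 + xs**2) <= radius**2          # circular template
    offsets = np.argwhere(mask) - radius
    n_max = mask.sum() - 1                        # template pixels minus the nucleus
    if g is None:
        g = 3 * n_max / 4                         # common choice for corner detection
    usan = np.zeros(img.shape, dtype=float)
    for dy, dx in offsets:
        if dy == 0 and dx == 0:
            continue                              # skip the nucleus itself
        shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        usan += (np.abs(shifted.astype(float) - img) <= t)
    # Response g - n(r0) where the USAN is small (corner), zero elsewhere.
    return np.where(usan < g, g - usan, 0.0)

# A bright square on a dark background: corners should respond strongest.
img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 200
resp = susan_response(img)
```

Interior pixels give zero response, edge pixels a moderate one, and corner pixels the largest, matching the behaviour of the g − n(r0) corner response described above.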
(1) Automatic positioning of the eyeballs and eye corners. In the automatic positioning of the eyeballs and eye corners, the face is first coarsely located by normalized template matching, which determines the approximate region of the face in the whole image. Typical eye-localization algorithms rely on the valley-point property of the eyes; here a method combining valley-point search with directional projection and the symmetry of the eyeballs is adopted, and the correlation between the two eyes is used to improve localization accuracy. Integral projection of the gradient map is performed on the upper-left and upper-right parts of the face region and the projection histogram is normalized; the approximate y position of the eyes is determined from the valley points of the horizontal projection, then x is varied over a large range to search for valley points within the region, and the detected points are taken as the eyeball center points of the two eyes.
On the basis of the obtained eyeball positions, the eye region is processed: first an adaptive binarization method is used to determine a threshold and obtain an automatically binarized image of the eye region; then, combined with the SUSAN operator, an edge and corner detection algorithm precisely locates the inner and outer eye corner points within the eye region.
Corner extraction is then performed on the edge curves of the eye-region edge image obtained by this algorithm, which yields the accurate positions of the inner and outer corner points of both eyes.
(2) Automatic positioning of the nose area feature points. The key feature point of the nose area is taken to be the midpoint of the line connecting the centers of the two nostrils, i.e. the nose-lip center point. The position of this point is relatively stable, and it can also serve as a reference point when the face image is normalized during preprocessing.
Based on the eyeball positions already found, the positions of the two nostrils are determined using the regional gray-scale integral projection method.
First, a strip-shaped region as wide as the distance between the two pupils is intercepted and its Y-direction integral projection is computed. The projection curve is analyzed by searching downward from the y coordinate of the eyeball position to find the first valley point (a suitable peak-valley depth δ is chosen by adjustment so that burr effects possibly caused by facial scars, glasses, and the like are ignored), and this valley point is taken as the y-coordinate reference of the nostril position. Next, a region whose width spans the x coordinates of the two eyeballs and whose height extends δ pixels above and below the nostril y coordinate (for example, δ may be chosen as (nostril y coordinate − eyeball y coordinate) × 0.06) is taken and its X-direction integral projection is computed. The projection curve is then analyzed: starting from the x coordinate of the midpoint between the two pupils, the curve is searched toward the left and the right, the first valley point found on each side is taken as the x coordinate of the center of the left and right nostril respectively, and the midpoint of the line connecting the two nostril centers is computed as the nose-lip center point.
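The integral-projection and valley-search procedure above can be sketched as follows; the synthetic strip image, the peak-valley depth δ, and the helper names are illustrative assumptions.

```python
import numpy as np

def vertical_projection(gray, x0, x1):
    """Y-direction integral projection of the strip gray[:, x0:x1]."""
    return gray[:, x0:x1].sum(axis=1).astype(float)

def first_valley_below(curve, start, delta=5.0):
    """First local minimum below row `start` whose peak-valley depth exceeds
    `delta` (shallow dips from scars, glasses, beard etc. are skipped)."""
    for y in range(start + 1, len(curve) - 1):
        if curve[y] <= curve[y - 1] and curve[y] <= curve[y + 1]:
            left_peak = curve[start:y].max()
            if left_peak - curve[y] >= delta:
                return y
    return None

# Synthetic strip: dark rows (low sums) stand in for the nostril line.
strip = np.full((60, 20), 200.0)
strip[35, :] = 50.0        # pronounced valley, e.g. the nostril row
strip[20, :] = 195.0       # shallow dip to be ignored (depth below delta)
proj = vertical_projection(strip, 0, 20)
valley = first_valley_below(proj, start=10, delta=500.0)
```

The shallow dip at row 20 is rejected by the δ test while the pronounced valley at row 35 is returned, mirroring how burr effects are filtered out in the procedure above.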
(3) Automatic positioning of the mouth corners. Different facial expressions may greatly change the mouth shape, and the mouth region is easily disturbed by factors such as beards, so the accuracy of mouth feature point extraction has a great influence on recognition. Because the positions of the mouth corner points change relatively little under the influence of expression and can be located precisely, the two mouth corner points are adopted as the key feature points of the mouth region.
On the basis of the feature points already determined for the eye and nose regions, the regional gray-scale integral projection method is first used to find the first valley point of the Y-coordinate projection curve below the nostrils (likewise, burr effects caused by beards, moles, and other factors are eliminated with a suitable peak-valley depth δ) as the y coordinate of the mouth; then the mouth region is selected and processed with the SUSAN operator to obtain the mouth edge image; finally, corner extraction yields the accurate positions of the two mouth corners.
(2) And establishing a face color 3D grid according to the face characteristic points.
(3) And measuring the characteristic value of the face characteristic point according to the face color 3D grid and calculating the 3D space distribution characteristic information of the face characteristic point.
Specifically, relevant feature values of the face feature points can be measured from the color information, where the feature values include one or more of the position, distance, shape, size, angle, arc, and curvature of the face elements in the 2D plane, as well as measures of color, brightness, texture, and the like. For example, starting from the central pixel of the iris and extending outward, all the pixel positions of the eye, the shape of the eye, the inclination of the eye corners, the color of the eye, and so on can be obtained.
By combining the color information and the depth information, the connection relationships between the feature points can be calculated; these may be topological connection relationships and spatial geometric distances between feature points, or dynamic connection relationship information for various combinations of feature points, and so on.
From the measurements and calculations on the face color 3D grid, local information (the plane information of each face element and the spatial position relationships of the feature points on each element) and overall information (the spatial position relationships between the elements) can be obtained. The local and overall information reflect, from the local and global perspectives respectively, the information and structural relations latent in the face RGBD image.
By analyzing the feature values and the connection relationships, stereoscopic face shape information can be obtained, and from it the 3D spatial distribution feature information of each face feature point; in the later face recognition stage, the face can then be recognized through this 3D spatial distribution feature information.
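Measuring feature values such as the spatial geometric distances and angles between 3D feature points, as described above, can be sketched as follows; the coordinate values and the particular choice of features are illustrative assumptions.

```python
import numpy as np

def pairwise_distances(points):
    """Spatial geometric distances between all feature points (topology cue)."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff**2).sum(-1))

def angle_at(a, b, c):
    """Angle (degrees) at vertex b of the triangle a-b-c, e.g. the angle the
    nose tip subtends between the two eyes."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical feature points (x, y, z) in camera coordinates, metres:
left_eye  = np.array([-0.03, 0.00, 0.50])
right_eye = np.array([ 0.03, 0.00, 0.50])
nose_tip  = np.array([ 0.00, -0.03, 0.47])
pts = np.stack([left_eye, right_eye, nose_tip])
D = pairwise_distances(pts)          # e.g. inter-ocular distance D[0, 1]
theta = angle_at(left_eye, nose_tip, right_eye)
```

Such distances and angles, collected over all feature-point combinations, form one concrete realization of the "spatial geometric distance" and "angle" feature values discussed above.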
For example, finite element analysis methods can be used to analyze the characteristic values, topological connection relations between characteristic points and spatial geometric distances to obtain 3D spatial distribution characteristic information of the human face characteristic points.
The deformable curve and surface modeling method for the face can be described by the following mathematical model:
The obtained deformation curve C = C(μ), or deformation surface S = S(μ, v), is a solution of the extremum problem

min E,  subject to the constraints (1)-(4) described below,
where E is the energy functional of the curve or surface, which reflects the deformation characteristics of the surface to a certain extent and endows it with physical properties; f1, f2, f3, f4 are functions of the variables in (·); Γ is the boundary of the parameter domain; Γ′ is a curve within the surface's parameter domain; and (μ0, v0) is a parameter value in the parameter domain. Condition (1) is the boundary interpolation constraint, condition (2) is the continuity constraint at the boundary, condition (3) is the constraint of a feature line in the surface, and condition (4) is the constraint of an interior point of the surface. In application, the energy functional E takes the following form:
for a curve:

E(C) = ∫ [ α(μ)|C′(μ)|² + β(μ)|C″(μ)|² + γ(μ)|C‴(μ)|² ] dμ

and for a surface:

E(S) = ∫∫ [ α11|Sμ|² + α22|Sv|² + β11|Sμμ|² + 2β12|Sμv|² + β22|Svv|² ] dμ dv

where α, β, γ respectively denote the stretch, bending, and twist coefficients of the curve, and αij and βij respectively denote the local stretch and bending coefficients of the surface in the μ, v directions at (μ, v).
It can be seen from this mathematical model that the deformation curve/surface modeling method treats the various constraints in a unified and coordinated way; it satisfies local control while ensuring overall fairness and smoothness. Using the variational principle, solving the above extremum problem can be converted into solving the following equation:
δE = 0    (5)
where δ denotes the first-order variation. Equation (5) is a differential equation; because it is complicated and an exact analytical solution is difficult to obtain, it is solved numerically, for example with the finite element method.
The finite element method can be viewed as first choosing a suitable interpolation form as required and then solving for the combination parameters, so that the obtained solution is in continuous form; moreover, the mesh generated during preprocessing lays the foundation for the finite element analysis.
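A minimal one-dimensional illustration of solving the extremum problem numerically: the energy is discretized with finite differences (a simplification of the finite element interpolation described above), so that δE = 0 becomes a linear system. The stretch/bend coefficients, the constraint placement, and all names are assumptions.

```python
import numpy as np

def deform_curve(n=21, alpha=1.0, beta=10.0, y0=0.0, yn=0.0,
                 pin_index=10, pin_value=1.0):
    """Minimise the discrete energy E = alpha*sum|dy|^2 + beta*sum|d2y|^2
    subject to fixed endpoints and one interior interpolation constraint
    (a 1-D stand-in for constraints (1) and (4) of the model)."""
    # First- and second-difference operators on n nodes.
    D1 = np.diff(np.eye(n), 1, axis=0)
    D2 = np.diff(np.eye(n), 2, axis=0)
    K = alpha * D1.T @ D1 + beta * D2.T @ D2   # stiffness of the quadratic form
    fixed = {0: y0, n - 1: yn, pin_index: pin_value}
    free = [i for i in range(n) if i not in fixed]
    y = np.zeros(n)
    for i, v in fixed.items():
        y[i] = v
    # Eliminate the constrained nodes: K_ff y_f = -K_fc y_c  (i.e. dE = 0).
    rhs = -K[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
    y[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
    return y

y = deform_curve()
```

The solver returns a smooth bump that interpolates the three constrained nodes, the discrete analogue of a curve minimizing stretch-plus-bending energy under interpolation constraints.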
In the recognition stage, the similarity measure between the unknown face image and the known face template is given by:
[similarity-measure equation; image not reproduced in the text. It is an energy combining a local-feature similarity term with a local-position matching term]
in the formula: ciXjRespectively the characteristics of the face to be recognized and the characteristics of the face in the face library, i1,i2,j1,j2,k1,k2Is a 3D mesh vertex feature. The first term in the formula is to select the corresponding local feature X in the two vector fieldsjAnd CiThe second term is to calculate the local position relationship and the matching order, so that the best match is the one with the minimum energy function.
In addition, a wavelet-transform texture analysis method can be adopted to analyze the feature values and the dynamic connection relationships between feature points so as to obtain the 3D spatial distribution feature information of the feature points.
Specifically, the dynamic connection relationships are those of various combinations of feature points. The wavelet transform is a local transform in time and frequency; it has multi-resolution analysis characteristics and can characterize the local features of a signal in both the time and frequency domains. In this embodiment, through wavelet-transform texture analysis, texture features are extracted, classified, and analyzed, and combined with the face feature values and dynamic connection relationship information (specifically including color information and depth information) to finally obtain stereoscopic face shape information. From this, face shape information that is invariant under subtle changes of facial expression is analyzed and extracted to encode the parameters of a face shape model; these model parameters can serve as the geometric features of the face, thereby yielding the 3D spatial distribution feature information of the face feature points.
The methods for acquiring 3D face feature information provided in some other embodiments are also compatible with acquiring 2D face feature information, and the 2D feature information may be obtained by any of the methods conventional in the art. In these embodiments, both the 3D and the 2D feature information of the face are obtained, so that 3D and 2D face recognition are performed simultaneously, further improving the accuracy of face recognition.
For example, the basis of a three-dimensional wavelet transform is as follows:
A_{J1} f = (Hx Hy Hz) A_{J1-1} f,    Dⁿ_{J1} f = Qn A_{J1-1} f,  n = 1, 2, …, 7,

where A_{J1} is the projection operator of the function f(x, y, z) onto the space V³_{J1}, and Qn is a combination of Hx, Hy, Hz and Gx, Gy, Gz.
Let the matrices H = (H_{m,k}) and G = (G_{m,k}), where

H_{m,k} = h(k − 2m),    G_{m,k} = g(k − 2m),

and h, g are the filter coefficients associated with the scaling function and the wavelet, respectively.
Hx, Hy, Hz respectively denote H acting on the x, y, z directions of the three-dimensional signal, and Gx, Gy, Gz respectively denote G acting on the x, y, z directions of the three-dimensional signal.
In the recognition stage, after the wavelet transform of an unknown face image, its low-frequency low-resolution sub-image is mapped to the face space to obtain feature coefficients. The distance between the feature coefficients to be classified and those of each person can then be compared using the Euclidean distance, combined with the PCA algorithm according to the formula:
K = argmin_{k ∈ {1, …, N}} ‖Y − Y_k‖
In the formula, K is the person that best matches the unknown face, N is the number of people in the database, Y is the m-dimensional vector obtained by mapping the unknown face onto the subspace spanned by the eigenfaces, and Y_k is the m-dimensional vector obtained by mapping the k-th known face in the database onto that subspace.
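The recognition step above (mapping the low-frequency sub-image to the face space and taking the nearest neighbor K = argmin ‖Y − Y_k‖) can be sketched as follows; the Haar-averaging low-pass step, the SVD-based PCA, and the toy face data are illustrative assumptions.

```python
import numpy as np

def haar_lowpass(img, levels=2):
    """Low-frequency, low-resolution sub-image via repeated 2-D Haar averaging."""
    a = img.astype(float)
    for _ in range(levels):
        a = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
    return a

def fit_eigenfaces(faces, m=2):
    """PCA ('eigenface') basis of the flattened low-frequency sub-images."""
    X = np.stack([haar_lowpass(f).ravel() for f in faces])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:m]                 # mean face and m principal axes

def identify(unknown, mean, basis, gallery_coeffs):
    """K = argmin_k ||Y - Y_k||: nearest gallery face in coefficient space."""
    y = basis @ (haar_lowpass(unknown).ravel() - mean)
    dists = [np.linalg.norm(y - yk) for yk in gallery_coeffs]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
faces = [rng.normal(loc=10.0 * k, scale=1.0, size=(16, 16)) for k in range(3)]
mean, basis = fit_eigenfaces(faces, m=2)
gallery = [basis @ (haar_lowpass(f).ravel() - mean) for f in faces]
probe = faces[1] + rng.normal(scale=0.1, size=(16, 16))   # noisy copy of face 1
who = identify(probe, mean, basis, gallery)
```

The noisy probe is assigned to the correct gallery entry because the low-frequency sub-image discards fine noise while preserving the coarse appearance used by the PCA matching.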
It is understood that, in another embodiment, a 3D face recognition method based on two-dimensional wavelet features may also be used, in which the two-dimensional wavelet features are extracted first; the two-dimensional wavelet basis function g(x, y) is defined as
g(x, y) = (1/(2πσ²)) · exp[ −(x² + y²)/(2σ²) + 2πjWx ]
g_{mn}(x, y) = a^{−n} g(x′, y′),   a > 1,  m, n ∈ Z
where σ is the size of the Gaussian window and W is the modulation frequency. Through the functions g_{mn}(x, y), a self-similar filter family is obtained by appropriately dilating and rotating g(x, y). Based on the above functions, the wavelet feature of an image I(x, y) can be defined as
W_{mn}(x, y) = ∫∫ I(x₁, y₁) g*_{mn}(x − x₁, y − y₁) dx₁ dy₁
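The self-similar filter family g_mn and the response W_mn can be sketched as follows; the kernel size, the scale factor a, the frequency values, and the use of FFT-based circular convolution are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, freq=0.25, theta=0.0):
    """2-D Gabor: a Gaussian window of width sigma modulated by a complex wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return gauss * np.exp(2j * np.pi * freq * xr)

def gabor_bank(scales=2, orientations=4, a=2.0):
    """Self-similar family g_mn: dilations by a^-n and rotations m*pi/K."""
    return [gabor_kernel(sigma=3.0 * a**n, freq=0.25 / a**n,
                         theta=m * np.pi / orientations)
            for n in range(scales) for m in range(orientations)]

def gabor_features(img):
    """Mean |W_mn|: magnitude of each filter response as a texture feature."""
    feats = []
    for k in gabor_bank():
        resp = np.abs(np.fft.ifft2(np.fft.fft2(img, s=img.shape) *
                                   np.fft.fft2(k, s=img.shape)))
        feats.append(resp.mean())
    return np.array(feats)

# Vertical stripes at frequency 0.25: the theta=0, finest-scale filter matches.
img = np.add.outer(np.zeros(32), np.sin(2 * np.pi * 0.25 * np.arange(32)))
f = gabor_features(img)
```

The feature vector responds most strongly to the filter whose scale and orientation match the image texture, which is what makes the W_mn magnitudes useful as discriminative texture features.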
The two-dimensional wavelet extraction algorithm of the face image comprises the following implementation steps:
(1) A wavelet representation of the face is obtained through wavelet analysis, converting the corresponding features in the original image I(x, y) into a wavelet feature vector F (F ∈ R^m).
(2) A fractional power polynomial (FPP) kernel model k(x, y) = (x · y)^d (0 < d < 1) is used to project the m-dimensional wavelet feature space R^m into a higher n-dimensional space R^n.
(3) Based on the kernel Fisher discriminant analysis (KFDA) algorithm, the between-class matrix S_b and the within-class matrix S_w are built in the space R^n:

S_b = Σ_{i=1}^{c} N_i (μ_i − μ)(μ_i − μ)^T,    S_w = Σ_{i=1}^{c} Σ_{x ∈ class i} (x − μ_i)(x − μ_i)^T,

where μ is the overall sample mean and μ_i, N_i are the mean and sample count of class i. The orthonormal eigenvectors α1, α2, …, αn of S_w are then computed.
(4) The salient discriminating feature vectors of the face image are extracted. Let P1 = (α1, α2, …, αq), where α1, α2, …, αq are the q eigenvectors of S_w with positive eigenvalues and q = rank(S_w). The eigenvectors β1, β2, …, βL corresponding to the L largest eigenvalues of the projected between-class scatter P1ᵀ S_b P1 are computed (L ≤ c − 1), where c is the number of face classes. The salient discriminating feature vector is then f_regular = Bᵀ P1ᵀ y, where y ∈ Rⁿ and B = (β1, β2, …, βL).
(5) The non-salient discriminating feature vectors of the face image are extracted. The eigenvectors γ1, γ2, …, γL corresponding to the L largest eigenvalues (L ≤ c − 1) of P2ᵀ S_b P2 are computed, where P2 = (α_{q+1}, α_{q+2}, …, α_m); the non-salient discriminating feature vector is then f_irregular = Γᵀ P2ᵀ y, where Γ = (γ1, γ2, …, γL).
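The scatter matrices S_b and S_w of step (3) and the extraction of discriminating directions can be sketched in the plain linear (non-kernelized) case as follows; the ridge term, the toy two-class data, and all names are assumptions, and the kernel mapping of steps (2)-(5) is omitted.

```python
import numpy as np

def scatter_matrices(X, labels):
    """Between-class (S_b) and within-class (S_w) scatter of the features X."""
    mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        d = (mc - mean)[:, None]
        Sb += len(Xc) * d @ d.T                 # class-mean spread
        Sw += (Xc - mc).T @ (Xc - mc)           # spread around each class mean
    return Sb, Sw

def fisher_directions(Sb, Sw, L=1, ridge=1e-8):
    """Eigenvectors of Sw^-1 Sb for the L largest eigenvalues (L <= c - 1)."""
    w, V = np.linalg.eig(np.linalg.solve(Sw + ridge * np.eye(len(Sw)), Sb))
    order = np.argsort(w.real)[::-1]
    return V[:, order[:L]].real

rng = np.random.default_rng(1)
c0 = rng.normal([0, 0], 0.1, size=(20, 2))      # class 0, centred at the origin
c1 = rng.normal([3, 0], 0.1, size=(20, 2))      # class 1, shifted along x
X = np.vstack([c0, c1])
labels = np.array([0] * 20 + [1] * 20)
Sb, Sw = scatter_matrices(X, labels)
W = fisher_directions(Sb, Sw, L=1)
```

With the two classes separated along the x axis, the dominant discriminating direction is (close to) the x axis, illustrating how the eigenvectors of S_w⁻¹S_b pick out class-separating directions.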
The steps included in the 3D face recognition stage are as follows:
(1) A frontal face is detected, and the key face feature points in the frontal face image are located, such as the contour feature points of the face, the left and right eyes, the mouth, and the nose.
(2) A three-dimensional face model is reconstructed from the extracted two-dimensional Gabor feature vectors and a common 3D face database. To reconstruct the three-dimensional face model, a 3D face database containing 100 detected face images is used; each face model in the database has approximately 70,000 vertices. A feature transformation matrix P is determined: in the original three-dimensional face recognition method, this matrix is usually a subspace-analysis projection matrix composed of the eigenvectors of the sample covariance matrix corresponding to the first m largest eigenvalues. The extracted wavelet discriminating feature vectors, corresponding to the eigenvectors of the m largest eigenvalues, form a main feature transformation matrix P′, which is more robust to factors such as illumination, pose, and expression than the original matrix P; that is, the features it represents are more accurate and stable.
(3) The newly generated face model is processed by template matching and Fisher linear discriminant analysis (FLDA), the intra-class and inter-class differences of the model are extracted, and the final recognition result is further optimized.
S43: The identity information is tagged to the 3D spatial distribution feature information of the face feature points and of the human body feature points to obtain the personal information, and the personal information is stored to form the human body 3D feature identity information base.
The human body 3D feature information obtained in this embodiment includes both the 3D spatial distribution feature information of the whole-body feature points and that of the local face feature points, so that recognition can draw on both overall and local features; this increases the number of attribute items used for human body recognition and improves its accuracy.
In other embodiments, 2D information such as human skin color and texture obtained from an RGB human face image may be combined with 3D spatial distribution feature information of human feature points and 3D spatial distribution feature information of human face feature points, so as to further increase recognition attribute items and improve recognition accuracy.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an apparatus for establishing a human body 3D feature identity information base according to a first embodiment of the present invention.
The device for establishing the human body 3D feature identity information base of the present embodiment includes a human body collecting module 10, a human body information acquiring module 11 and an information base module 12.
Specifically, the human body collection module 10 is used to collect an RGBD human body atlas of an individual whose identity information is known.
The human body information acquisition module 11 is connected to the human body acquisition module 10, and is configured to acquire 3D spatial distribution feature information of the human body feature points of the person through an RGBD human body atlas.
The information base module 12 includes a storage module 121, and the storage module 121 is connected to the human body information obtaining module 11, and is configured to identify the personal identity information to the 3D spatial distribution feature information of the human body feature point corresponding to the person to obtain the personal information, and store the personal information to form a human body 3D feature identity information base.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an apparatus for establishing a human body 3D feature identity information base according to a second embodiment of the present invention.
The device for establishing the human body 3D feature identity information base of this embodiment includes a human body acquisition module 20, a human body information acquisition module 21, an information base module 22, and a dynamic information acquisition module 23.
The human body acquisition module 20 is configured to acquire an RGBD human body image sequence of a person, where identity information of the person is known.
The human body information obtaining module 21 is connected to the human body collecting module 20, and is configured to obtain 3D spatial distribution feature information of the human body feature point of the person through an RGBD human body atlas.
The dynamic information obtaining module 23 is connected to the human body collecting module 20, and is configured to obtain human body dynamic characteristic information according to the RGBD human body image sequence.
The information base module 22 includes a storage module 221 and a management module 222.
The storage module 221 is connected to the human body information obtaining module 21 and the dynamic information obtaining module 23, and is configured to identify the identity information of the individual to the human body 3D feature information and the human body dynamic feature information corresponding to the individual to obtain personal information, and store the personal information to form a human body 3D feature identity information base.
The management module 222 is connected to the storage module 221, and the management module 222 is configured to perform hierarchical classification management on the identity information. The hierarchy includes a personal attribute hierarchy and a group attribute hierarchy.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an apparatus for establishing a human body 3D feature identity information base according to a third embodiment of the present invention.
The device for establishing the human body 3D feature identity information base of this embodiment includes a human body acquisition module 30, a human body information acquisition module 31, an information base module 32, and a training module 33.
Specifically, the human body collection module 30 is used for collecting an RGBD human body atlas of an individual, wherein identity information of the individual is known.
The human body information obtaining module 31 is connected to the human body collecting module 30, and is configured to obtain 3D spatial distribution feature information of the human body feature point of the person through the RGBD human body atlas.
The information base module 32 includes a storage module 321, and the storage module 321 is connected to the human body information obtaining module 31, and is configured to identify the identity information of the individual to the 3D space distribution feature information of the human body feature point corresponding to the individual to obtain the individual information, and store the individual information to form a human body 3D feature identity information base.
The training module 33 is connected with the human body acquisition module 30, the human body information acquisition module 31 and the information base module 32, and is used for performing human body identification training on the human body 3D characteristic identity information base.
Specifically, the training module 33 includes a control module 331 and a comparison module 332. The control module 331 is connected to the human body collection module 30 and the human body information acquisition module 31, and is configured to control the human body collection module to collect an RGBD human body atlas of a test person with known identity information, and to control the human body information acquisition module to acquire the 3D spatial distribution feature information of the test person's human body feature points from that atlas. The test persons include individuals whose personal information is stored in the human body 3D feature identity information base and individuals whose personal information is not.
The comparison module 332 is connected to the control module 331 and the storage module 321, and configured to compare the acquired 3D spatial distribution feature information of the human body feature points of the test person with the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity information base.
The storage module 321 is further configured to, when the comparison result is correct, store the RGBD human body atlas of the test person, the 3D spatial distribution feature information of the corresponding human body feature points, and the identity information into a human body 3D feature identity information base.
Referring to fig. 14, fig. 14 is a schematic structural diagram of an apparatus for establishing a human body 3D feature identity information base according to a fourth embodiment of the present invention.
The device for establishing the human body 3D feature identity information base of the present embodiment includes a human body acquisition module 40, a human body information acquisition module 41, an information base module 42, a human face acquisition module 43, and a human face information acquisition module 44.
Specifically, the human body collecting module 40 is used for collecting an RGBD human body atlas of an individual, wherein identity information of the individual is known.
The human body information obtaining module 41 is connected to the human body collecting module 40, and is configured to obtain 3D spatial distribution feature information of the human body feature point of the person through the RGBD human body atlas.
The face acquisition module 43 is used to acquire an RGBD face atlas of the individual.
The face information acquiring module 44 is connected to the face collecting module 43, and is configured to acquire 3D spatial distribution feature information of the face feature point of the person through an RGBD face atlas.
The information base module 42 includes a storage module 421. The storage module 421 is connected to the human body information acquiring module 41 and the face information acquiring module 44, and is configured to label the identity information onto the 3D spatial distribution feature information of the human face feature points and the 3D spatial distribution feature information of the human body feature points to obtain personal information, and to store the personal information to form the human body 3D feature identity information base.
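A minimal sketch of this storage step, labeling known identity information onto the body and face feature information and saving it as one personal record; the class names, record schema, and use of the ID number as the lookup key are illustrative assumptions, since the patent does not fix a concrete data layout:

```python
from dataclasses import dataclass

@dataclass
class PersonalRecord:
    """One entry in the human body 3D feature identity information base."""
    identity: dict       # known identity information (name, ID number, ...)
    body_features: list  # 3D spatial distribution info of body feature points
    face_features: list  # 3D spatial distribution info of face feature points

class IdentityInfoBase:
    def __init__(self):
        self._records = {}

    def store(self, identity, body_features, face_features):
        """Label the identity information onto both feature sets and save
        the resulting personal information into the base."""
        key = identity["id"]
        self._records[key] = PersonalRecord(identity, body_features, face_features)
        return key

    def lookup(self, key):
        return self._records.get(key)
```

Storing a record and looking it up by the returned key recovers the labeled feature information.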
Referring to fig. 15, fig. 15 is a schematic structural diagram of an entity device of an apparatus for establishing a human body 3D feature identity information base according to a fourth embodiment of the present invention. The apparatus of this embodiment can execute the steps in the method, and for related content, please refer to the detailed description in the method, which is not described herein again.
The intelligent electronic device comprises a processor 51, a memory 52 coupled to the processor 51.
The memory 52 is used to store one or more of an operating system, set programs, RGBD human body image sequences, RGBD human face images, 3D spatial distribution feature information of human body feature points, 3D spatial distribution feature information of human face feature points, human body dynamic feature information, and the like.
The processor 51 is configured to acquire an RGBD human body atlas of an individual whose identity information is known; acquire 3D spatial distribution feature information of the human body feature points of the person through the RGBD human body atlas; and label the identity information of the person onto the corresponding 3D spatial distribution feature information to obtain personal information, which is stored to form the human body 3D feature identity information base.
The processor 51 is also configured to perform hierarchical classification management on the identity information in the human body 3D feature identity information base. The hierarchy includes a personal attribute hierarchy and a group attribute hierarchy.
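The hierarchical classification management can be sketched as grouping personal records under a group-attribute level above their personal attributes; the grouping function and the example attribute names (e.g. "gender") are illustrative assumptions, since the patent names the two hierarchy levels but not their concrete attributes:

```python
def classify(records, group_key):
    """Arrange personal records into a group-attribute hierarchy.

    `records` is a list of dicts of personal attributes; `group_key` selects
    the group attribute used as the upper hierarchy level. Records sharing a
    value for that attribute fall into the same group.
    """
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    return groups
```

Classifying the same records by a different `group_key` yields a different group-attribute view over the same personal-attribute entries.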
The processor 51 is further configured to obtain an RGBD human image sequence; and acquiring human body dynamic characteristic information according to the RGBD human body image sequence.
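One simple way to derive dynamic feature information from an ordered image sequence is to track a feature point's 3D position frame by frame and record its displacements; this frame-to-frame displacement is an illustrative stand-in, as the patent does not specify how the dynamic features are computed:

```python
def dynamic_features(sequence):
    """Derive simple dynamic feature information from an ordered sequence of
    3D feature-point positions, one (x, y, z) tuple per RGBD frame.
    Returns the Euclidean displacement between consecutive frames."""
    steps = []
    for (x0, y0, z0), (x1, y1, z1) in zip(sequence, sequence[1:]):
        steps.append(((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5)
    return steps
```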
The processor 51 is also used for performing human body recognition training on the human body 3D characteristic identity information base.
The processor 51 is further configured to collect an RGBD human body atlas of the test person with known identity information; acquiring 3D space distribution characteristic information of human body characteristic points of a test person from an RGBD human body atlas of the test person; and comparing the acquired 3D space distribution characteristic information of the human body characteristic points of the test person with the 3D space distribution characteristic information of the human body characteristic points in the human body 3D characteristic identity information base.
The processor 51 is further configured to acquire an RGBD face atlas of the individual; acquire 3D spatial distribution feature information of the human face feature points of the individual through the RGBD face atlas; and save the 3D spatial distribution feature information of the human face feature points, together with the 3D spatial distribution feature information of the human body feature points labeled with the identity information, as personal information to form the human body 3D feature identity information base.

In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical division, and an actual implementation may adopt another division; a plurality of units or components may be combined or integrated into another system; or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be substantially or partially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In conclusion, the invention is unaffected by seasonal changes, differences in clothing, variations in ambient illumination, and the like, thereby improving the accuracy of human body recognition.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (16)

1. A method for establishing a human body 3D characteristic identity information base is characterized by comprising the following steps:
collecting an RGBD human body atlas of an individual, wherein the identity information of the individual is known;
acquiring 3D space distribution characteristic information of the human body characteristic points of the person through the RGBD human body atlas;
the step of obtaining the 3D space distribution characteristic information of the human body characteristic point of the person through the RGBD human body atlas comprises the following steps:
collecting human body characteristic points through an RGBD human body image;
establishing a human body 3D grid according to the human body feature points;
measuring the characteristic value of the human body characteristic points according to the human body 3D grid and calculating the 3D space distribution characteristic information of the human body characteristic points;
wherein the step of measuring the characteristic value of the human body characteristic point according to the human body 3D grid and calculating the 3D space distribution characteristic information of the human body characteristic point comprises the following steps:
calculating the spatial position information of each human body feature point through the human body 3D grid;
calculating the topological relation among the human body feature points to obtain three-dimensional human body shape information so as to obtain 3D space distribution feature information of the human body feature points;
the 3D space distribution characteristic information of the human body characteristic points comprises color information and depth information;
the 3D space distribution characteristic information of the human body characteristic points further comprises human body dynamic characteristic information;
and identifying the identity information of the person to the 3D space distribution feature information of the human body feature point corresponding to the person to obtain personal information, and storing the personal information to form a human body 3D feature identity information base.
2. The method according to claim 1, wherein the identity information is hierarchically classified and managed by the human body 3D feature identity information base.
3. The method of claim 2, wherein the hierarchy includes a personal attribute hierarchy and a group attribute hierarchy.
4. The method of claim 3, wherein the RGBD human atlas is an RGBD human image sequence;
the step of obtaining the 3D spatial distribution characteristic information of the human body feature points of the person through the RGBD human body atlas further includes: acquiring human body dynamic characteristic information according to the RGBD human body image sequence;
the steps of identifying the identity information of the individual to the human body 3D characteristic information corresponding to the individual to obtain personal information, and storing the personal information to form a human body 3D characteristic identity information base are as follows: and identifying the identity information of the individual to the human body 3D characteristic information and the human body dynamic characteristic information corresponding to the individual to obtain personal information, and storing the personal information to form a human body 3D characteristic identity information base.
5. The method according to claim 1, wherein after the steps of identifying the identity information of the individual to the human 3D characteristic information corresponding to the individual to obtain the individual information and saving the individual information to form the human 3D characteristic identity information base, further comprising:
and carrying out human body recognition training on the human body 3D characteristic identity information base.
6. The method of claim 5, wherein the step of performing human recognition training on the human 3D characteristic identity information base comprises:
collecting an RGBD human body atlas of a tester with known identity information;
acquiring 3D space distribution characteristic information of the human body characteristic points of the test person from the RGBD human body atlas of the test person;
comparing the acquired 3D space distribution characteristic information of the human body characteristic points of the test person with the 3D space distribution characteristic information of the human body characteristic points in the human body 3D characteristic identity information base;
and if the comparison result is correct, storing the RGBD human body atlas of the test person, the corresponding 3D space distribution characteristic information of the human body characteristic points and the identity information into the human body 3D characteristic identity information base.
7. The method according to claim 6, wherein the test person includes a person having personal information stored in the human body 3D characteristic identity information base and a person having no personal information stored in the human body 3D characteristic identity information base.
8. The method of claim 1, wherein the step of acquiring an RGBD body atlas of the individual further comprises: collecting the RGBD face atlas of the person;
the step of obtaining the 3D spatial distribution characteristic information of the human body feature points of the person through the RGBD human body atlas further includes: acquiring 3D space distribution characteristic information of the human face characteristic points of the person through the RGBD human face atlas;
the step of identifying the identity information of the individual to the 3D space distribution feature information of the human body feature point corresponding to the individual to obtain the individual information, and storing the individual information to form a human body 3D feature identity information base comprises the following steps: and identifying the identity information to the 3D space distribution characteristic information of the human face characteristic points and the 3D space distribution characteristic information of the human body characteristic points to obtain personal information, and storing the personal information to form a human body 3D characteristic identity information base.
9. An apparatus for building a human body 3D feature identity information base, comprising:
a human body acquisition module, configured to acquire an RGBD human body atlas of an individual, wherein the identity information of the individual is known;
the human body information acquisition module is connected with the human body acquisition module and used for acquiring the 3D space distribution characteristic information of the human body characteristic points of the person through the RGBD human body atlas;
the step of acquiring the 3D space distribution characteristic information of the human body characteristic points of the person through the RGBD human body atlas comprises the following steps:
collecting human body characteristic points through an RGBD human body image;
establishing a human body 3D grid according to the human body feature points;
measuring the characteristic value of the human body characteristic points according to the human body 3D grid and calculating the 3D space distribution characteristic information of the human body characteristic points;
wherein the step of measuring the characteristic value of the human body characteristic point according to the human body 3D grid and calculating the 3D space distribution characteristic information of the human body characteristic point comprises the following steps:
calculating the spatial position information of each human body feature point through the human body 3D grid;
calculating the topological relation among the human body feature points to obtain three-dimensional human body shape information so as to obtain 3D space distribution feature information of the human body feature points;
the 3D space distribution characteristic information of the human body characteristic points comprises color information and depth information;
the 3D space distribution characteristic information of the human body characteristic points further comprises human body dynamic characteristic information;
and the information base module comprises a storage module, and the storage module is connected with the human body information acquisition module and is used for identifying the personal identity information to the 3D space distribution feature information of the human body feature points corresponding to the person to obtain the personal information and storing the personal information to form a human body 3D feature identity information base.
10. The device according to claim 9, wherein the information base module further comprises a management module, the management module is connected with the storage module, and the management module is configured to perform hierarchical classification management on the identity information.
11. The apparatus of claim 10, wherein the hierarchy comprises a personal attribute hierarchy and a group attribute hierarchy.
12. The apparatus of claim 11, wherein the RGBD human body atlas acquired by the human body acquisition module is an RGBD human body image sequence;
the device also comprises a dynamic information acquisition module which is connected with the human body acquisition module and used for acquiring human body dynamic characteristic information according to the RGBD human body image sequence;
the storage module is further connected with the dynamic information acquisition module and is used for identifying the identity information of the individual to the human body 3D characteristic information and the human body dynamic characteristic information corresponding to the individual to acquire individual information and storing the individual information to form a human body 3D characteristic identity information base.
13. The device according to claim 9, further comprising a training module, wherein the training module is connected to the human body acquisition module, the human body information acquisition module and the information base module, and is configured to perform human body recognition training on the human body 3D feature identity information base.
14. The apparatus of claim 13, wherein the training module comprises a control module and a comparison module;
the control module is connected with the human body acquisition module and the human body information acquisition module, and is used for controlling the human body acquisition module to acquire an RGBD human body atlas of a tester with known identity information, and controlling the human body information acquisition module to acquire 3D space distribution characteristic information of human body characteristic points of the tester from the RGBD human body atlas of the tester;
the comparison module is connected with the control module and is used for comparing the acquired 3D space distribution characteristic information of the human body characteristic points of the test person with the 3D space distribution characteristic information of the human body characteristic points in the human body 3D characteristic identity information base;
the storage module is further configured to store the RGBD human body atlas of the test person, the corresponding 3D spatial distribution feature information of the human body feature points, and the identity information into the human body 3D feature identity information base when the comparison result is correct.
15. The apparatus of claim 14, wherein the test person includes a person having personal information stored in the human body 3D feature identity information base and a person having no personal information stored in the human body 3D feature identity information base.
16. The apparatus of claim 9, further comprising:
the face acquisition module is used for acquiring the RGBD face atlas of the person;
the face information acquisition module is connected with the face acquisition module and used for acquiring the 3D space distribution characteristic information of the face characteristic points of the person through the RGBD face atlas;
the storage module is further connected with the face information acquisition module and is used for identifying the identity information to the 3D space distribution characteristic information of the face characteristic points and the 3D space distribution characteristic information of the human body characteristic points to obtain personal information, and storing the personal information to form a human body 3D characteristic identity information base.
CN201611002201.0A 2016-11-14 2016-11-14 Method and equipment for establishing human body 3D characteristic identity information base Active CN106599785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611002201.0A CN106599785B (en) 2016-11-14 2016-11-14 Method and equipment for establishing human body 3D characteristic identity information base

Publications (2)

Publication Number Publication Date
CN106599785A CN106599785A (en) 2017-04-26
CN106599785B true CN106599785B (en) 2020-06-30

Family

ID=58590298


Country Status (1)

Country Link
CN (1) CN106599785B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273804A (en) * 2017-05-18 2017-10-20 东北大学 Pedestrian recognition method based on SVMs and depth characteristic
CN107563359B (en) * 2017-09-29 2018-09-11 重庆市智权之路科技有限公司 Recognition of face temperature is carried out for dense population and analyzes generation method
CN108009483A (en) * 2017-11-28 2018-05-08 信利光电股份有限公司 A kind of image collecting device, method and intelligent identifying system
CN108416312B (en) * 2018-03-14 2019-04-26 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D data identification method taken pictures based on visible light
CN108537236A (en) * 2018-04-04 2018-09-14 天目爱视(北京)科技有限公司 A kind of polyphaser data control system for identifying
CN109492624A (en) * 2018-12-29 2019-03-19 北京灵汐科技有限公司 The training method and its device of a kind of face identification method, Feature Selection Model
CN111105881B (en) * 2019-12-26 2022-02-01 昆山杜克大学 Database system for 3D measurement of human phenotype
CN113254491A (en) * 2021-06-01 2021-08-13 平安科技(深圳)有限公司 Information recommendation method and device, computer equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102164113A (en) * 2010-02-22 2011-08-24 深圳市联通万达科技有限公司 Face recognition login method and system
CN103235943A (en) * 2013-05-13 2013-08-07 苏州福丰科技有限公司 Principal component analysis-based (PCA-based) three-dimensional (3D) face recognition system
CN104573634A (en) * 2014-12-16 2015-04-29 苏州福丰科技有限公司 Three-dimensional face recognition method
KR101556992B1 (en) * 2014-03-13 2015-10-05 손우람 3d scanning system using facial plastic surgery simulation
CN105184280A (en) * 2015-10-10 2015-12-23 东方网力科技股份有限公司 Human body identity identification method and apparatus
CN105447466A (en) * 2015-12-01 2016-03-30 深圳市图灵机器人有限公司 Kinect sensor based identity comprehensive identification method
CN105786016A (en) * 2016-03-31 2016-07-20 深圳奥比中光科技有限公司 Unmanned plane and RGBD image processing method
CN106023363A (en) * 2016-05-12 2016-10-12 重庆佐鸣科技有限公司 Identity verification method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040037450A1 (en) * 2002-08-22 2004-02-26 Bradski Gary R. Method, apparatus and system for using computer vision to identify facial characteristics
CN104091159A (en) * 2014-07-14 2014-10-08 无锡市合鑫川自动化设备有限公司 Human face identification method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fusion of RGB and Depth Images for Robust Face Recognition using Close-Range 3D Camera;Srinivas Kishan Anapu;《International journal of innovative research in computer and communication engineering》;20141231;第2卷(第7期);全文 *
Human body detection and tracking based on RGB-D data; 戴萧何 (Dai Xiaohe); China Masters' Theses Full-text Database, Information Science and Technology Series; 20160815 (No. 08); full text *
Feature landmark matching based on a particle filter algorithm; 李晓捷 (Li Xiaojie); Computer Engineering; 20160511; Vol. 42 (No. 2); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808

Patentee after: Obi Zhongguang Technology Group Co., Ltd

Address before: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808

Patentee before: SHENZHEN ORBBEC Co.,Ltd.