CN111967425A - Self-service face recognition service device - Google Patents
- Publication number
- CN111967425A (application number CN202010880942.9A)
- Authority
- CN
- China
- Prior art keywords
- user
- face
- module
- face recognition
- faces
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
A self-service face recognition service device comprises a user information detection module, an image acquisition module, a face recognition module, a motion detection module and a user data module. The user information detection module is used for acquiring a user ID, detecting whether the user ID exists on the device's own server, acquiring the user profile associated with the user ID, detecting whether the profile includes a user photo, and enabling the image acquisition module when no photo is found. The image acquisition module is used for capturing input images, and the face recognition module is used for performing face detection on the faces in the input images. With this scheme, an acquired user ID can be matched against the existing database: when no user photo exists, one can be acquired quickly, and when a user photo exists, the recognition object can be found by matching against it first, improving the flexibility of the scheme.
Description
Technical Field
The invention relates to the field of automatic image detection, and in particular to a method for optimizing detection and recognition services in multi-person scenarios.
Background
In existing face recognition technology, such as the technical solutions of application numbers 2017111892326 and 2018116155405, simultaneous recognition of multiple persons can be achieved. In a multi-person scene, however, the camera can rarely capture exactly one person. When the captured image is recognized, several people are likely to be queuing, and the face of a person in a back row may appear close to that of the person in front, so that correct recognition fails.
Disclosure of Invention
Therefore, an intelligent detection method usable in multi-person scenarios needs to be provided, to solve the prior-art problem that face recognition in multi-person scenes is not sufficiently accurate.
To achieve the above object, the inventor provides a self-service face recognition service device, which includes a user information detection module, an image acquisition module, a face recognition module, a motion detection module and a user data module. The user information detection module is configured to acquire a user ID, detect whether the user ID exists on the device's own server, acquire the user profile associated with the user ID, further detect whether the profile includes a user photo, and enable the image acquisition module when no photo is found. The image acquisition module is configured to capture input images, and the face recognition module is configured to perform face detection on faces in the input images;
the motion detection module is configured to select, from the plurality of detected faces, the first n faces occupying the largest portion of the picture, and to perform motion detection on these n faces to obtain a motion detection result;
the face recognition module is further configured to select the single face with the smallest motion detection result as the recognition object;
the user data module is configured to acquire user identity information and match the recognition object against the user photo in the database associated with that identity information.
Specifically, the motion detection module is configured to detect the i-th face, select m feature points, and obtain the coordinate movement values of those m feature points across different frames.
The motion detection result is obtained by detecting the i-th face, selecting m feature points, taking the coordinate movement value α of each of the m feature points between adjacent frames, and summing these values over all adjacent frame pairs within a preset time to obtain the total coordinate movement Σα.
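As an illustration, the per-face score Σα described above can be sketched as follows; the function name, the trajectory data layout and the use of Euclidean displacement between adjacent frames are assumptions of the sketch, not details fixed by the patent:

```python
def motion_score(tracks):
    """tracks: list of m feature-point trajectories, each a list of (x, y)
    coordinates over consecutive frames within the preset time window.
    Returns the total coordinate movement (sigma alpha) for one face."""
    total = 0.0
    for points in tracks:  # one trajectory per feature point
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            # displacement between adjacent frames (Euclidean, an assumption)
            total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total

# The face with the smallest score is chosen as the recognition object.
still = [[(10, 10), (10, 10), (10, 10)]]   # stationary feature point
moving = [[(10, 10), (13, 14), (16, 18)]]  # moving feature point
scores = {"face_0": motion_score(still), "face_1": motion_score(moving)}
best = min(scores, key=scores.get)
```

In this toy example the stationary face scores 0 and is selected, mirroring the rule that the least-moving face is the recognition object.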
In particular, the device further comprises a connected domain detection module and a judgment and verification module.
The connected domain detection module is configured to perform connected-domain detection below the plurality of faces to acquire the body images corresponding to the face images.
The judgment and verification module is configured to judge whether the body image corresponding to the recognition object occupies the largest portion of the picture among all body images; if so, verification succeeds.
In another embodiment, the device further comprises a connected domain detection module and a judgment and verification module.
The connected domain detection module is configured to perform connected-domain detection below the plurality of faces, acquire the body images corresponding to the face images, and extract depth information from the body images.
The judgment and verification module is configured to judge whether the body image corresponding to the recognition object has the nearest depth among all body images; if so, verification succeeds.
Specifically, the connected domain detection module is further configured to perform connected-domain detection below the plurality of faces. It acquires the depth image of the portion below a face region, calculates an average depth z, and selects K pixels below the face region for expansion. The expansion method is as follows: set up a set C to be expanded and place the K pixels into C; for each pixel p in C, compute the difference between z and the depth of each of its five neighboring pixels (left, lower-left, lower, lower-right and right); if the difference is within a preset range of z, place that neighbor into C; once all five neighbors of p have been processed, delete p from C, recalculate the average depth z' over all pixels handled so far, and update z to z'. The calculation continues until a stop condition is reached: either the number of pixels already processed reaches MAX, or no pixel in C has a neighbor satisfying the condition. The resulting average depth is taken as the distance D between the body image and the device.
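A minimal sketch of this expansion, assuming a depth map addressed by (row, col), a tolerance parameter standing in for the "preset range", and the five-neighbor set read as left, lower-left, lower, lower-right and right (the patent's wording is ambiguous on the exact neighbor set):

```python
from collections import deque

def body_depth(depth, seeds, tol=0.1, max_pixels=1000):
    """Region-grow below a face region to estimate the body distance D.
    depth: dict (row, col) -> depth value; seeds: the K seed pixels.
    A neighbor joins the region when its depth is within +/- tol of the
    running average z; growing stops at max_pixels (MAX) or when no
    neighbor of any pending pixel satisfies the condition."""
    region = set(seeds)
    frontier = deque(seeds)  # the set C still to be expanded
    z = sum(depth[p] for p in seeds) / len(seeds)  # initial average depth
    while frontier and len(region) < max_pixels:
        r, c = frontier.popleft()  # delete p from C after processing
        for dr, dc in ((0, -1), (1, -1), (1, 0), (1, 1), (0, 1)):
            q = (r + dr, c + dc)
            if q in depth and q not in region and abs(depth[q] - z) < tol:
                region.add(q)
                frontier.append(q)
                # update z -> z' over all pixels gathered so far
                z = sum(depth[p] for p in region) / len(region)
    return z  # taken as the distance D between body and device

depth_map = {(0, 0): 1.00, (1, 0): 1.02, (1, 1): 1.01, (2, 0): 3.0}
D = body_depth(depth_map, [(0, 0)])  # far pixel at (2, 0) is rejected
```

The far-away pixel (depth 3.0) fails the tolerance check and is excluded, so the averaged depth reflects only the coherent body region.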
Specifically, the value of n is determined based on the average queue spacing.
Specifically, the user information detection module is used for acquiring a user ID by reading medical insurance card information.
Further, the face recognition module is configured to match detected faces against the user photo when the user profile includes one; if a successfully matched face exists, it can be set as the recognition object.
With this method and scheme, an acquired user ID can be matched against the existing database: when no user photo exists, one can be acquired quickly, and when a user photo exists, the recognition object can be found by matching against it first, improving the flexibility of the scheme. Meanwhile, when the camera module captures images containing several people, the optimal recognition object is determined through motion detection and the proportion of the picture each face occupies. The matching efficiency of the scheme is thus increased, and its anti-interference capability in multi-person environments is enhanced.
Drawings
Fig. 1 is a flowchart of a multi-person scene image verification method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a portrait recognition method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a self-service face service method according to an embodiment of the present invention;
FIG. 4 is a diagram of an apparatus for verifying a multi-user scene image according to an embodiment of the present invention;
FIG. 5 is a diagram of a portrait recognition apparatus according to an embodiment of the present invention;
fig. 6 is a diagram of a self-service face service device according to an embodiment of the present invention.
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Referring to fig. 1, an image verification method in a multi-person scene includes the following steps,
S100, an input image is acquired from the image acquisition module, and face detection is performed on the faces in the input image. Face detection here refers to a preliminary analysis of the input image to identify blocks that may be face images, optionally marking each candidate face region with a rectangular frame; this generally requires little computation and is fast. Face identification, by contrast, crops the rectangular frame produced by face detection and compares it against a face library to determine identity, which requires a large amount of computation.
S101, the first n faces occupying the largest portion of the picture are selected from the plurality of faces, and motion detection is performed on these n faces to obtain a motion detection result. Selecting the n largest faces means sorting the image blocks recognized as faces by area; the area can be measured in pixels, and the comparison is made according to pixel-block size. The single face with the smallest motion detection result is then selected as the recognition object. Motion detection here means determining whether the position of a block in the image changes; the criterion may be whether a block recognized as a face has been translated, deformed or scaled between adjacent frames.
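The area ranking in S101 might be sketched as follows; the bounding-box tuple layout (id, x, y, width, height) is an illustrative assumption:

```python
def top_n_faces(boxes, n):
    """boxes: list of (face_id, x, y, w, h) detection rectangles.
    Returns the ids of the n faces occupying the largest picture
    area, measured as the pixel area of the bounding rectangle."""
    ranked = sorted(boxes, key=lambda b: b[3] * b[4], reverse=True)
    return [b[0] for b in ranked[:n]]

faces = [("a", 0, 0, 50, 60), ("b", 90, 10, 120, 140), ("c", 200, 5, 80, 90)]
largest_two = top_n_faces(faces, 2)  # the two largest faces, "b" then "c"
```

Only these n candidates are passed on to the motion-detection step, which keeps the expensive per-face computation bounded.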
Specifically, the motion detection result is a value obtained by detecting the i-th face, selecting m feature points, and measuring the coordinate movement of those m feature points across different frames. If the sum of the coordinate movement values of the m feature points across frames is greater than zero, or greater than a preset threshold, motion is considered to have occurred. Judging face movement from several feature points in the image yields a more accurate motion result.
In some more specific embodiments, the motion detection result for the i-th face is the sum of the coordinate movement values of its m feature points between adjacent frames. That is, let the coordinate movement value of the m-th feature point of the i-th face between adjacent frames be α_{i,m}; then the motion detection result of the i-th face is A_i = Σ_m α_{i,m}.
In a further embodiment, to reflect the motion state over a longer period more accurately, a preset time, such as 2-4 seconds, is also designed into the motion detection. If the preset time contains, say, 50-100 frames, the motion detection result A_i of the i-th face is summed over the 50-100 adjacent frame pairs within the preset time to obtain the total coordinate movement ΣA_i.
In some further embodiments shown in fig. 1, the method further includes a step S102 of performing connected-domain detection below the plurality of faces to obtain the body image corresponding to each face image. This step establishes a correspondence between the face regions and the body regions. Of course, the body region below a face region may not be identifiable, which indicates that the body corresponding to that face may be occluded; in that case the corresponding body image is taken to occupy a picture area of 0. Step S103 then judges whether the body image corresponding to the recognition object occupies the largest portion of the picture among all body images; if so, verification succeeds. Steps S102 and S103 verify the recognition object selected in steps S100 and S101: if the body area of the recognition object is the largest among all body images, it is reliably confirmed that the selected face most likely belongs to the person standing at the front, because the person with the largest body image occupies the most of the camera's field of view. This verification greatly improves the accuracy of face recognition and the practicality of the scheme.
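The S102/S103 check can be sketched as below, assuming the body areas have already been measured in pixels and an occluded body counts as area 0; all names are illustrative:

```python
def verify_largest_body(body_areas, candidate):
    """body_areas: dict face_id -> pixel area of the connected body
    region below that face (0 when the body is occluded / not found).
    Verification succeeds only if the candidate's body area is the
    largest among all body images, per step S103."""
    return body_areas.get(candidate, 0) == max(body_areas.values())

areas = {"front": 5200, "behind": 1800, "occluded": 0}
ok = verify_largest_body(areas, "front")  # the frontmost person passes
```

A face whose body is occluded (area 0) can never pass, which matches the reasoning that the frontmost person covers the largest part of the frame.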
In some other embodiments, to verify the recognition object selected in steps S100 and S101, the following steps may instead be designed: S105, perform connected-domain detection below the plurality of faces, acquire the body image corresponding to each face image, and extract depth information from the body images;
S106, judge whether the body image corresponding to the recognition object has the nearest depth among all body images; if so, verification succeeds. Depth information here refers to the distance between an object in front of the camera module and the module itself; it can be obtained by conventional prior-art means, for example by using an RGBD depth camera based on time-of-flight or structured light, and is not described further here. By checking the depth of the body regions in steps S105 and S106, the scheme verifies whether the recognition object is the face of the person standing at the front, avoiding interference from situations such as a person further back in the queue leaning their head forward to operate the device. This verification greatly improves the accuracy of face recognition and the practicality of the scheme.
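Analogously, the S105/S106 depth check might be sketched as follows, assuming the distance D of each body region has already been computed (e.g. by the connected-domain expansion described later); names are illustrative:

```python
def verify_nearest_body(body_depths, candidate):
    """body_depths: dict face_id -> distance D of the corresponding
    body region from the device, e.g. from an RGBD camera.
    Verification succeeds only if the candidate's body is nearest."""
    return body_depths.get(candidate, float("inf")) == min(body_depths.values())

depths = {"front": 0.8, "leaning_head": 1.6}
ok = verify_nearest_body(depths, "front")  # nearest body passes
```

A person leaning in from further back has a large face in the frame but a distant body, so this check rejects them as the recognition object.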
In some specific embodiments, the depth calculation of the connected domain can be performed as follows: acquire the depth image of the portion below the face region, calculate an average depth z, and select K pixels below the face region for expansion. The expansion method is: set up a set C to be expanded and place the K pixels into C; for each pixel p in C, compute the difference between z and the depth of each of its five neighboring pixels (left, lower-left, lower, lower-right and right); if the difference is within a preset range of z, place that neighbor into C; once all five neighbors of p have been processed, delete p from C, recalculate the average depth z' over all pixels handled so far, and update z to z'. The calculation continues until a stop condition is reached: either the number of pixels already processed reaches MAX, or no pixel in C has a neighbor satisfying the condition. The resulting average depth is taken as the distance D between the body image and the device.
To better allocate computing power, the value of n must be chosen carefully: if n is too large, the number of faces to be processed grows and the probability of false detection rises; if n is too small, the face that should actually be recognized may not be covered, raising the probability of missed detection. The value of n is therefore determined from the average queuing spacing. Specifically, the camera photographs the queue, the number of people in it is recognized automatically, and the queue length is calculated from the captured images; the average spacing is then queue length / number of people. In this example, n is set to be positively correlated with the reciprocal of the average spacing (which can be regarded as the queuing density); the positive correlation coefficient can be chosen as required, and n is rounded up. For example, in one embodiment, the number of people in the queue X is 10, the queue length Y is 7.5 m and the positive correlation coefficient k is 3, giving n = k × X / Y = 4. Under this queuing density, selecting the first 4 faces occupying the largest portion of the picture satisfies the calculation requirement while saving computing power. With this scheme, the cameras can record the queue in real time and calculate its density, so the number of faces to detect can be determined more accurately. In other embodiments, besides this real-time adjustment, the average queuing density over a past period may be obtained from the average queue length and average number of people over that period. Computing the preferred value of n from the average queuing density of the last week, day or month also achieves the technical effect of optimizing the choice of n.
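The worked example for n can be reproduced directly; `choose_n` is an illustrative name, and rounding up uses `math.ceil` as the text specifies:

```python
import math

def choose_n(people_count, queue_length_m, k=3):
    """n is positively correlated with the queuing density
    (people / queue length, the reciprocal of the average spacing),
    scaled by coefficient k and rounded up."""
    density = people_count / queue_length_m
    return math.ceil(k * density)

n = choose_n(10, 7.5)  # matches the example: X=10, Y=7.5 m, k=3 -> n=4
```

The coefficient k is a free parameter; a denser queue yields a larger n, so more of the largest faces are carried into motion detection.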
In some embodiments shown in fig. 2, a portrait recognition method is further provided. The method includes a step S200 in which a first camera unit captures the number of people in the queue; the first camera unit is arranged above the crowd, and the angle between the projection of its central axis in the vertical plane and the horizontal plane is less than 45°, i.e. its horizontal field of view is larger than its vertical field of view, which is convenient for capturing the queue. In step S202, a second camera unit is arranged in front of the queue, shooting from slightly below head height; the angle between the projection of its central axis in the vertical plane and the horizontal plane is greater than 45°, i.e. its horizontal field of view is smaller than its vertical field of view, which is convenient for face recognition. The first camera unit photographs the queue, the number of people is recognized automatically, the queue length is calculated from the captured images, and the average spacing is obtained as queue length / number of people. The value of n is set to be positively correlated with the reciprocal of the average spacing (the queuing density). After step S202 is completed, step S101 is performed: the first n faces occupying the largest portion of the picture are selected from the plurality of faces, and motion detection is performed on these n faces to obtain a motion detection result. The design thus connects this method with the image verification method for multi-person scenes.
With this scheme, photographing the queue and detecting faces proceed simultaneously with a division of labour; using a dedicated camera for each task improves recognition accuracy and, ultimately, the execution quality of the scheme.
In the embodiment shown in fig. 3, the information acquisition process is further designed. This scheme is a self-service face recognition service method and further includes step S1: acquire a user ID and detect whether the user ID exists on the device's own server; if so, acquire the user profile associated with the user ID, otherwise create a new profile. S10 detects whether the user profile includes a user photo. If not, execution starts from step S100 until verification of the recognition object is completed, and the recognition object is stored as the user photo. If the user profile does include a user photo, step S100 acquires an input image from the image acquisition module and performs face detection on the faces in the input image; then step S1001 is preferentially performed: the first n faces occupying the largest portion of the picture are matched against the user photo, and if an i-th face matches successfully, that face can be set as the recognition object. With this scheme, an acquired user ID can be matched against the existing database: when no user photo exists, one can be acquired quickly, and when a user photo exists, the recognition object can be found by matching against it first, improving the flexibility of the scheme.
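The S1/S10/S1001 branching might be sketched as follows; the profile layout, the `match` callback and the stub standing in for steps S100-S103 are assumptions of the sketch, not the patent's implementation:

```python
def self_service_flow(user_id, server, detected_faces, match):
    """server: dict user_id -> profile dict that may contain a 'photo'.
    detected_faces: the largest faces from S100, largest first.
    match(face, photo) -> bool decides whether a face matches the photo."""
    profile = server.setdefault(user_id, {})  # S1: fetch or create profile
    photo = profile.get("photo")
    if photo is None:
        # S10: no photo on file -> run the full S100..S103 pipeline
        # (stubbed here) and store the verified recognition object
        recognized = detected_faces[0]  # placeholder for steps S100-S103
        profile["photo"] = recognized
        return recognized
    # Photo exists -> S1001: match the n largest faces against it first
    for face in detected_faces:
        if match(face, photo):
            return face
    return None  # no face matched the stored photo

srv = {"u1": {"photo": "face_B"}}
result = self_service_flow("u1", srv, ["face_A", "face_B"], lambda f, p: f == p)
```

Here the stored photo short-circuits the motion-detection pipeline: the matching face is returned directly, while an unknown user falls back to the full verification path and has a photo recorded.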
The present disclosure also introduces an image verification apparatus for multi-person scenes, shown in fig. 4, which can be used to perform the image recognition method above. The apparatus includes an image acquisition module 400, a face recognition module 402 and a motion detection module 404. The image acquisition module 400 is configured to capture input images, and the face recognition module 402 is configured to perform face detection on the faces in the input image; the motion detection module 404 is configured to select the first n faces occupying the largest portion of the picture from the plurality of faces and perform motion detection on these n faces to obtain a motion detection result; the face recognition module 402 is further configured to select the single face with the smallest motion detection result as the recognition object.
Specifically, the motion detection module 404 is configured to detect the i-th face, select m feature points, and obtain the coordinate movement values of those m feature points across different frames.
Specifically, the motion detection result is obtained by detecting the i-th face, selecting m feature points, taking the coordinate movement value α of each of the m feature points between adjacent frames, and summing these values over all adjacent frame pairs within a preset time to obtain the total coordinate movement Σα.
Further, the system further comprises a connected domain detection module 406 and a judgment and verification module 408, wherein the connected domain detection module is further configured to perform connected domain detection on the lower portions of the multiple faces to obtain body images corresponding to the face images. The judgment and verification module is further used for judging whether the picture occupied by the body image corresponding to the identification object is the largest of all the body images, and if so, the verification is successful.
In other further embodiments, the system further includes a connected domain detection module 406 and a judgment and verification module 408, where the connected domain detection module is further configured to perform connected domain detection below multiple faces, obtain a body image corresponding to the face image, and extract depth information from the body image; the judgment and verification module is further used for judging whether the picture occupied by the body image corresponding to the identification object is the picture with the nearest depth in all the body images, and if so, the verification is successful.
Specifically, the connected domain detection module is further configured to perform connected-domain detection below the plurality of faces. It acquires the depth image of the portion below a face region, calculates an average depth z, and selects K pixels below the face region for expansion. The expansion method is as follows: set up a set C to be expanded and place the K pixels into C; for each pixel p in C, compute the difference between z and the depth of each of its five neighboring pixels (left, lower-left, lower, lower-right and right); if the difference is within a preset range of z, place that neighbor into C; once all five neighbors of p have been processed, delete p from C, recalculate the average depth z' over all pixels handled so far, and update z to z'. The calculation continues until a stop condition is reached: either the number of pixels already processed reaches MAX, or no pixel in C has a neighbor satisfying the condition. The resulting average depth is taken as the distance D between the body image and the device.
Specifically, the value of n is determined from the average queuing spacing. The camera photographs the queue, the number of people in it is recognized automatically, and the queue length is calculated from the captured images; the average spacing is queue length / number of people. In this example, n is set to be positively correlated with the reciprocal of the average spacing (the queuing density); the positive correlation coefficient can be chosen as required, and n is rounded up. For example, in one embodiment, the number of people in the queue X is 10, the queue length Y is 7.5 m and the positive correlation coefficient k is 3, giving n = k × X / Y = 4. Under this queuing density, selecting the first 4 faces occupying the largest portion of the picture satisfies the calculation requirement while saving computing power. With this scheme, the cameras can record the queue in real time and calculate its density, so the number of faces to detect can be determined more accurately. In other embodiments, besides this real-time adjustment, the average queuing density over a past period may be obtained from the average queue length and average number of people over that period. Computing the preferred value of n from the average queuing density of the last week, day or month also achieves the technical effect of optimizing the choice of n.
Through this device design, interference with the face recognition system when multiple people are present can be avoided, improving the anti-interference capability of the scheme and further optimizing the accuracy of face recognition in multi-person situations.
In the embodiment shown in fig. 5, a portrait recognition apparatus for executing the portrait recognition method is shown. It includes an image acquisition module 400, an image analysis module 401, a face recognition module 402, a motion detection module 404 and a user data module 405. The image acquisition module includes a first camera unit and a second camera unit. The first camera unit captures the number of people in the queue; it is arranged above the crowd, and the angle between the projection of its central axis in the vertical plane and the horizontal plane is less than 45°. The second camera unit is arranged in front of the queue, and the angle between the projection of its central axis in the vertical plane and the horizontal plane is greater than 45°. The face recognition module performs face detection on the faces in the input image of the second camera unit. The image analysis module analyses the video images of the first camera unit to obtain the average queuing density and derives the value of n from the positive correlation between n and the average queuing density. The motion detection module selects the first n faces occupying the largest portion of the picture from the plurality of faces and performs motion detection on these n faces to obtain a motion detection result; the face recognition module further selects the single face with the smallest motion detection result as the recognition object. The user data module acquires user identity information and matches the recognition object against the user photo in the database associated with that identity information.
Through this design, the device divides the image-capture work between the first camera unit and the second camera unit, and by first selecting the top n faces it improves the accuracy and anti-interference capability of face recognition in multi-person situations.
Specifically, the motion detection module is configured to detect the i-th face, select m feature points, and obtain the coordinate movement values of the m feature points of the i-th face across different frames.
Further, the motion detection result is obtained as follows: for the detected i-th face, m feature points are selected, and the coordinate movement values a of the m feature points of the i-th face between adjacent frames are summed over all adjacent frame pairs within a preset time to give the total value Σa.
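The motion-score computation described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the feature-point tracker is assumed to exist elsewhere, and the function and parameter names are hypothetical.

```python
import numpy as np

def motion_score(tracks: np.ndarray) -> float:
    """Total coordinate movement Σa of m feature points of one face.

    tracks: array of shape (frames, m, 2) holding the (x, y) coordinates of
    the m feature points over a preset time window.
    Sums the per-point movement a between every pair of adjacent frames.
    """
    # Euclidean movement of each feature point between adjacent frames.
    deltas = np.linalg.norm(np.diff(tracks, axis=0), axis=2)  # shape (frames-1, m)
    return float(deltas.sum())  # total Σa

def most_static_face(all_tracks: list) -> int:
    """Index of the face with the smallest motion detection result,
    i.e. the face chosen as the recognition object."""
    return min(range(len(all_tracks)), key=lambda i: motion_score(all_tracks[i]))
```

A face standing still yields a score of 0, while a swaying bystander accumulates movement across frames, so the queue-front user naturally wins the `most_static_face` selection.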
In a further embodiment, the system further includes a connected domain detection module 406 and a judgment and verification module 408. The connected domain detection module is further configured to perform connected domain detection below the multiple faces to obtain the body image corresponding to each face image; the judgment and verification module is further configured to judge whether the body image corresponding to the recognition object occupies the largest area of the picture among all the body images, and if so, the verification succeeds.
In particular, the system also comprises a connected domain detection module 406 and a judgment and verification module 408;
the connected domain detection module is further used for performing connected domain detection below the multiple faces, acquiring the body images corresponding to the face images, and extracting depth information of the body images;
the judgment and verification module is further used for judging whether the body image corresponding to the recognition object has the nearest depth among all the body images, and if so, the verification succeeds.
In a further embodiment, the connected component detection module is further configured to perform connected component detection under multiple faces, and specifically, is configured to obtain a depth image of a portion under a face region, calculate an average depth z, and select K pixels under the face region to expand, where the expansion method includes setting a set C to be expanded, placing K pixels in the set C, calculating a difference between depths of five pixels, i.e., left, lower, right, and right, of each pixel p in the set C, and if the difference is smaller than z ± preset range, placing the pixel in the set C, deleting p from the set C after the calculation of the pixel p corresponding to the five pixels is completed, and simultaneously calculating an average depth d 'of all pixels in the set C, and updating d to d', so calculating until a stop condition is reached. The stopping condition is that the number of pixels already calculated reaches MAX, or no pixel in C satisfies the condition and can be expanded, and the obtained D is the distance D between the body image and the device.
Specifically, the value of n is determined from the average queuing density over a past period, such as the previous week, day, or month.
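Since n is positively correlated with the average queuing density, any non-decreasing mapping preserves the stated principle. The scaling constant and clamping bounds below are illustrative assumptions, not values from the patent.

```python
def faces_to_consider(avg_queue_density: float, scale: float = 2.0,
                      n_min: int = 1, n_max: int = 10) -> int:
    """Map the average queuing density (e.g. people per metre of queue)
    to the number n of largest faces passed on to motion detection.
    Monotone in the density, clamped to a practical range."""
    n = int(round(scale * avg_queue_density))
    return max(n_min, min(n_max, n))
```

A denser queue yields a larger n, so more candidate faces are screened by motion detection when crowding makes misidentification more likely.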
In other embodiments, shown in fig. 6, a self-service face recognition service device is further provided, which is used for operating the self-service face recognition service method shown in fig. 3 and includes a user information detection module 407, an image acquisition module 400, a face recognition module 402, a motion detection module 404, and a user data module 405. The user information detection module is used for acquiring a user ID, detecting whether the user ID exists in the device's own server, acquiring the user profile related to the user ID, detecting whether the user profile includes a user photo, and enabling the image acquisition module when no user photo is found. The image acquisition module is used for inputting images, and the face recognition module is used for performing face detection on the faces in the input images. The motion detection module is used for selecting, from the multiple faces, the n faces occupying the largest areas of the picture and performing motion detection on them to obtain a motion detection result. The face recognition module is further used for selecting the single face with the smallest motion detection result as the recognition object. The user data module is further used for acquiring user identity information and matching the user photo in the database belonging to that identity information against the recognition object. Through this design, the scheme can rapidly match the user ID to the user information via the user data module, and by selecting the top n faces and picking out, through motion detection, the face with the smallest motion result, it improves the accuracy and anti-interference capability of face recognition in multi-person situations.
Further, the motion detection module is specifically configured to detect the i-th face, select m feature points, and obtain the coordinate movement values of the m feature points of the i-th face across different frames.
The motion detection result is specifically obtained by detecting the i-th face, selecting m feature points, and summing the coordinate movement values a of the m feature points of the i-th face between adjacent frames over all adjacent frame pairs within a preset time to give the total value Σa.
In particular, the system also comprises a connected domain detection module 406 and a judgment and verification module 408;
the connected domain detection module 406 is further configured to perform connected domain detection below the multiple faces to obtain the body images corresponding to the face images;
the judgment and verification module 408 is further configured to judge whether the body image corresponding to the recognition object occupies the largest area of the picture among all the body images, and if so, the verification succeeds.
In particular, the device also comprises a connected domain detection module and a judgment and verification module;
the connected domain detection module is further used for performing connected domain detection below the multiple faces, acquiring the body images corresponding to the face images, and extracting depth information of the body images;
the judgment and verification module is further used for judging whether the body image corresponding to the recognition object has the nearest depth among all the body images, and if so, the verification succeeds.
Specifically, the connected domain detection module is further configured to perform connected domain detection below multiple faces, and in particular to obtain the depth image of the portion below a face region, calculate the average depth z of that portion and initialise a running average d = z, and select K pixels directly below the face region as seeds for expansion. The expansion proceeds as follows: a set C of pixels to be expanded is created and the K seed pixels are placed into it; for each pixel p in C, the depths of its five neighbouring pixels (left, lower-left, lower, lower-right and right) are compared with d, and any neighbour whose depth lies within d ± a preset range is placed into C; once the five neighbours of p have been processed, p is deleted from C; meanwhile the average depth d' of all pixels accepted so far is calculated and d is updated to d'. The calculation continues until a stop condition is reached: either the number of pixels already processed reaches MAX, or no pixel in C has a neighbour satisfying the condition. The final d is taken as the distance D between the body image and the device.
Further, the value of n is determined based on the average queue spacing.
Specifically, the user information detection module 407 is configured to obtain a user ID by reading medical insurance card information.
Further, the face recognition module 402 is further configured to perform matching against the user photo when the user profile includes one; if a successfully matched face exists, that face may be set as the recognition object. With this scheme, the user ID, once acquired, can be matched against the existing database: when no user photo exists, a photo can be quickly acquired, and when a user photo exists, the recognition object can be found through preferential matching, which improves the flexibility of the scheme.
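The overall self-service flow just described can be sketched as the decision routine below. This is a hedged illustration only: `server`, `recognize` and `capture` are hypothetical stand-ins for the user information detection, face recognition and image acquisition modules, and the return strings are not from the patent.

```python
def serve_user(user_id: str, server: dict, recognize, capture) -> str:
    """Sketch of the flow: look up the user profile by ID; if it already
    holds a photo, match preferentially against it; otherwise enable image
    acquisition to enrol one."""
    profile = server.get(user_id)
    if profile is None:
        return "unknown user ID"          # user ID not in the device's own server
    photo = profile.get("photo")
    if photo is not None:
        # Preferential matching: a successfully matched face becomes
        # the recognition object.
        return "matched" if recognize(photo) else "no match"
    # No photo on file: enable the image acquisition module and enrol.
    profile["photo"] = capture()
    return "photo enrolled"
```

In a deployment, `user_id` would come from reading the medical insurance card, `recognize` would wrap the motion-filtered face recognition described above, and `capture` would drive the camera units.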
It should be noted that, although the above embodiments have been described herein, the invention is not limited thereto. Based on the innovative concepts of the present invention, changes and modifications to the embodiments described herein, as well as equivalent structures or equivalent processes derived from the content of this specification and the attached drawings, whether applied directly or indirectly to other related technical fields, all fall within the scope of protection of the present invention.
Claims (9)
1. A self-service face recognition service device, characterized by comprising a user information detection module, an image acquisition module, a face recognition module, a motion detection module and a user data module, wherein the user information detection module is used for acquiring a user ID, detecting whether the user ID exists in the device's own server, acquiring a user profile related to the user ID, detecting whether the user profile includes a user photo, and enabling the image acquisition module when no user photo is detected; the image acquisition module is used for inputting images, and the face recognition module is used for performing face detection on the faces in the input images;
the motion detection module is used for selecting, from the multiple faces, the n faces occupying the largest areas of the picture and performing motion detection on them to obtain a motion detection result;
the face recognition module is further used for selecting the single face with the smallest motion detection result as the recognition object;
the user data module is further used for acquiring user identity information and matching the user photo in the database belonging to that identity information against the recognition object.
2. The self-service face recognition service device according to claim 1, wherein the motion detection module is specifically configured to detect the i-th face, select m feature points, and obtain the coordinate movement values of the m feature points of the i-th face across different frames.
3. The self-service face recognition service device according to claim 2, wherein the motion detection result is obtained by detecting the i-th face, selecting m feature points, and summing the coordinate movement values a of the m feature points of the i-th face between adjacent frames over all adjacent frame pairs to give the total value Σa.
4. The self-service face recognition service device according to claim 1, further comprising a connected domain detection module, a judgment and verification module,
the connected domain detection module is further used for performing connected domain detection below the multiple faces to acquire the body images corresponding to the face images;
the judgment and verification module is further used for judging whether the body image corresponding to the recognition object occupies the largest area of the picture among all the body images, and if so, the verification succeeds.
5. The self-service face recognition service device according to claim 1, further comprising a connected domain detection module, a judgment and verification module,
the connected domain detection module is further used for performing connected domain detection below the multiple faces, acquiring the body images corresponding to the face images, and extracting depth information of the body images;
the judgment and verification module is further used for judging whether the body image corresponding to the recognition object has the nearest depth among all the body images, and if so, the verification succeeds.
6. The self-service face recognition service device according to one of claims 4 or 5,
the connected domain detection module is also used for detecting connected domains below a plurality of human faces, and is particularly used for obtaining a depth image of the lower part of a human face area, calculating the average depth z, and selecting K pixel points to expand right below the human face area. The stopping condition is that the number of pixels already calculated reaches MAX, or no pixel in C satisfies the condition and can be expanded, and the obtained D is the distance D between the body image and the device.
7. The self-service face recognition service device of claim 1, wherein the value of n is determined according to an average queue spacing.
8. The self-service face recognition service device of claim 1, wherein the user information detection module is configured to obtain a user ID by reading information of a medical insurance card.
9. The self-service face recognition service device of claim 1, wherein the face recognition module is further configured to perform matching against the user photo when the user profile includes one, and if a successfully matched face exists, that face may be set as the recognition object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010880942.9A CN111967425B (en) | 2020-08-27 | 2020-08-27 | Self-service face recognition service device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111967425A true CN111967425A (en) | 2020-11-20 |
CN111967425B CN111967425B (en) | 2024-04-26 |
Family
ID=73400860
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010880942.9A Active CN111967425B (en) | 2020-08-27 | 2020-08-27 | Self-service face recognition service device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111967425B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102542254A (en) * | 2010-12-01 | 2012-07-04 | 佳能株式会社 | Image processing apparatus and image processing method |
JP2015088095A (en) * | 2013-11-01 | 2015-05-07 | 株式会社ソニー・コンピュータエンタテインメント | Information processor and information processing method |
CN109447597A (en) * | 2018-12-27 | 2019-03-08 | 深圳市沃特沃德股份有限公司 | More people carry out the method, apparatus and face identification system of attendance jointly |
CN208834377U (en) * | 2018-09-14 | 2019-05-07 | 四川思杰聚典智能科技有限公司 | A kind of self-service queue machine and system with face identification functions |
CN110032966A (en) * | 2019-04-10 | 2019-07-19 | 湖南华杰智通电子科技有限公司 | Human body proximity test method, intelligent Service method and device for intelligent Service |
CN110619300A (en) * | 2019-09-14 | 2019-12-27 | 韶关市启之信息技术有限公司 | Correction method for simultaneous recognition of multiple faces |
Also Published As
Publication number | Publication date |
---|---|
CN111967425B (en) | 2024-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10990191B2 (en) | Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data | |
CN112001334A (en) | Portrait recognition device | |
US8731249B2 (en) | Face recognition using face tracker classifier data | |
JP6494253B2 (en) | Object detection apparatus, object detection method, image recognition apparatus, and computer program | |
JP4642128B2 (en) | Image processing method, image processing apparatus and system | |
JP6754642B2 (en) | Biodetector | |
CN109858375B (en) | Living body face detection method, terminal and computer readable storage medium | |
KR20160106514A (en) | Method and apparatus for detecting object in moving image and storage medium storing program thereof | |
WO2005116910A2 (en) | Image comparison | |
JPWO2008035411A1 (en) | Mobile object information detection apparatus, mobile object information detection method, and mobile object information detection program | |
JP2004301607A (en) | Moving object detection device, moving object detection method, and moving object detection program | |
CN111967422A (en) | Self-service face recognition service method | |
CN111985424A (en) | Image verification method under multi-person scene | |
KR20140134549A (en) | Apparatus and Method for extracting peak image in continuously photographed image | |
CN111967425A (en) | Self-service face recognition service device | |
CN111985425A (en) | Image verification device under multi-person scene | |
CN112001340A (en) | Portrait identification method | |
CN115019364A (en) | Identity authentication method and device based on face recognition, electronic equipment and medium | |
CN113469135A (en) | Method and device for determining object identity information, storage medium and electronic device | |
US11335123B2 (en) | Live facial recognition system and method | |
CN113850165B (en) | Face recognition method and device | |
JP2002170096A (en) | Passing object count device and count method | |
CN111046788A (en) | Method, device and system for detecting staying personnel | |
CN114724186A (en) | Image sampling and classifying method, system and device | |
CN114463820A (en) | Face positioning method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||