CN115937950A - Multi-angle face data acquisition method, device, equipment and storage medium - Google Patents

Multi-angle face data acquisition method, device, equipment and storage medium

Info

Publication number
CN115937950A
Authority
CN
China
Prior art keywords
face
orientation
characteristic value
image
side face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211624294.6A
Other languages
Chinese (zh)
Inventor
刘鸣
蔡文静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202211624294.6A priority Critical patent/CN115937950A/en
Publication of CN115937950A publication Critical patent/CN115937950A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a multi-angle face data acquisition method, device, equipment and storage medium, relating to the technical field of image processing, and in particular to the technical fields of computer vision, artificial intelligence, automatic driving, data acquisition, face recognition, etc. A specific implementation scheme is as follows: acquire a current image collected by a camera, and determine a first face orientation of the face in the current image; when the first face orientation is the front face orientation, acquire the face characteristic value of the face in the current image to obtain a front face characteristic value; after the front face characteristic value is obtained, continue acquiring images collected by the camera and determining a second face orientation of the face in each currently acquired image, until the face characteristic value has been acquired from an image whose second face orientation is a side face orientation in at least one direction, so as to obtain at least one side face characteristic value; and determine and store a target face characteristic value based on the front face characteristic value and the at least one side face characteristic value, thereby realizing multi-angle face data acquisition.

Description

Multi-angle face data acquisition method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to the technical fields of computer vision, artificial intelligence, automatic driving, data acquisition, and face recognition, and in particular, to a method, an apparatus, a device, and a storage medium for multi-angle face data acquisition.
Background
With the development of computer image processing technology, face recognition technology is widely applied in various fields. In a face recognition scene, the collected face is compared and analyzed against a face registered in advance, so that the collected face is recognized. Face registration means extracting characteristic values from the face information in a face picture collected by a camera, and establishing a connection between the extracted characteristic values and the information of the person whose face picture was collected.
Disclosure of Invention
The disclosure provides a multi-angle face data acquisition method, a device, equipment and a storage medium.
According to an aspect of the present disclosure, a multi-angle face data acquisition method is provided, including:
acquiring a current image acquired by a camera, and determining a first face orientation corresponding to a pose angle of a face in the current image;
under the condition that the first face orientation is a front face orientation, acquiring a face characteristic value of a face in the current image to obtain a front face characteristic value;
under the condition that the front face characteristic value is obtained, continuing to obtain images collected by the camera, and determining second face orientations corresponding to the attitude angles of the faces in the currently obtained images, until the determined second face orientations include side face orientations in at least one direction, and obtaining the face characteristic values of the faces in the images corresponding to the side face orientations in the at least one direction, so as to obtain at least one side face characteristic value;
and determining the target face characteristic value based on the front face characteristic value and the at least one side face characteristic value, and storing the target face characteristic value.
According to another aspect of the present disclosure, there is provided a multi-angle face data acquisition apparatus, including:
the first image acquisition module is used for acquiring a current image acquired by a camera and determining a first face orientation corresponding to a face attitude angle in the current image;
a front face feature value obtaining module, configured to obtain a face feature value of a face in the current image to obtain a front face feature value when the first face orientation is the front face orientation;
a side face feature value obtaining module, configured to, under the condition that the front face feature value is obtained, continue to obtain images acquired by the camera and determine second face orientations corresponding to the pose angles of the faces in the currently obtained images, until the determined second face orientations include a side face orientation in at least one direction, and obtain the face feature value of the face in the image corresponding to the side face orientation in the at least one direction, so as to obtain at least one side face feature value;
and the target characteristic value determining module is used for determining the target face characteristic value based on the front face characteristic value and the at least one side face characteristic value and storing the target face characteristic value.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of any one of the present disclosure.
The embodiment of the disclosure realizes multi-angle face data acquisition.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of a multi-angle face data acquisition method according to the present disclosure;
FIG. 2 is a schematic illustration of face-oriented region partitioning according to the present disclosure;
FIG. 3 is a schematic diagram of a side face feature value acquisition process according to the present disclosure;
FIG. 4 is a schematic diagram of a multi-angle face registration method according to the present disclosure;
FIG. 5 is a schematic diagram of a multi-angle face data acquisition device according to the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing a multi-angle face data acquisition method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the development of computer image processing technology, face recognition technology is widely applied in various fields. When face recognition is performed, a face image of the user is collected through a camera, the face information in the image is detected to obtain the face characteristic value of the user, and the face characteristic value is compared and analyzed with the face characteristic value registered in advance in the face recognition system, so as to recognize the user's face. In order to ensure the accuracy of face recognition, in a related technical scheme, during face registration the user is guided through a human-computer interaction interface to face the camera as directly as possible; under the condition that the face is displayed clearly and completely, a face image is collected, the face characteristic value in the face image is extracted, and the extracted face characteristic value is linked with the user and stored in the face recognition system, completing face registration. During face recognition, the user is likewise required to face the camera as directly as possible so that the front face characteristic value can be collected, and the collected front face characteristic value is compared and analyzed with the registered face characteristic value to recognize the face.
Because the face characteristic value collected during face registration corresponds to a face directly facing the camera, during face recognition the user needs to keep the face angle close to the angle at which the characteristic value was collected during registration; only then can a face characteristic value with high similarity to the registered one be obtained, so that the recognition result is determined accurately.
However, in the above technical solution, if the quality of the face registration image is poor or the user's face is not directly facing the camera during registration, the face characteristic value stored in the face recognition system is inaccurate, that is, it cannot accurately describe the feature information of the user's face, so the accuracy of subsequent face recognition is significantly reduced. For example, if user A's face is not directly facing the camera during registration, the collected face characteristic value cannot accurately describe the feature information of user A's face; when the face recognition system later performs face recognition on user A, the registered face characteristic value cannot be matched, so user A may be repeatedly prompted to face the camera, change the face angle and retry, or may even be misrecognized as user B. With this technical scheme, the face angle during recognition must be as close as possible to the face angle during registration, and the image quality during recognition must closely reproduce the image quality during registration; face recognition is therefore easily affected by factors such as the degree of the user's cooperation and the quality of the images captured by the camera, and the recognition effect is unstable.
In order to solve the above problem, the present disclosure provides a multi-angle face data acquisition method, including:
acquiring a current image acquired by a camera, and determining a first face orientation corresponding to a pose angle of a face in the current image;
under the condition that the first face orientation is a front face orientation, acquiring a face characteristic value of the face in the current image to obtain a front face characteristic value;
under the condition that the front face characteristic value is obtained, continuing to obtain images collected by the camera, and determining second face orientations corresponding to the pose angles of the faces in the currently obtained images, until the determined second face orientations include side face orientations in at least one direction, and obtaining the face characteristic values of the faces in the images whose second face orientation is a side face orientation in the at least one direction, so as to obtain at least one side face characteristic value;
and determining the target face characteristic value based on the front face characteristic value and the at least one side face characteristic value, and storing the target face characteristic value.
In the embodiment of the disclosure, under the condition that the first face orientation corresponding to the collected current image is the front face orientation, the face characteristic value of the face in the current image is collected, so that the front face characteristic value is obtained first. Under the condition that the front face characteristic value is obtained, the face characteristic values corresponding to side face orientations in at least one direction among the second face orientations in subsequently collected images are further obtained, yielding at least one side face characteristic value and expanding the angle range of face data acquisition. A target face characteristic value fusing the front face characteristic value and the at least one side face characteristic value is then obtained based on them, thereby realizing multi-angle face data acquisition and improving the completeness of the collected face data.
The multi-angle face data acquisition method provided by the embodiment of the disclosure can be applied to scenes such as face registration, face data acquisition and face recognition, including face registration and face recognition with a single camera at a fixed position, and can also be applied to products such as an image delivery baseline software development kit of an adaptive software development project delivery center. The method can be applied to electronic equipment, such as server equipment and intelligent terminal equipment.
The following describes the multi-angle face data acquisition method provided by the embodiment of the present disclosure in detail.
Referring to fig. 1, a multi-angle face data acquisition method provided by the embodiment of the present disclosure includes the following steps:
and S110, acquiring a current image acquired by the camera, and determining the first face orientation corresponding to the attitude angle of the face in the current image.
In one example, the current image can be obtained by a single camera at a fixed position, such as a vehicle-mounted camera, or any other camera in an application scenario requiring face data collection, capturing the content within the camera's coverage.
After the current image is obtained, a face detection model can be used to detect the face contained in the current image, its attitude angle, and so on. Different face attitude angles correspond to different face orientations, so after the attitude angle of the face is determined, the corresponding face orientation can be determined according to the relationship between face attitude angles and face orientations.
In a possible implementation manner, the determining the first face orientation corresponding to the pose angle of the face in the current image may include the following steps:
step 1: and carrying out face detection on the current image to obtain face information in the current image, wherein the face information comprises a target attitude angle of a face.
And 2, step: and determining a first face orientation corresponding to the target attitude angle based on the corresponding relation between the preset attitude angle and the face orientation, wherein the face orientation comprises a front face orientation and a plurality of side face orientations.
In one example, the face detection model may be used to perform face detection on the current image to obtain a target pose angle of the face in the current image. The face detection model can be obtained by training according to the sample image and the pose angle of the face in the sample image.
In the embodiment of the present disclosure, because different cameras correspond to different image acquisition devices, the correspondence between attitude angles and face orientations can be preset separately for different cameras; then, once the target attitude angle of the face in the current image is detected, the correspondence can be queried to determine the first face orientation corresponding to the target attitude angle.
In the embodiment of the disclosure, the target attitude angle of the face in the current image is obtained by performing face detection on the current image, and then the first face orientation corresponding to the target attitude angle is accurately determined according to the corresponding relationship between the preset attitude angle and the face orientation, so that the face characteristic values under different face orientations can be accurately acquired.
In one possible embodiment, the attitude angle may include a pitch angle and a yaw angle, and the preset correspondence between attitude angles and face orientations includes mapping relationships between each preset pitch angle threshold interval and yaw angle threshold interval and each face orientation.
In the embodiment of the disclosure, an image acquired by a camera is divided into a plurality of face orientation regions according to a pitch/yaw coordinate system, and different face orientations correspond to different pitch threshold intervals and yaw threshold intervals.
In one example, because the trajectory of a face rotating in different directions approximates an ellipse as a biological feature, and the imaging principles and parameters of different cameras differ, the region corresponding to the front face orientation (for example, the region enclosed by an elliptical trajectory) is first determined on the pitch/yaw coordinate system according to the imaging conditions of the camera and the biological characteristics of face deflection. Then, according to the imaging conditions at the edges of the camera and empirical values (the empirical values may be obtained by collecting a certain number of test images with the camera and then manually partitioning them into regions), the pitch angle threshold interval and yaw angle threshold interval corresponding to each face orientation are determined, yielding a plurality of face orientation regions on the pitch/yaw coordinate system. The face orientation regions include one front face orientation region and a plurality of side face orientation regions, and each face orientation region corresponds to one face orientation.
For example, as shown in fig. 2, the trajectory of a face deflecting in all directions is approximately an ellipse, and the elliptical trajectory parameters can be determined according to the type of the camera, the position of the camera, and the distance between the camera and the face; the front face orientation region lies inside the ellipse. After the elliptical trajectory parameters are determined, the coordinates of two points A and B on the ellipse can be determined according to the imaging conditions at the edges of the camera and empirical values, and the imaging area of the camera is then divided into 9 face orientation regions according to the coordinates of points A and B and the preset extreme values of the face deflection angles, Max yaw, Min yaw, Max pitch and Min pitch. Fig. 2 shows the division of the face orientation regions once the coordinates of points A and B are determined in the pitch/yaw coordinate system: the middle elliptical region represents the front face orientation region, and the upper, lower, left, right, upper-left, upper-right, lower-left and lower-right regions correspond to 8 side face orientation regions in different directions.
Specifically, once the coordinates of points A and B are determined, their symmetric points in the pitch/yaw directions are determined on the ellipse: for point A (y1, p1), the symmetric points (y1, -p1), (-y1, p1) and (-y1, -p1); for point B (y2, p2), the symmetric points (y2, -p2), (-y2, p2) and (-y2, -p2). The interior of the ellipse is determined as the front face orientation region; the upper face orientation region is determined according to the coordinates (y2, p2) of point B, the symmetric point (-y2, p2) and the preset Max pitch; the lower face orientation region is determined according to the symmetric points (y2, -p2), (-y2, -p2) and the preset Min pitch; the left face orientation region is determined according to the coordinates (y1, p1) of point A, the symmetric point (y1, -p1) and the preset Max yaw; the right face orientation region is determined according to the symmetric points (-y1, p1), (-y1, -p1) and the preset Min yaw; and the upper-left, upper-right, lower-left and lower-right face orientation regions are determined according to the remaining symmetric points together with Max yaw, Min yaw, Max pitch and Min pitch. That is, the pitch angle threshold interval and yaw angle threshold interval corresponding to each face orientation are determined according to the coordinates of points A and B and the values of Max yaw, Min yaw, Max pitch and Min pitch.
In one example, the determination range of each face orientation region can be changed by adjusting the coordinates of points A and B.
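The nine-region partition above can be sketched in code. This is an illustrative approximation, not the disclosure's implementation: the function name, the default coordinates of A = (y1, p1) and B = (y2, p2), and the sign convention (positive yaw mapped to the left face orientation, following the worked example in this section) are all assumptions, and the Max/Min yaw and pitch outer bounds are omitted for brevity.

```python
def classify_orientation(yaw, pitch, y1=20.0, p1=10.0, y2=10.0, p2=18.0):
    """Map a (yaw, pitch) attitude angle to one of nine face orientations.

    The front face region is the interior of the ellipse passing through
    A = (y1, p1) and B = (y2, p2); the eight side regions cover the rest
    of the plane, as in the partition of Fig. 2.
    """
    # Fit the ellipse yaw^2/a^2 + pitch^2/b^2 = 1 through points A and B
    # (requires y1 > y2 and p2 > p1 so that both semi-axes are real).
    denom = y1**2 * p2**2 - y2**2 * p1**2
    u = (p2**2 - p1**2) / denom          # u = 1 / a^2
    v = (y1**2 - y2**2) / denom          # v = 1 / b^2
    if u * yaw**2 + v * pitch**2 <= 1.0:
        return "front"                   # inside the ellipse
    if -p1 < pitch < p1:                 # horizontal band: left / right
        return "left" if yaw > 0 else "right"
    if -y2 < yaw < y2:                   # vertical band: up / down
        return "up" if pitch > 0 else "down"
    vertical = "upper" if pitch > 0 else "lower"
    horizontal = "left" if yaw > 0 else "right"
    return f"{vertical}-{horizontal}"    # one of the four corner regions
```

For instance, a pose with yaw a = 25 and pitch b = 0 satisfies a > y1 and -p1 < b < p1, and this sketch classifies it as the left face orientation.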
Correspondingly, the step 2 of determining the first face orientation corresponding to the target attitude angle based on the corresponding relationship between the preset attitude angle and the face orientation includes:
and respectively matching the target pitch angle and the target yaw angle in the target attitude angle with each preset pitch angle threshold interval and yaw angle threshold interval, and determining the first face orientation corresponding to the target attitude angle.
For example, referring to fig. 2, the target pitch angle and target yaw angle are mapped to a coordinate point on the pitch/yaw coordinate system, and the coordinate point is matched against each preset pitch angle threshold interval and yaw angle threshold interval to obtain the first face orientation corresponding to the target attitude angle. Assuming the coordinates of the target yaw angle and target pitch angle in the pitch/yaw coordinate system are (a, b), and matching against the pitch angle threshold interval and yaw angle threshold interval of each face orientation region yields a > y1 and -p1 < b < p1, then the first face orientation corresponding to the target attitude angle can be determined to be the left face orientation.
By respectively matching the target pitch angle and the target yaw angle in the target attitude angle with the pitch angle threshold interval and the yaw angle threshold interval corresponding to the preset human face orientation area, the human face orientation area corresponding to the target attitude angle can be more accurately determined.
Referring to fig. 1, in S120, when the first face orientation is the front face orientation, the face feature value of the face in the current image is obtained, so as to obtain the front face feature value.
In one example, the method includes performing face detection on a current image acquired by a camera, determining a first face orientation corresponding to a pose angle of a face in the current image, and preferentially acquiring a face characteristic value of the face in the current image to obtain a front face characteristic value under the condition that the first face orientation is a front face orientation.
For example, after determining the first face orientation corresponding to the attitude angle of the face in the current image, it may be determined whether the first face orientation is a front face orientation, and under the condition that the first face orientation is the front face orientation, the face feature value of the face in the current image is obtained to obtain the front face feature value, and the front face feature value is cached.
S130: under the condition that the front face characteristic value is obtained, continue to obtain images collected by the camera and determine the second face orientations corresponding to the attitude angles of the faces in the currently obtained images, until the determined second face orientations include a side face orientation in at least one direction; obtain the face characteristic value of the face in the image corresponding to the side face orientation in the at least one direction, so as to obtain at least one side face characteristic value.
Under the condition that the front face characteristic value has been obtained, images collected by the camera continue to be obtained, and face detection is performed on each currently obtained image to determine the second face orientation corresponding to the attitude angle of the face in it, until a second face orientation matching a side face orientation in at least one direction is obtained; the face characteristic value of the image corresponding to that second face orientation is then collected, yielding at least one side face characteristic value.
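The front-face-first acquisition loop of S110 through S130 can be sketched as follows. The callables `get_frame`, `detect_orientation` and `extract_features`, and the set of required side orientations, are hypothetical stand-ins for the camera, the face detection model and the feature extractor, not names from the disclosure:

```python
def collect_multi_angle_features(get_frame, detect_orientation, extract_features,
                                 required_sides=("left", "right", "up", "down")):
    """Collect the front face feature value first, then keep sampling frames
    until a side face feature value exists for every required orientation."""
    features = {}
    # Phase 1: wait for a frame whose face orientation is frontal.
    while "front" not in features:
        frame = get_frame()
        if detect_orientation(frame) == "front":
            features["front"] = extract_features(frame)
    # Phase 2: keep sampling until every required side orientation is covered.
    missing = set(required_sides)
    while missing:
        frame = get_frame()
        orientation = detect_orientation(frame)
        if orientation in missing:
            features[orientation] = extract_features(frame)
            missing.discard(orientation)
    return features
```

In practice the two loops would be driven by the human-computer interaction interface, which prompts the user to face the camera and then to turn the head toward each still-missing orientation.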
S140: determine a target face characteristic value based on the front face characteristic value and the at least one side face characteristic value, and store the target face characteristic value.
In one example, the front face feature value and the at least one side face feature value may be fused, for example by computing their sum, concatenation or average, to obtain the target face feature value, and the target face feature value is stored.
In one possible implementation, a weighted average of the front face feature value and the at least one side face feature value may be determined as a target face feature value, and the target face feature value may be saved.
For example, after one front face feature value and four side face feature values are obtained, they may be weighted and averaged with a weight of 0.4 for the front face feature value and 0.15 for each side face feature value, so as to obtain the target face feature value. The weights of the front face feature value and the side face feature values may be set according to actual needs, which is not limited in this disclosure.
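Using the example weights above (0.4 for the front face value, 0.15 for each of four side face values), the weighted-average fusion can be sketched as follows; modeling feature values as plain Python lists of floats, and the vector length, are illustrative choices:

```python
def fuse_feature_values(front, sides, front_weight=0.4):
    """Weighted average of one front face feature value and N side face
    feature values; the remaining weight is split evenly across the sides."""
    side_weight = (1.0 - front_weight) / len(sides)   # 0.15 for 4 sides
    return [front_weight * f + side_weight * sum(s[i] for s in sides)
            for i, f in enumerate(front)]

front = [1.0, 0.0, 0.0, 1.0]                # cached front face feature value
sides = [[0.0, 1.0, 0.0, 1.0]] * 4          # four side face feature values
target = fuse_feature_values(front, sides)  # ≈ [0.4, 0.6, 0.0, 1.0]
```

Deriving the side weight from the front weight keeps the weights summing to 1, so the fused vector stays on the same scale as the inputs.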
By fusing the front face characteristic value with the at least one side face characteristic value, the face characteristic values under a plurality of different face orientations can be merged and stored together, reducing the occupation of storage space. The fused target face characteristic value describes face feature information over a larger face orientation region, while also reducing the impact on the face recognition effect when the quality of a single image is low during face registration.
In the embodiment of the disclosure, under the condition that the first face orientation corresponding to the collected current image is the front face orientation, the face characteristic value of the face in the current image is collected so that the front face characteristic value is obtained first. Under the condition that the front face characteristic value is obtained, the face characteristic values corresponding to side face orientations in at least one direction among the second face orientations in subsequently collected images are further obtained, yielding at least one side face characteristic value and expanding the angle range of face data acquisition. A target face characteristic value fusing the front face characteristic value and the at least one side face characteristic value is then obtained, thereby realizing multi-angle face data acquisition and improving the completeness of the collected face data. Moreover, because multi-angle face data is collected in the process of acquiring face characteristic values, the obtained target face characteristic value contains face feature information over a wider range of face attitude angles. During face recognition, the user therefore need not be strictly required to face the camera directly; face recognition can be performed from a side face image or with the side face toward the camera. This reduces situations where a face cannot be recognized because the face angle during recognition differs from the face angle during registration, reduces the user's operations of adjusting the face angle and retrying, shortens face recognition time, and improves the accuracy of face recognition and the stability of the recognition effect.
In a possible embodiment, the method may further include:
and if the first face orientation is not the front face orientation, returning to the execution step: the method comprises the steps of obtaining a current image collected by a camera, and determining a first face orientation corresponding to a posture angle of a face in the current image.
In one example, after the first face orientation corresponding to the pose angle of the face in the current image is determined, it is first determined whether the first face orientation is the front face orientation. If it is, the face characteristic value of the face in the current image is acquired to obtain the front face characteristic value. If it is not, the user is prompted in the human-computer interaction interface to face the camera, and the method continues to acquire the current image collected by the camera and determine the first face orientation corresponding to the pose angle of the face in it, until the first face orientation is the front face orientation, so that the front face characteristic value is obtained first.
The front face characteristic value is extracted first so that side face characteristic values belonging to the same user can subsequently be collected and verified against it.
In a possible implementation, as shown in fig. 3, the above-mentioned side face feature value obtaining process may include:
and S310, under the condition of obtaining the face characteristic value, obtaining the image collected by the camera, and determining the orientation of a second face corresponding to the attitude angle of the face in the currently obtained image.
In one example, in the case that the front face feature value has been acquired, the user may be guided in the human-computer interaction interface to change face orientation, and images collected by the camera continue to be acquired so as to obtain the at least one side face feature value. The implementation of determining the second face orientation corresponding to the pose angle of the face in the currently acquired image in this step may refer to the implementation of determining the first face orientation corresponding to the pose angle of the face in the current image in step S110, which is not repeated here.
And S320, determining whether the orientation of the second face corresponds to the orientation of the side face in at least one direction.
It is determined whether the second face orientation matches any one of the 8 side face orientations. If the second face orientation does not correspond to a side face orientation in any direction, this indicates that the face in the currently acquired image is front-facing, that the image definition is too low for a face orientation to be recognized, or that the image contains no face; step S330 is then executed. Otherwise, step S340 is executed.
S330, when the orientation of the second face does not correspond to the orientation of the side face in any direction, returning to the execution step: and acquiring an image acquired by the camera, and determining a second face orientation corresponding to the attitude angle of the face in the currently acquired image.
S340, in a case where the second face orientation corresponds to the side face orientation in at least one direction, determining whether a side face feature value in the side face orientation corresponding to the second face orientation has already been acquired.
And S350, under the condition that the side face characteristic value corresponding to the second face orientation is not obtained, obtaining the face characteristic value of the face in the currently obtained image to obtain the side face characteristic value corresponding to the second face orientation.
S360, when the side face characteristic value under the side face orientation corresponding to the second face orientation is acquired, returning to the execution step: and acquiring an image acquired by the camera, and determining a second face orientation corresponding to the attitude angle of the face in the currently acquired image until a side face characteristic value corresponding to the side face orientation in the preset number direction is obtained.
In one example, in the case that the second face orientation corresponds to a side face orientation in at least one direction, it is further determined whether the side face characteristic value under that side face orientation has already been obtained. If it has, the user may be guided in the human-computer interaction interface to change face orientation, and the image collected by the camera is re-acquired, until side face characteristic values corresponding to side face orientations in a preset number of directions are obtained. The preset number can be set according to actual requirements; illustratively, the preset number of directions can be any number of the 8 side face orientation directions.
In this embodiment, by determining the second face orientation corresponding to the pose angle of the face in the currently acquired image, and determining whether the second face orientation corresponds to the side face orientation in at least one direction, it is further determined whether the side face characteristic value corresponding to the second face orientation is already acquired, so that repeated extraction of the side face characteristic value corresponding to the same side face orientation can be avoided, and the integrity of face data acquisition is improved.
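The polling loop of steps S310-S360 can be sketched as follows. This is a minimal illustration only: the helper callables (`capture_image`, `detect_orientation`, `extract_feature`) and the orientation labels are hypothetical placeholders, not names from the disclosure.

```python
# 8 side face orientations (labels assumed for illustration).
SIDE_ORIENTATIONS = {"up", "down", "left", "right",
                     "up-left", "up-right", "down-left", "down-right"}

def collect_side_features(capture_image, detect_orientation,
                          extract_feature, required=8):
    """Poll the camera until side face feature values for `required`
    distinct side face orientations have been collected."""
    side_features = {}                       # orientation -> feature value
    while len(side_features) < required:
        image = capture_image()              # S310: grab the next frame
        orientation = detect_orientation(image)
        if orientation not in SIDE_ORIENTATIONS:
            continue                         # S330: front face, or no face found
        if orientation in side_features:
            continue                         # S360: this orientation already done
        side_features[orientation] = extract_feature(image)   # S350
    return side_features

# Demo with stand-in callables: each "frame" directly labels its orientation.
frames = iter(["front", "up", "up", "left", "down", "right",
               "up-left", "up-right", "down-left", "down-right"])
feats = collect_side_features(lambda: next(frames),
                              lambda img: img,
                              lambda img: "feat:" + img)
print(sorted(feats))   # the 8 side orientations, each collected once
```

Note how the duplicate "up" frame is skipped rather than re-extracted, matching the dedup check of steps S340/S360.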
In a possible implementation manner, the obtaining of the face feature value of the face in the currently acquired image to obtain the side face feature value under the side face orientation corresponding to the second face orientation includes:
acquiring a face characteristic value of a face in a currently acquired image to obtain a candidate side face characteristic value;
calculating a similarity value between the candidate side face characteristic value and the front face characteristic value;
under the condition that the similarity value is larger than a preset threshold value, taking the candidate side face characteristic value as a side face characteristic value under the side face orientation corresponding to the second face orientation and caching the side face characteristic value;
and returning to the execution step under the condition that the similarity value is not greater than the preset threshold value: and acquiring an image acquired by a camera, and determining a second face orientation corresponding to the attitude angle of the face in the currently acquired image.
In the process of acquiring side face characteristic values under different side face orientations, the side face characteristic value acquired under a given side face orientation is first taken as a candidate side face characteristic value, and a similarity value between the candidate side face characteristic value and the front face characteristic value is calculated. If the similarity value is greater than a preset threshold, the candidate side face characteristic value and the front face characteristic value correspond to the same user; in this case, the candidate side face characteristic value is taken as the side face characteristic value under the side face orientation corresponding to the second face orientation and is cached. If the similarity value is not greater than the preset threshold, the candidate side face characteristic value and the front face characteristic value do not correspond to the same user; in this case, the user may be guided in the human-computer interaction interface to change face orientation, and the method returns to acquiring an image collected by the camera and determining the second face orientation corresponding to the pose angle of the face in the currently acquired image, so that a side face characteristic value under that side face orientation is acquired again.
The similarity between the candidate side face feature value and the front face feature value may be calculated as a structural similarity or a cosine similarity, and the preset threshold may be set as required, for example, 60%, 80%, or 90%.
In the embodiment of the present disclosure, the similarity value between the side face characteristic value and the front face characteristic value is calculated to ensure that the front face characteristic value and the side face characteristic value are the face characteristic values of the same user, so that the accuracy of face data acquisition can be improved.
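The same-user gate described above can be sketched with cosine similarity, one of the two measures the disclosure mentions. The 0.8 threshold and the toy feature vectors below are illustrative assumptions, not values fixed by the disclosure.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_user(side_feat, front_feat, threshold=0.8):
    """Accept the candidate side face feature only when it is similar
    enough to the front face feature (threshold assumed at 80%)."""
    return cosine_similarity(side_feat, front_feat) > threshold

front = np.array([1.0, 0.0, 1.0])
candidate = np.array([0.9, 0.1, 1.1])
print(same_user(candidate, front))   # nearby vectors pass the gate
```

A candidate that fails the gate is discarded and that side face orientation is re-collected, as described in the step above.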
In a possible implementation manner, the face information may further include: a face feature value.
That the face information also includes the face feature value indicates that, during face detection on an image, the face feature value contained in the image is also detected (that is, the face feature value contained in the image has already been extracted). Accordingly, when the first face orientation is the front face orientation, obtaining the face feature value of the face in the current image may include: in the case that the first face orientation is the front face orientation, directly taking the face feature value of the face in the current image as the front face feature value, without detecting or extracting the face feature value again.
The obtaining of the face feature value of the face in the image with the second face orientation being the side face orientation in at least one direction to obtain at least one side face feature value may include: and determining the face characteristic value of the face in the image with the second face orientation as the side face orientation in at least one direction as the corresponding side face characteristic value.
In the embodiment of the disclosure, in the process of performing face detection on an image, a face feature value included in the image is detected, and further, under the condition of determining the face orientation, the face feature value corresponding to the face in the image is directly determined as a front face or a side face feature value with the face orientation, and detection or extraction of the face feature value is not required again.
In one possible implementation, the face information further includes: whether there is a face.
In the process of carrying out face detection on the image, whether the image contains a face can be detected, and further, under the condition that the image contains the face, the face orientation corresponding to the attitude angle of the face in the current image is determined, so that face characteristic values under different face orientations are obtained; and under the condition that the image does not contain the face, the image collected by the camera is obtained again.
Illustratively, as shown in fig. 4, fig. 4 is another schematic diagram of a multi-angle face registration method according to the present disclosure, including:
the method comprises the following steps of firstly, obtaining an image collected by a camera;
step two, performing face detection on the image to obtain face information in the image, wherein the face information includes whether a face exists and a target pose angle of the face;
step three, judging whether a face exists, and returning to execute the step one under the condition that no face exists;
step four, under the condition that the human face is judged to exist, determining a target attitude angle of the human face in the currently acquired image;
step five, determining a target face orientation corresponding to the target pose angle based on the correspondence between preset pose angles and face orientations;
step six, determining whether the front face feature value has been acquired;
step seven, in the case that the front face feature value has not been obtained, further determining whether the target face orientation is the front face orientation, and in the case that it is not, returning to perform step one;
step eight, under the condition that the orientation of the target face is judged to be the front face orientation, acquiring a face characteristic value of the face in the currently acquired image to obtain the front face characteristic value, and storing the front face characteristic value;
step nine, under the condition that the front face characteristic value is judged to be acquired, further judging whether the orientation of the target face corresponds to the orientation of the side face in at least one direction; under the condition that the orientation of the target face is judged not to correspond to the orientation of the side face in any direction, returning to execute the first step;
step ten, in the case that the target face orientation is determined to correspond to a side face orientation in at least one direction, further determining whether the side face characteristic value under the side face orientation corresponding to the target face orientation has been obtained, and in the case that it has, returning to step one;
step eleven, under the condition that the side face characteristic value under the side face orientation corresponding to the orientation of the target face is not obtained, obtaining a face characteristic value of the face in the currently obtained image to obtain a candidate side face characteristic value;
step twelve, judging whether the similarity between the candidate side face characteristic value and the front face characteristic value is greater than a preset threshold value or not, and returning to the step one when the similarity between the candidate side face characteristic value and the front face characteristic value is not greater than the preset threshold value;
step thirteen, under the condition that the similarity between the candidate side face characteristic value and the front face characteristic value is judged to be larger than a preset threshold value, taking the candidate side face characteristic value as a side face characteristic value under the side face orientation corresponding to the orientation of the target human face and caching the side face characteristic value;
step fourteen, determining whether side face characteristic values under the side face orientations in the preset number of directions have all been obtained, and if not, returning to step one;
step fifteen, in the case that the side face characteristic values under the side face orientations in the preset number of directions have been obtained, determining the weighted average of the front face characteristic value and those side face characteristic values as the target face characteristic value, and storing the target face characteristic value;
step sixteen, establishing an association relation between the target face characteristic value and the target user to complete face registration.
In this embodiment, the side face characteristic values under the side face orientations in the preset number of directions are compared with the front face characteristic value for similarity, and when the similarity is greater than a preset threshold, the side face characteristic value and the front face characteristic value are determined to belong to the same user. Extraction of the side face characteristic values for the 8 side face regions can be completed according to the above steps. After the 8 side face characteristic values and 1 front face characteristic value of the same user are obtained, a feature-value fusion algorithm may perform a weighted average that merges the 9 face characteristic values into one target face characteristic value. The 9 face characteristic values may be averaged with equal weights, or different weights may be assigned to them.
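The fusion of the 1 front face feature and 8 side face features by weighted average can be sketched as follows. The equal-weight default reproduces the "complete averaging" variant; any non-uniform weight vector (such as the one in the demo, which favors the front face) is an assumption, since the disclosure leaves the concrete weights open.

```python
import numpy as np

def fuse_features(front, sides, weights=None):
    """Weighted average of 1 front face feature and N side face features."""
    feats = np.stack([front] + list(sides))   # shape (N + 1, d)
    if weights is None:                       # equal weights = plain average
        weights = np.full(len(feats), 1.0 / len(feats))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()         # normalize so weights sum to 1
    return weights @ feats                    # one fused feature of dimension d

front = np.ones(4)                                 # stand-in front face feature
sides = [np.full(4, float(i)) for i in range(8)]   # 8 stand-in side features
print(fuse_features(front, sides))                 # plain average of all 9
print(fuse_features(front, sides,
                    [9.0] + [1.0] * 8))            # front face weighted higher
```

Either call yields a single target face feature vector of the same dimension as the inputs, which is what gets stored at registration.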
After the target face characteristic value is obtained, it can be stored in encrypted form in the face recognition system, completing multi-angle face data acquisition. Without the target face characteristic value occupying extra storage space, the number of face orientations used for data acquisition is increased and the angle range of acquisition is expanded, so that face data over a larger range of face angles are collected, face characteristic information over that larger range is described, and the completeness of face data acquisition is improved.
Based on the same inventive concept, a multi-angle face data acquisition device is provided corresponding to the multi-angle face data acquisition method, as shown in fig. 5, the device comprises:
a first image obtaining module 510, configured to obtain a current image acquired by a camera, and determine a first face orientation corresponding to a pose angle of a face in the current image;
a front face feature value obtaining module 520, configured to obtain a face feature value of a face in the current image to obtain a front face feature value when the first face orientation is the front face orientation;
a side face feature value obtaining module 530, configured to, in a case that the front face feature value is obtained, continue to obtain images collected by the camera and determine a second face orientation corresponding to the pose angle of the face in each currently obtained image, until face feature values of faces in images whose second face orientation is a side face orientation in at least one direction are obtained, yielding at least one side face feature value;
a target feature value determining module 540, configured to determine the target face feature value based on the front face feature value and the at least one side face feature value, and store the target face feature value.
In the embodiment of the disclosure, when the first face orientation corresponding to the acquired current image is the front face orientation, the face characteristic value of the face in the current image is acquired, so that the front face characteristic value is obtained first. After the front face characteristic value is obtained, face characteristic values of faces in acquired images whose second face orientation is a side face orientation in at least one direction are further acquired, yielding at least one side face characteristic value and expanding the angle range of face data acquisition. A target face characteristic value fusing the front face characteristic value and the at least one side face characteristic value is then determined, realizing multi-angle face data acquisition and improving the completeness of the collected face data. Moreover, because multi-angle face data are collected during feature acquisition, the resulting target face characteristic value contains face characteristic information over a wider range of face pose angles. During face recognition, the user therefore no longer needs to face the camera squarely; recognition can be performed from a side face image or with the face turned toward the side. This reduces failures caused by a mismatch between the face angle at recognition time and the face angle at registration time, reduces the retries in which the user must adjust the face angle, shortens recognition time, and improves both the accuracy of face recognition and the stability of the recognition effect.
In a possible implementation, the determining the orientation of the first face corresponding to the pose angle of the face in the current image includes:
performing face detection on the current image to obtain face information in the current image, wherein the face information comprises a target attitude angle of a face;
determining a first face orientation corresponding to the target attitude angle based on a corresponding relation between a preset attitude angle and the face orientation; the face orientation comprises: the front face faces and a plurality of directional side faces.
In one possible embodiment, the attitude angle includes: pitch angle and yaw angle; the corresponding relationship between the preset attitude angle and the face orientation comprises: mapping relations between each preset pitch angle threshold interval and each preset yaw angle threshold interval and each face orientation respectively;
determining a first face orientation corresponding to the target attitude angle based on a corresponding relation between preset attitude angles and face orientations, including:
and matching the target pitch angle and the target yaw angle in the target attitude angle with each preset pitch angle threshold interval and yaw angle threshold interval respectively, and determining the first face orientation corresponding to the target attitude angle.
In one possible implementation, the face information further includes: a face feature value.
In a possible embodiment, the above apparatus further comprises:
a second image obtaining module, configured to, if the first face orientation is not the front face orientation, trigger the first image obtaining module 510 to perform: the method comprises the steps of obtaining a current image collected by a camera, and determining a first face orientation corresponding to a posture angle of a face in the current image.
In a possible implementation manner, the side face feature value obtaining module 530 includes:
the first image acquisition unit is used for acquiring an image acquired by a camera under the condition of obtaining the face characteristic value and determining a second face orientation corresponding to the attitude angle of the face in the currently acquired image;
a first determination unit, configured to determine whether the second face orientation corresponds to a side face orientation in at least one direction;
a second image obtaining unit, configured to trigger the first image obtaining unit to perform, when the first determining unit determines that the orientation of the second face does not correspond to the orientation of the side face in any one direction: acquiring an image acquired by a camera, and determining a second face orientation corresponding to the attitude angle of the face in the currently acquired image;
a determination unit, configured to, in a case that the first determination unit determines that the second face orientation corresponds to a side face orientation in at least one direction, determine whether the side face feature value under the side face orientation corresponding to the second face orientation has been acquired;
a side face feature value obtaining unit, configured to obtain a face feature value of a face in a currently obtained image and obtain a side face feature value of a side face corresponding to the second face orientation when the determining unit determines that the side face feature value of the side face corresponding to the second face orientation is not obtained;
a third image acquisition unit, configured to, in a case where the determination unit determines that the side face feature value under the side face orientation corresponding to the second face orientation has already been acquired, trigger the first image acquisition unit to perform: acquiring an image collected by the camera, and determining a second face orientation corresponding to the pose angle of the face in the currently acquired image, until side face feature values corresponding to side face orientations in the preset number of directions are obtained.
In a possible implementation manner, the obtaining of the face feature value of the face in the currently acquired image to obtain the side face feature value under the side face orientation corresponding to the second face orientation includes:
acquiring a face characteristic value of a face in a currently acquired image to obtain a candidate side face characteristic value;
calculating a similarity value between the candidate side face characteristic value and the front face characteristic value;
under the condition that the similarity value is larger than a preset threshold value, taking the candidate side face characteristic value as a side face characteristic value under the side face orientation corresponding to the second face orientation and caching the side face characteristic value;
and returning to the execution step under the condition that the similarity value is not greater than the preset threshold value: and acquiring an image acquired by the camera, and determining a second face orientation corresponding to the attitude angle of the face in the currently acquired image.
In a possible implementation manner, the target feature value determining module is specifically configured to:
and determining the weighted average value of the front face characteristic value and the at least one side face characteristic value as the target face characteristic value, and storing the target face characteristic value.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
The present disclosure provides an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any one of the present disclosure.
The present disclosure provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of the present disclosure.
A computer program product comprising a computer program that when executed by a processor implements the method of any one of the present disclosure.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information of the users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
It should be noted that the head model in this embodiment is not a head model for a specific user, and cannot reflect personal information of a specific user.
It should be noted that the two-dimensional face image in the present embodiment is from a public data set.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the multi-angle face data acquisition method. For example, in some embodiments, the multi-angle face data acquisition method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded onto and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the multi-angle face data acquisition method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the multi-angle face data acquisition method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or in a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps may be reordered, added, or removed using the various forms of flow shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, which is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made depending on design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall be included in the scope of protection of the present disclosure.

Claims (19)

1. A multi-angle face data acquisition method comprises the following steps:
acquiring a current image acquired by a camera, and determining a first face orientation corresponding to a pose angle of a face in the current image;
under the condition that the first face orientation is a front face orientation, acquiring a face characteristic value of a face in the current image to obtain a front face characteristic value;
after the front face characteristic value is obtained, continuing to obtain images collected by the camera, and determining second face orientations corresponding to pose angles of faces in the currently obtained images, until the determined second face orientations include side face orientations in at least one direction, and obtaining face characteristic values of the faces in the images whose second face orientations are the side face orientations in the at least one direction, so as to obtain at least one side face characteristic value;
and determining the target face characteristic value based on the front face characteristic value and the at least one side face characteristic value, and storing the target face characteristic value.
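The overall procedure recited in claim 1 can be sketched as a simple capture loop. This is an illustrative reading only: the camera interface and the helper functions `detect_orientation` and `extract_features` are hypothetical stand-ins (the application does not disclose an implementation), and equal-weight averaging is one possible instance of the weighted average of claim 8.

```python
import numpy as np

def collect_multi_angle_features(camera, detect_orientation, extract_features,
                                 required_sides=("left", "right")):
    """Hypothetical sketch of claim 1: capture a frontal feature value,
    then one feature value per required side-face orientation, and fuse them."""
    # Step 1: keep reading frames until a frontal face is seen, then
    # extract its feature value (the "front face characteristic value").
    front = None
    while front is None:
        image = camera.read()
        if detect_orientation(image) == "front":
            front = extract_features(image)

    # Step 2: continue reading frames until every required side-face
    # orientation has contributed a feature value (one per orientation).
    sides = {}
    while set(sides) != set(required_sides):
        image = camera.read()
        orientation = detect_orientation(image)
        if orientation in required_sides and orientation not in sides:
            sides[orientation] = extract_features(image)

    # Step 3: fuse the frontal and side features into the target value.
    # Equal weights are an assumed special case of a weighted average.
    return np.mean([front] + list(sides.values()), axis=0)
```

Note that the loop deduplicates per orientation, matching the claim-6 check of whether a side face characteristic value under a given orientation has already been acquired.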
2. The method of claim 1, wherein the determining a first face orientation corresponding to a pose angle of the face in the current image comprises:
performing face detection on the current image to obtain face information in the current image, wherein the face information comprises a target pose angle of the face;
determining the first face orientation corresponding to the target pose angle based on a preset correspondence between pose angles and face orientations, wherein the face orientations comprise: a front face orientation and side face orientations in a plurality of directions.
3. The method of claim 2, wherein the pose angle comprises: a pitch angle and a yaw angle; and the preset correspondence between pose angles and face orientations comprises: mapping relationships between each preset pitch angle threshold interval and each preset yaw angle threshold interval, respectively, and each face orientation;
the determining the first face orientation corresponding to the target pose angle based on the preset correspondence between pose angles and face orientations comprises:
matching a target pitch angle and a target yaw angle in the target pose angle against each preset pitch angle threshold interval and each preset yaw angle threshold interval, respectively, and determining the first face orientation corresponding to the target pose angle.
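The interval matching of claim 3 amounts to a lookup of the (pitch, yaw) pair in a table of threshold intervals. The concrete degree bounds below are assumptions for illustration only; the application claims the mapping without disclosing numeric thresholds.

```python
# Illustrative threshold intervals in degrees; the application does not
# disclose concrete bounds, so every number here is an assumption.
ORIENTATION_INTERVALS = {
    # orientation: ((pitch_min, pitch_max), (yaw_min, yaw_max))
    "front": ((-15, 15), (-15, 15)),
    "left":  ((-15, 15), (-60, -15)),
    "right": ((-15, 15), (15, 60)),
    "up":    ((15, 60), (-15, 15)),
    "down":  ((-60, -15), (-15, 15)),
}

def orientation_from_pose(pitch, yaw):
    """Match a target pose angle against each preset pitch/yaw threshold
    interval pair, returning the first orientation whose intervals contain it."""
    for orientation, ((p_lo, p_hi), (y_lo, y_hi)) in ORIENTATION_INTERVALS.items():
        if p_lo <= pitch < p_hi and y_lo <= yaw < y_hi:
            return orientation
    return None  # pose falls outside every interval: no orientation determined
```

With such a table, extreme poses fall through to `None`, which would simply leave the capture loop waiting for a usable frame.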
4. The method of claim 2, wherein the face information further comprises: a face feature value.
5. The method of claim 1, further comprising:
if the first face orientation is not the front face orientation, returning to the step of: obtaining a current image collected by the camera, and determining a first face orientation corresponding to the pose angle of the face in the current image.
6. The method according to claim 1, wherein the continuing to obtain images collected by the camera after the front face characteristic value is obtained, determining the second face orientations corresponding to the pose angles of the faces in the currently obtained images until the determined second face orientations include side face orientations in at least one direction, and obtaining the face characteristic values of the faces in the images corresponding to the side face orientations in the at least one direction, so as to obtain the at least one side face characteristic value, comprises:
after the front face characteristic value is obtained, obtaining an image collected by the camera, and determining a second face orientation corresponding to the pose angle of the face in the currently obtained image;
determining whether the second face orientation corresponds to a side face orientation in at least one direction;
when the second face orientation does not correspond to a side face orientation in any direction, returning to the step of: obtaining an image collected by the camera, and determining a second face orientation corresponding to the pose angle of the face in the currently obtained image;
when the second face orientation corresponds to a side face orientation in at least one direction, judging whether a side face characteristic value under the side face orientation corresponding to the second face orientation has been acquired;
when the side face characteristic value under the side face orientation corresponding to the second face orientation has not been acquired, obtaining the face characteristic value of the face in the currently obtained image, so as to obtain the side face characteristic value under the side face orientation corresponding to the second face orientation;
when the side face characteristic value under the side face orientation corresponding to the second face orientation has been acquired, returning to the step of: obtaining an image collected by the camera, and determining a second face orientation corresponding to the pose angle of the face in the currently obtained image, until side face characteristic values under side face orientations in a preset number of directions are obtained.
7. The method of claim 6, wherein the obtaining the face characteristic value of the face in the currently obtained image to obtain the side face characteristic value under the side face orientation corresponding to the second face orientation comprises:
obtaining the face characteristic value of the face in the currently obtained image as a candidate side face characteristic value;
calculating a similarity value between the candidate side face characteristic value and the front face characteristic value;
when the similarity value is greater than a preset threshold, taking the candidate side face characteristic value as the side face characteristic value under the side face orientation corresponding to the second face orientation and caching it;
when the similarity value is not greater than the preset threshold, returning to the step of: obtaining an image collected by the camera, and determining a second face orientation corresponding to the pose angle of the face in the currently obtained image.
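The similarity gate of claim 7 ensures that a cached side-face feature belongs to the same person as the frontal feature. The claim does not fix a similarity measure or threshold, so the cosine similarity and the 0.5 cutoff below are assumptions chosen purely for illustration.

```python
import numpy as np

def accept_side_feature(candidate, front, threshold=0.5):
    """Claim 7 sketch: accept a candidate side-face feature only if it is
    sufficiently similar to the already-captured frontal feature.
    Cosine similarity and threshold=0.5 are assumed choices."""
    cos = float(np.dot(candidate, front) /
                (np.linalg.norm(candidate) * np.linalg.norm(front)))
    return cos > threshold
```

If the candidate is rejected, the loop simply returns to acquiring frames, exactly as the claim's "returning to the step" recitation describes.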
8. The method according to any one of claims 1 to 7, wherein the determining the target face characteristic value based on the front face characteristic value and the at least one side face characteristic value and storing the target face characteristic value comprises:
determining a weighted average of the front face characteristic value and the at least one side face characteristic value as the target face characteristic value, and storing the target face characteristic value.
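The fusion step of claim 8 is a weighted average over the frontal and side feature vectors. The claim does not specify the weights; the sketch below assumes, purely for illustration, that the frontal feature receives a fixed weight and the remainder is split uniformly across the side features.

```python
import numpy as np

def fuse_features(front, side_features, front_weight=0.5):
    """Claim 8 sketch: weighted average of the frontal feature value and
    the side-face feature values. front_weight=0.5 with the remaining
    weight split evenly over the sides is an assumed weighting scheme."""
    side_weight = (1.0 - front_weight) / len(side_features)
    target = front_weight * np.asarray(front, dtype=float)
    for feat in side_features:
        target += side_weight * np.asarray(feat, dtype=float)
    return target
```

Weighting the frontal feature more heavily is one plausible design choice, since frontal embeddings are typically the most discriminative; the claim language accommodates any weighting.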
9. A multi-angle face data acquisition device, comprising:
a first image obtaining module, configured to obtain a current image collected by a camera, and determine a first face orientation corresponding to a pose angle of a face in the current image;
a front face feature value obtaining module, configured to obtain a face feature value of a face in the current image to obtain a front face feature value when the first face orientation is the front face orientation;
a side face feature value obtaining module, configured to, after the front face feature value is obtained, continue to obtain images collected by the camera and determine second face orientations corresponding to pose angles of faces in the currently obtained images, until the determined second face orientations include side face orientations in at least one direction, and obtain face feature values of the faces in the images corresponding to the side face orientations in the at least one direction, so as to obtain at least one side face feature value;
and the target characteristic value determining module is used for determining the target face characteristic value based on the front face characteristic value and the at least one side face characteristic value and storing the target face characteristic value.
10. The apparatus of claim 9, wherein the determining a first face orientation corresponding to a pose angle of the face in the current image comprises:
performing face detection on the current image to obtain face information in the current image, wherein the face information comprises a target pose angle of the face;
determining the first face orientation corresponding to the target pose angle based on a preset correspondence between pose angles and face orientations, wherein the face orientations comprise: a front face orientation and side face orientations in a plurality of directions.
11. The apparatus of claim 10, wherein the pose angle comprises: a pitch angle and a yaw angle; and the preset correspondence between pose angles and face orientations comprises: mapping relationships between each preset pitch angle threshold interval and each preset yaw angle threshold interval, respectively, and each face orientation;
the determining the first face orientation corresponding to the target pose angle based on the preset correspondence between pose angles and face orientations comprises:
matching a target pitch angle and a target yaw angle in the target pose angle against each preset pitch angle threshold interval and each preset yaw angle threshold interval, respectively, and determining the first face orientation corresponding to the target pose angle.
12. The apparatus of claim 10, wherein the face information further comprises: a face feature value.
13. The apparatus of claim 9, further comprising:
a second image obtaining module, configured to, when the first face orientation is not the front face orientation, trigger the first image obtaining module to perform: obtaining a current image collected by the camera, and determining a first face orientation corresponding to the pose angle of the face in the current image.
14. The apparatus according to claim 9, wherein the side face feature value obtaining module comprises:
a first image obtaining unit, configured to, after the front face feature value is obtained, obtain an image collected by the camera and determine a second face orientation corresponding to the pose angle of the face in the currently obtained image;
a first determining unit, configured to determine whether the second face orientation corresponds to a side face orientation in at least one direction;
a second image obtaining unit, configured to, when the first determining unit determines that the second face orientation does not correspond to a side face orientation in any direction, trigger the first image obtaining unit to perform: obtaining an image collected by the camera, and determining a second face orientation corresponding to the pose angle of the face in the currently obtained image;
a judging unit, configured to, when the first determining unit determines that the second face orientation corresponds to a side face orientation in at least one direction, judge whether a side face feature value under the side face orientation corresponding to the second face orientation has been acquired;
a side face feature value obtaining unit, configured to, when the judging unit determines that the side face feature value under the side face orientation corresponding to the second face orientation has not been acquired, obtain the face feature value of the face in the currently obtained image, so as to obtain the side face feature value under the side face orientation corresponding to the second face orientation;
a third image obtaining unit, configured to, when the judging unit determines that the side face feature value under the side face orientation corresponding to the second face orientation has been acquired, trigger the first image obtaining unit to perform: obtaining an image collected by the camera, and determining a second face orientation corresponding to the pose angle of the face in the currently obtained image, until side face feature values under side face orientations in a preset number of directions are obtained.
15. The apparatus of claim 14, wherein the obtaining the face feature value of the face in the currently obtained image to obtain the side face feature value under the side face orientation corresponding to the second face orientation comprises:
obtaining the face feature value of the face in the currently obtained image as a candidate side face feature value;
calculating a similarity value between the candidate side face feature value and the front face feature value;
when the similarity value is greater than a preset threshold, taking the candidate side face feature value as the side face feature value under the side face orientation corresponding to the second face orientation and caching it;
when the similarity value is not greater than the preset threshold, returning to the step of: obtaining an image collected by the camera, and determining a second face orientation corresponding to the pose angle of the face in the currently obtained image.
16. The apparatus according to any one of claims 9 to 14, wherein the target feature value determining module is specifically configured to:
and determining the weighted average value of the front face characteristic value and the at least one side face characteristic value as the target face characteristic value, and storing the target face characteristic value.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
CN202211624294.6A 2022-12-16 2022-12-16 Multi-angle face data acquisition method, device, equipment and storage medium Pending CN115937950A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211624294.6A CN115937950A (en) 2022-12-16 2022-12-16 Multi-angle face data acquisition method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211624294.6A CN115937950A (en) 2022-12-16 2022-12-16 Multi-angle face data acquisition method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115937950A (en) 2023-04-07

Family

ID=86655636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211624294.6A Pending CN115937950A (en) 2022-12-16 2022-12-16 Multi-angle face data acquisition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115937950A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117454351A (en) * 2023-12-20 2024-01-26 福建票付通信息科技有限公司 Face characteristic value synchronization method and identity verification system
CN117454351B (en) * 2023-12-20 2024-05-31 福建票付通信息科技有限公司 Face characteristic value synchronization method and identity verification system

Similar Documents

Publication Publication Date Title
CN109934065B (en) Method and device for gesture recognition
CN112785625B (en) Target tracking method, device, electronic equipment and storage medium
CN112597837B (en) Image detection method, apparatus, device, storage medium, and computer program product
CN114186632B (en) Method, device, equipment and storage medium for training key point detection model
CN113971751A (en) Training feature extraction model, and method and device for detecting similar images
CN113362314B (en) Medical image recognition method, recognition model training method and device
CN113326773A (en) Recognition model training method, recognition method, device, equipment and storage medium
CN113205041A (en) Structured information extraction method, device, equipment and storage medium
CN114169425B (en) Training target tracking model and target tracking method and device
CN111950345A (en) Camera identification method and device, electronic equipment and storage medium
CN115937950A (en) Multi-angle face data acquisition method, device, equipment and storage medium
CN115147809A (en) Obstacle detection method, device, equipment and storage medium
CN115273184B (en) Training method and device for human face living body detection model
CN112991451B (en) Image recognition method, related device and computer program product
CN115019057A (en) Image feature extraction model determining method and device and image identification method and device
CN114549584A (en) Information processing method and device, electronic equipment and storage medium
CN113936158A (en) Label matching method and device
CN114119990A (en) Method, apparatus and computer program product for image feature point matching
CN113313125A (en) Image processing method and device, electronic equipment and computer readable medium
CN113361455A (en) Training method of face counterfeit identification model, related device and computer program product
CN110969210A (en) Small and slow target identification and classification method, device, equipment and storage medium
CN114092739B (en) Image processing method, apparatus, device, storage medium, and program product
CN113705620B (en) Training method and device for image display model, electronic equipment and storage medium
CN114037865B (en) Image processing method, apparatus, device, storage medium, and program product
CN115205939B (en) Training method and device for human face living body detection model, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination