CN116453185A - Examination room-based identity recognition method and device, electronic equipment and storage medium - Google Patents

Examination room-based identity recognition method and device, electronic equipment and storage medium

Info

Publication number
CN116453185A
CN116453185A (application number CN202310384409.7A)
Authority
CN
China
Prior art keywords
target object
fisheye image
fisheye
image
examination room
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310384409.7A
Other languages
Chinese (zh)
Inventor
陈浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Yikangsi Technology Co ltd
Original Assignee
Hubei Yikangsi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Yikangsi Technology Co ltd filed Critical Hubei Yikangsi Technology Co ltd
Priority to CN202310384409.7A priority Critical patent/CN116453185A/en
Publication of CN116453185A publication Critical patent/CN116453185A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/243Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an examination room-based identity recognition method and device, an electronic device, and a storage medium, wherein the method comprises the following steps: acquiring a first fisheye image captured by a fisheye camera in an examination room; performing target detection on the first fisheye image to obtain face information of a target object; if the face information matches the target object, acquiring a second fisheye image captured by the fisheye camera in the examination room; processing the second fisheye image to obtain identity information of the target object; and performing identity recognition on the target object according to the identity information to determine whether the identity information matches the target object. This greatly improves the accuracy of examinee identity verification, ensures the fairness and impartiality of the examination, and realizes the true purpose of the examination.

Description

Examination room-based identity recognition method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an identification method and apparatus based on an examination room, an electronic device, and a storage medium.
Background
In recent years, with the progress of the times and the development of science and technology, society's requirements on personnel in all respects have grown ever higher, so the difficulty of various examinations has increased, as has the pressure on examinees. Substitute test-taking (impersonation), which endangers the public interest and undermines the authenticity and validity of examinations, occurs in many examinations and is especially frequent in college examinations.
Therefore, how to strengthen the accuracy of identity verification, ensure the fairness and impartiality of examinations, and realize the true purpose of examinations has become a technical problem to be solved urgently.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an examination room-based identity recognition method and device, an electronic device, and a storage medium, which greatly improve the accuracy of examinee identity verification and ensure the fairness and impartiality of the examination.
In order to solve the above problems, in a first aspect, an embodiment of the present invention provides an identification method based on an examination room, which includes:
acquiring a first fisheye image shot by a fisheye camera in an examination room;
performing target detection on the first fisheye image to obtain face information of a target object;
if the face information is matched with the target object, acquiring a second fisheye image shot by the fisheye camera in the examination room;
processing the second fisheye image to obtain identity information of the target object;
and carrying out identity recognition on the target object according to the identity information so as to determine whether the identity information is matched with the target object.
In a second aspect, an embodiment of the present invention further provides an identity recognition device based on an examination room, including:
the first acquisition unit is used for acquiring a first fisheye image shot by the fisheye camera in the examination room;
the first detection unit is used for carrying out target detection on the first fisheye image to obtain face information of a target object;
the second acquisition unit is used for acquiring a second fisheye image shot by the fisheye camera in the examination room if the face information is matched with the target object;
the first image processing unit is used for processing the second fisheye image to obtain the identity information of the target object;
and the first identification unit is used for carrying out identity identification on the target object according to the identity information so as to determine whether the identity information is matched with the target object.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the examination room-based identification method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium stores a computer program, where the computer program when executed by a processor causes the processor to perform the examination room-based identification method described in the first aspect above.
The embodiment of the invention provides an examination room-based identity recognition method and device, an electronic device, and a storage medium. The method acquires a first fisheye image captured by a fisheye camera in an examination room and performs target detection on the first fisheye image to obtain face information of a target object. If the face information matches the target object, a second fisheye image captured by the fisheye camera in the examination room is acquired, the second fisheye image is processed to obtain the identity information of the target object, and identity recognition is then performed on the target object according to the identity information to determine whether the identity information matches the target object, thereby greatly improving the accuracy of examinee identity verification and ensuring the fairness and impartiality of the examination.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; a person skilled in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a flow chart of an identification method based on examination rooms according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an identification method based on examination rooms according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an identification method based on examination rooms according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of an identification method based on examination rooms according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of an identification method based on examination rooms according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another flow chart of an identification method based on examination rooms according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart of an identification method based on examination rooms according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an identification device based on examination rooms according to an embodiment of the present invention;
fig. 9 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, fig. 1 is a flowchart of an examination room-based identity recognition method according to an embodiment of the invention. The method is applied to a terminal device and executed through application software installed in the terminal device. The terminal device may be a desktop computer, a notebook computer, a tablet computer, a mobile phone, or the like.
It should be noted that, the application scenario described in the embodiment of the present application is for more clearly describing the technical solution of the embodiment of the present application, and does not constitute a limitation on the technical solution provided in the embodiment of the present application, and as a person of ordinary skill in the art can know, with the appearance of the new application scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
The examination room-based identification method is described in detail below.
As shown in fig. 1, the method includes the following steps S110 to S150.
S110, acquiring a first fisheye image shot by the fisheye camera in the examination room.
Specifically, the first fisheye image comprises an image of a face of a target object, the target object is an examinee in the examination room, and identity recognition of the examinee in the examination room can be achieved by acquiring the first fisheye image shot by the fisheye camera in the examination room and acquiring the face of the target object from the first fisheye image.
And S120, performing target detection on the first fisheye image to obtain face information of a target object.
In this embodiment, when the first fisheye image is detected, the face information of the target object in the first fisheye image may be obtained by using a deep learning technology, so that identity recognition of the examinee in the examination room may be achieved.
In other embodiments of the invention, as shown in fig. 2, step S120 includes steps S121 and S122.
S121, performing distortion correction on the first fisheye image according to a preset fisheye image correction model to obtain a corrected first fisheye image;
S122, performing target detection on the corrected first fisheye image to obtain the face information of the target object.
In this embodiment, owing to the imaging principle of the fisheye camera, the first fisheye image captured by the fisheye camera in the examination room exhibits large image distortion. Therefore, before target detection is performed on the first fisheye image, distortion correction must be applied to it, so that the face information of the target object in the first fisheye image can be detected accurately.
In other embodiments of the invention, as shown in fig. 3, steps S210, S220, and S230 are further included before step S120.
S210, acquiring a pre-trained generative adversarial network, wherein the generative adversarial network includes a discriminator and a generator;
S220, constructing the fisheye image correction model from the generator, wherein the generator serves as a teacher model and the fisheye image correction model serves as a student model;
S230, performing knowledge distillation on the fisheye image correction model using the generator to obtain the distilled fisheye image correction model.
In this embodiment, the generative adversarial network is trained in advance on image samples and is used to correct distorted images. The pre-trained generative adversarial network includes a discriminator and a generator, and the generator can be used to correct a distorted image. However, the generator has a large number of parameters, which imposes a heavy computational load on the device. Therefore, to increase the device's operation speed, after the pre-trained generative adversarial network is acquired, the generator in the network is used as a teacher model to construct the fisheye image correction model as a student model.
The teacher model is a single complex network, or an ensemble of several networks, with good performance and generalization ability; the student model is a model with a small network scale and limited expressive capacity. The teacher model has strong learning capability and can transfer the knowledge it has learned to the student model with relatively weak learning capability, thereby enhancing the student model's generalization ability. In this application, the teacher model assists in training the student model, so that the student model attains performance comparable to that of the teacher model while its parameter count is greatly reduced, achieving model compression and acceleration.
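To make the teacher-student transfer concrete, here is a minimal illustrative sketch of a soft-target distillation loss in pure Python. The function names, the temperature value, and the use of softened cross-entropy are assumptions for illustration; the patent does not specify the loss it uses.

```python
import math

def softmax(logits, temperature=1.0):
    # Softened probability distribution over the logits; a higher temperature
    # flattens the distribution and exposes more of the teacher's "dark knowledge".
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student's softened outputs against the teacher's
    # softened outputs: the quantity the student minimizes to absorb the
    # teacher's knowledge (illustrative; usually combined with a task loss).
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))
```

Minimizing this loss pulls the student's output distribution toward the teacher's, which is the mechanism by which the learned knowledge is transferred.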
In constructing the fisheye image correction model from the generator, the model can be built by pruning the generator, sharing parameters, and similar means; the generator then assists in training the fisheye image correction model, so that the knowledge learned by the generator is migrated into the fisheye image correction model and the distilled fisheye image correction model has the same image-correction function as the generator.
In some embodiments, the generator is used to construct the fisheye image correction model through the following steps: performing network parameter pruning on the generator to obtain an intermediate fisheye image correction model; performing knowledge distillation on the intermediate fisheye image correction model to obtain a distilled intermediate fisheye image correction model; and performing network parameter pruning on the distilled intermediate fisheye image correction model to obtain the final fisheye image correction model.
In this embodiment, the intermediate fisheye image correction model is also a student model of the generator, but its network parameters are still relatively numerous. Therefore, after knowledge distillation is performed on the intermediate fisheye image correction model, at least one round of parameter pruning must be applied to it until the parameter amount of the final fisheye image correction model reaches a minimum.
In the process of pruning the network parameters of the distilled intermediate fisheye image correction model, image correction models with different parameter counts can be obtained; distillation and pruning are then performed again until the parameter amount of the finally pruned fisheye image correction model reaches a minimum.
Before parameter pruning is performed on the generator, the network structure of the generator must be constructed separately and its weights loaded, so that the generator can be stripped from the generative adversarial network. The number of parameter pruning rounds for the generator is related to the generator's minimum functional unit. Meanwhile, when pruning the generator's parameters, pruning may be performed in units of the generator's basic units. For example, when the network structure of the generator is a ResNet, pruning may be performed in units of ResBlocks.
In other embodiments, after the intermediate fisheye image correction model M′_g has been trained, a preset test sample set can be used to test M′_g and compute its first accuracy Acc′ on the test sample set; the same sample set is used to test the generator M_g and compute its second accuracy Acc. The difference Δacc = Acc − Acc′ between the two accuracies is then calculated; if Δacc is larger than a preset first threshold Thr, the intermediate fisheye image correction model can be used directly as the final fisheye image correction model. The judgment formula is:

S = 1 if Δacc > Thr, otherwise S = 0.
When S = 1, pruning and training can be stopped, and the intermediate fisheye image correction model after the last pruning is taken as the final fisheye image correction model.
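The stopping rule above can be sketched as a small helper. This is a hedged illustration: the function names and the sign convention of Δacc (teacher accuracy minus student accuracy) are assumptions, since the patent leaves them implicit.

```python
def should_stop_pruning(teacher_acc, student_acc, thr):
    # S = 1 when the accuracy gap between the teacher (generator) and the
    # pruned student exceeds the first threshold Thr, i.e. pruning has gone
    # far enough and the current model is kept as the final one.
    delta_acc = teacher_acc - student_acc
    return 1 if delta_acc > thr else 0
```

In a pruning loop, each round would prune, retrain/distill, evaluate both models on the test sample set, and stop once this helper returns 1.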
In some embodiments, the specific step of knowledge-distilling the fisheye image correction model with the generator may include: acquiring a first predicted value output by a generator and a second predicted value output by a fisheye image correction model; and determining a fisheye image correction model after distillation according to the first predicted value and the second predicted value.
In this embodiment, when the generator is used to perform knowledge distillation on the fisheye image correction model, a first image sample set composed of training images and distorted training images may be input into the generator; the fisheye image correction model is trained and outputs a second predicted value, and the corresponding predicted value, namely the first predicted value, is obtained from the generator. Whether the knowledge distillation of the fisheye image correction model is complete is then determined from the deviation between the first predicted value and the second predicted value.
Specifically, the first predicted value and the second predicted value at time t are obtained, the deviation of the predicted values is computed from the first and second predicted values, and it is judged whether the deviation is larger than a preset second threshold; if so, the fisheye image correction model at time t-1 is taken as the final knowledge-distilled fisheye image correction model.
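This roll-back test can be sketched as follows. An illustrative interpretation: the deviation is taken here as the mean absolute difference between the two predicted values, which the patent does not specify, and the function names are assumptions.

```python
def prediction_deviation(first_pred, second_pred):
    # Mean absolute difference between the teacher's (first) and the
    # student's (second) predicted values at the same time step.
    return sum(abs(a - b) for a, b in zip(first_pred, second_pred)) / len(first_pred)

def should_revert_to_previous(first_pred, second_pred, second_threshold):
    # True means the deviation at time t exceeds the second threshold, so the
    # model from time t-1 is kept as the final distilled model.
    return prediction_deviation(first_pred, second_pred) > second_threshold
```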
In other embodiments of the invention, as shown in fig. 4, steps S310 and S320 are further included before S210.
S310, processing a training image with the internal parameters and distortion coefficients of the fisheye camera to obtain a distorted training image;
S320, training the generative adversarial network on the training image and the distorted training image to obtain the trained generative adversarial network.
Specifically, the training image is an undistorted high-resolution image, and the distorted training image can be obtained by applying the distortion mapping formed by the internal parameters and distortion coefficients of the fisheye camera. The training image and the distorted training image together form the first image sample set for training the generative adversarial network: the distorted training image serves as the input of the network, and the training image serves as its label.
The internal parameters and distortion coefficients of the fisheye camera can be obtained by checkerboard calibration: the fisheye camera photographs a calibration checkerboard from several angles and positions, and a fisheye calibration algorithm computes the internal parameters and distortion coefficients. The matrix K of internal parameters may be:

K =
[ f_x   0     c_x ]
[ 0     f_y   c_y ]
[ 0     0     1   ]

The vector D of distortion coefficients may be:

D = (k_1, k_2, k_3, k_4)

where f_x and f_y are the focal-length parameters, c_x and c_y are the horizontal and vertical offsets of the image origin relative to the optical-center imaging point, and k_1, k_2, k_3, k_4 are the radial and tangential distortion coefficients of the camera.
In some embodiments, the specific process of generating the distorted training image includes:
calculating a corrected camera intrinsic matrix R from K and D, where R = f_e(K, D);
obtaining the inverse matrix iR of R by singular value decomposition, where iR = SVD(R);
converting the two-dimensional coordinates (u, v) of the training image into the camera coordinate system (x, y, z) according to the inverse matrix iR, where (x, y, z) = (u, v, 1) · iR;
normalizing along the z-axis, i.e. x ← x/z, y ← y/z;
calculating the radius r of the cross-section of the fisheye hemisphere, where r = √(x² + y²);
calculating the incidence angle θ between the light ray and the optical axis, where θ = atan(r);
correcting the incidence angle θ to obtain the corrected incidence angle θ_d, where θ_d = θ(1 + k_1·θ² + k_2·θ⁴ + k_3·θ⁶ + k_4·θ⁸);
generating the corrected camera coordinates (x′, y′) from the corrected incidence angle, where x′ = (θ_d/r)·x and y′ = (θ_d/r)·y;
converting from the camera coordinate system to the pixel coordinate system (u′, v′), i.e. (u′, v′) are the two-dimensional coordinates of the distorted training image, where u′ = f_x·x′ + c_x and v′ = f_y·y′ + c_y.
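Assuming the standard fisheye distortion polynomial above, the per-pixel mapping can be sketched in pure Python as follows. This is a simplified illustration: the corrected intrinsic matrix R is taken to be K itself, so the back-projection is written out analytically rather than via SVD, and the function name is an assumption.

```python
import math

def distort_point(u, v, fx, fy, cx, cy, k1, k2, k3, k4):
    """Map an undistorted pixel (u, v) to its distorted position (u', v')."""
    # Back-project to normalized camera coordinates (analytic inverse of K,
    # which is upper-triangular, so no SVD is needed in this simplification).
    x = (u - cx) / fx
    y = (v - cy) / fy
    # Radius on the unit-focal plane and the angle of incidence.
    r = math.sqrt(x * x + y * y)
    theta = math.atan(r)
    # Polynomial correction of the incidence angle.
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4
                         + k3 * theta**6 + k4 * theta**8)
    # Rescale onto the distorted image plane; the principal point maps to itself.
    scale = theta_d / r if r > 1e-12 else 1.0
    xp, yp = x * scale, y * scale
    # Project back to pixel coordinates.
    return fx * xp + cx, fy * yp + cy
```

Applying this mapping to every pixel of the high-resolution training image yields the distorted training image used as the GAN's input.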
In some embodiments, the specific process of training the generative adversarial network using the training image and the distorted training image includes: inputting the distorted training image into the generator to obtain a pseudo-undistorted image; constructing a second image sample set from the training image and the pseudo-undistorted image; training the discriminator on the second image sample set to obtain a trained discriminator; and training the generator on the distorted training image to obtain a trained generator.
In this embodiment, the pseudo-undistorted image is obtained by correcting the distorted training image with the generator. The second image sample set is composed of the training image and the pseudo-undistorted image and is used to train the discriminator, so that the discriminator can better distinguish fake images from real images.
In addition, after the discriminator finishes training, it is frozen, and the distorted training image is input into the generator to train the generator until the images produced by the generator can no longer be identified as fake by the discriminator; the generator obtained at that point is the trained generator.
In other embodiments of the invention, as shown in fig. 5, step S120 includes steps S1201 and S1202.
S1201, determining a plurality of key points of the face of the target object;
S1202, extracting features from the first fisheye image according to the key points to obtain the face information.
Specifically, the key points are the elements that make up the face of the target object; a face consists of multiple key points, such as those of the eyes, nose, mouth, and ears. All key points that make up the face of the target object exist in the first fisheye image. After the key points of the target object's face are determined in the first fisheye image, feature extraction can be performed on the first fisheye image using these key points to obtain the face information of the target object.
When determining the key points of the target object's face, the first fisheye image can be converted into a grayscale image, the grayscale image can be processed with a difference-of-Gaussians (DoG) algorithm to obtain the DoG response of each pixel in the first fisheye image, and the key points of the target object's face can then be obtained from the DoG responses.
In addition, in the process of extracting features from the first fisheye image using the key points, the first fisheye image can be divided into a number of image blocks; each key point is then matched to its corresponding image block, and feature extraction is performed on the image blocks corresponding to the key points, yielding the face information of the target object in the first fisheye image. Feature extraction on the image blocks can be performed with an initial policy-gradient network, which can comprise a UNet network and a Transformer network; the convolutional neural network with the UNet structure can better fuse feature maps carrying low-resolution semantic information and high-resolution spatial information.
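The step of matching key points to image blocks can be sketched as a simple grid lookup. This is a hedged illustration: the grid layout and function names are assumptions, since the patent does not describe how blocks are formed.

```python
def assign_keypoints_to_blocks(keypoints, img_w, img_h, blocks_x, blocks_y):
    """Map each (x, y) key point to the grid block that contains it."""
    block_w = img_w / blocks_x
    block_h = img_h / blocks_y
    mapping = {}
    for (x, y) in keypoints:
        # Clamp to the last block so points on the far edge stay in range.
        bx = min(int(x // block_w), blocks_x - 1)
        by = min(int(y // block_h), blocks_y - 1)
        mapping.setdefault((bx, by), []).append((x, y))
    return mapping
```

Only the blocks that actually contain key points would then be passed to the feature-extraction network, which keeps the computation focused on the face regions.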
And S130, if the face information is matched with the target object, acquiring a second fisheye image shot by the fisheye camera in the examination room.
Specifically, after the face information of the target object in the first fisheye image is extracted, the extracted face information is matched against the face information collected in the internal system; whether the target object is an examinee of the examination room can be determined by calculating the similarity between the two, thereby achieving identity recognition of the target object. Meanwhile, to further improve the accuracy of this identification, after it is confirmed that the target object in the first fisheye image is an examinee of the examination room, the fisheye camera captures a further image in the examination room, namely the second fisheye image mentioned in this application, showing the identity document of the target object; identity recognition of the target object is then performed again using the second fisheye image, which further improves accuracy.
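A common way to compute the similarity mentioned above is cosine similarity between face feature vectors; the following is an illustrative sketch. The threshold value and function names are assumptions, not from the patent.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors: 1.0 for identical
    # directions, 0.0 for orthogonal ones.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_same_person(extracted_feat, enrolled_feat, threshold=0.8):
    # Match the extracted face feature against the feature enrolled in the
    # internal system; the threshold is an illustrative choice.
    return cosine_similarity(extracted_feat, enrolled_feat) >= threshold
```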
And S140, processing the second fisheye image to obtain the identity information of the target object.
Specifically, since the identity document presented by an examinee can be an identity card, a household registration booklet or a temporary identity document, after the second fisheye image shot by the fisheye camera in the examination room is acquired, the second fisheye image can first be classified to determine the type of identity document captured in it, and the identity information of the target object can then be extracted from the second fisheye image according to that type, thereby realizing identity recognition of the target object again.
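The classify-then-extract flow can be sketched as below; the keyword rules stand in for the image classifier the application implies, and all strings and type-specific routines are illustrative:

```python
def classify_document(ocr_text):
    """Rule-based stand-in for the document-type classifier; a real system
    would classify the second fisheye image itself."""
    if "temporary" in ocr_text.lower():
        return "temporary_id"
    if "household" in ocr_text.lower():
        return "household_register"
    return "id_card"

def extract_identity(ocr_text):
    """Classify first, then dispatch to a type-specific extraction routine."""
    extractors = {
        "id_card": lambda t: {"doc_type": "id_card", "raw": t},
        "household_register": lambda t: {"doc_type": "household_register", "raw": t},
        "temporary_id": lambda t: {"doc_type": "temporary_id", "raw": t},
    }
    doc_type = classify_document(ocr_text)
    return extractors[doc_type](ocr_text)

record = extract_identity("Temporary Resident Identity Card")
```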
In other embodiments of the invention, as shown in fig. 6, step S140 includes S141 and S142.
S141, carrying out distortion correction on the second fisheye image according to the fisheye image correction model to obtain a corrected second fisheye image;
and S142, performing character recognition on the corrected second fisheye image to obtain the identity information of the target object.
Specifically, since the second fisheye image is also a distorted image, distortion correction needs to be performed on it. The model used to correct the second fisheye image may be the fisheye image correction model mentioned in the application or another image correction model, and may be selected according to the practical application.
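The correction model itself is learned in the application; purely as an analytic point of reference, correction under the classical equidistant fisheye model (r_d = f·θ mapped back to the pinhole radius r_u = f·tan θ, assuming a single focal length and correction about the image center) can be sketched as:

```python
import numpy as np

def undistort_points(points, center, f):
    """Map pixel coordinates in an equidistant-model fisheye image
    (r_d = f * theta) to the pinhole image (r_u = f * tan(theta))."""
    pts = np.asarray(points, dtype=float) - center
    r_d = np.linalg.norm(pts, axis=1)
    theta = r_d / f
    r_u = f * np.tan(theta)
    scale = np.where(r_d > 0, r_u / np.maximum(r_d, 1e-12), 1.0)
    return pts * scale[:, None] + center

center = np.array([320.0, 240.0])
corrected = undistort_points([[320.0, 240.0], [470.0, 240.0]], center, f=300.0)
```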
In this embodiment, information such as the name, identity card number, date of birth and registered residence address on the identity card of the target object is recognized from the second fisheye image by OCR, and is then matched against the information collected in the internal system to determine whether the card is indeed the identity card of the target object, thereby realizing identity recognition of the target object.
And S150, carrying out identity recognition on the target object according to the identity information so as to determine whether the identity information is matched with the target object.
In this embodiment, the identity information includes information such as the name, identity card number, date of birth and registered residence address of the target object; whether the target object is an examinee in the examination room is further determined by judging whether the identity information is completely consistent with the information collected in the internal system, thereby further improving the accuracy of identity recognition in the examination room.
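The "completely consistent" requirement can be modelled as field-by-field exact equality between the OCR result and the internally collected record; the field names and record values below are illustrative placeholders:

```python
REQUIRED_FIELDS = ("name", "id_number", "date_of_birth", "address")  # illustrative field set

def identity_matches(ocr_record, registered_record):
    """Accept only when every required field of the OCR'd identity
    information exactly equals the internally collected record."""
    return all(ocr_record.get(field) == registered_record.get(field)
               for field in REQUIRED_FIELDS)

registered = {"name": "Zhang San", "id_number": "ID-PLACEHOLDER",
              "date_of_birth": "1999-01-01", "address": "Wuhan, Hubei"}
exact = identity_matches(dict(registered), registered)
mismatch = identity_matches({**registered, "name": "Li Si"}, registered)
```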
In other embodiments of the invention, as shown in fig. 7, after step S150, steps S160 and S170 are further included.
S160, if the identity document of the target object exists in the second fisheye image, performing target detection on the corrected second fisheye image to obtain face information of the target object in the second fisheye image;
S170, identifying the target object according to the face information of the target object in the second fisheye image so as to determine whether the face information of the target object in the second fisheye image is matched with the target object.
In this embodiment, since the identity document of the target object in the second fisheye image is in most cases an identity card or a temporary identity card, and an identity card or temporary identity card carries the face of its holder, after the text information on the identity document is recognized from the second fisheye image, the face information of the target object can also be recognized from the document and matched against both the face information identified in step S120 and the face information collected in the internal system, so that identity recognition of the examinee in the examination room can be realized more accurately.
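The three-way cross-check described above, sketched with cosine similarity (the metric and threshold are assumptions, and the feature vectors are illustrative):

```python
import numpy as np

def cos_sim(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def three_way_face_check(live_face, document_face, registered_face, threshold=0.8):
    """Cross-check the face cropped from the identity document against the
    live face from step S120 and the internally collected face; every pair
    must clear the threshold before the examinee is accepted."""
    pairs = [(live_face, document_face),
             (document_face, registered_face),
             (live_face, registered_face)]
    return all(cos_sim(a, b) >= threshold for a, b in pairs)

same = [0.2, 0.9, 0.4]
accepted = three_way_face_check(same, [0.21, 0.89, 0.41], [0.19, 0.90, 0.39])
rejected = three_way_face_check(same, [0.9, 0.1, 0.0], same)
```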
In the examination room-based identity recognition method provided by the embodiment of the invention, a first fisheye image shot by the fisheye camera in the examination room is acquired, and target detection is performed on the first fisheye image to obtain the face information of a target object; if the face information is matched with the target object, a second fisheye image shot by the fisheye camera in the examination room is acquired and processed to obtain the identity information of the target object, and identity recognition is then performed on the target object according to the identity information to determine whether the identity information is matched with the target object, so that the accuracy of identity verification of the examinee is greatly improved and the fairness and impartiality of the examination are ensured.
The embodiment of the invention also provides an examination room-based identity recognition device 100 for executing any embodiment of the examination room-based identity recognition method.
Specifically, referring to fig. 8, fig. 8 is a schematic block diagram of an examination room-based identification apparatus 100 according to an embodiment of the present invention.
As shown in fig. 8, the examination room-based identification apparatus 100 comprises: the first acquisition unit 110, the first detection unit 120, the second acquisition unit 130, the first image processing unit 140, and the first recognition unit 150.
The first acquiring unit 110 is configured to acquire a first fisheye image captured by the fisheye camera in the examination room.
The first detection unit 120 is configured to perform target detection on the first fisheye image, so as to obtain face information of a target object.
In other embodiments of the invention, the first detection unit 120 includes: the first correcting unit and the second detecting unit.
The first correcting unit is used for carrying out distortion correction on the first fisheye image according to a preset fisheye image correcting model to obtain a corrected first fisheye image; and the second detection unit is used for carrying out target detection on the corrected first fisheye image to obtain the face information of the target object.
In other embodiments of the present invention, the examination room-based identification apparatus 100 further comprises: a third acquisition unit, a construction unit and a distillation unit.
The third acquisition unit is used for acquiring a pre-trained generated type countermeasure network; wherein the generated countermeasure network includes a discriminator and a generator; a construction unit for constructing the fisheye image correction model according to the generator; wherein the generator is used as a teacher model, and the fisheye image correction model is used as a student model; and the distillation unit is used for performing knowledge distillation on the fisheye image correction model according to the generator to obtain a distilled fisheye image correction model.
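The application does not specify the architectures or the distillation objective; a toy numpy illustration of response-based knowledge distillation, with fixed linear maps standing in for the generator (teacher) and the fisheye image correction model (student), is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher: the frozen generator of the trained countermeasure (adversarial)
# network, stood in for by a fixed linear map. Student: the lighter
# correction model, trained to reproduce the teacher's outputs.
W_teacher = rng.normal(size=(4, 4))
W_student = np.zeros((4, 4))

X = rng.normal(size=(256, 4))   # stand-in for batches of fisheye inputs
lr = 0.05
for _ in range(500):
    diff = X @ W_student.T - X @ W_teacher.T     # student vs teacher responses
    grad = (2.0 / len(X)) * diff.T @ X           # gradient of the MSE distillation loss
    W_student -= lr * grad

distillation_error = float(np.mean((X @ W_student.T - X @ W_teacher.T) ** 2))
```

After training, the student reproduces the teacher's responses on the training inputs, which is the property the distillation unit relies on when it replaces the generator with the lighter correction model.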
In other embodiments of the present invention, the examination room-based identification apparatus 100 further comprises: and a second image processing unit and a training unit.
The second image processing unit is used for processing the training image by adopting the internal parameters and the distortion coefficients of the fisheye camera to obtain a distorted training image; and the training unit is used for training the generated type countermeasure network according to the training image and the distorted training image to obtain the trained generated type countermeasure network.
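The distortion step performed by the second image processing unit can be sketched under the common polynomial fisheye model (the application does not fix the model form; the intrinsics and coefficients below are illustrative):

```python
import numpy as np

def distort_points(normalized_pts, K, dist_coeffs):
    """Forward fisheye model in the same polynomial form as OpenCV's
    fisheye module: theta_d = theta * (1 + k1*t^2 + k2*t^4 + k3*t^6 + k4*t^8),
    mapping ideal normalized coordinates to distorted pixel coordinates."""
    k1, k2, k3, k4 = dist_coeffs
    pts = np.asarray(normalized_pts, dtype=float)
    r = np.linalg.norm(pts, axis=1)
    theta = np.arctan(r)
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4
                         + k3 * theta**6 + k4 * theta**8)
    scale = np.where(r > 0, theta_d / np.maximum(r, 1e-12), 1.0)
    xy = pts * scale[:, None]
    homogeneous = np.hstack([xy, np.ones((len(xy), 1))])
    return (homogeneous @ K.T)[:, :2]

K = np.array([[300.0, 0.0, 320.0],   # illustrative intrinsics: fx, fy, cx, cy
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
pixels = distort_points([[0.0, 0.0], [0.5, 0.0]], K, (0.1, 0.01, 0.0, 0.0))
```

Applying such a mapping to every pixel of an undistorted training image yields the distorted counterpart used to train the countermeasure network.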
In other embodiments of the invention, the first detection unit 120 includes: a determining unit and an extracting unit.
A determining unit, configured to determine a plurality of key points of the face of the target object; and the extraction unit is used for extracting the characteristics from the first fisheye image according to the key points to obtain the face information.
And a second acquiring unit 130, configured to acquire a second fisheye image captured by the fisheye camera in the examination room if the face information is matched with the target object.
And the first image processing unit 140 is configured to process the second fisheye image to obtain identity information of the target object.
In other embodiments of the invention, the first image processing unit 140 includes: the second correcting unit and the character recognition unit.
The second correcting unit is used for carrying out distortion correction on the second fisheye image according to the fisheye image correction model to obtain a corrected second fisheye image; and the character recognition unit is used for carrying out character recognition on the corrected second fisheye image to obtain the identity information of the target object.
The first identifying unit 150 is configured to identify the target object according to the identity information, so as to determine whether the identity information matches with the target object.
In other embodiments of the present invention, the examination room-based identification apparatus 100 further comprises: and a third detection unit and a second identification unit.
The third detection unit is used for carrying out target detection on the corrected second fisheye image if the identity document of the target object exists in the second fisheye image, so as to obtain the face information of the target object in the second fisheye image; and the second identification unit is used for carrying out identity identification on the target object according to the face information of the target object in the second fisheye image so as to determine whether the face information of the target object in the second fisheye image is matched with the target object.
The examination room-based identity recognition device 100 provided by the embodiment of the invention is used for executing the above-mentioned method: acquiring a first fisheye image shot by the fisheye camera in the examination room; performing target detection on the first fisheye image to obtain face information of a target object; if the face information is matched with the target object, acquiring a second fisheye image shot by the fisheye camera in the examination room; processing the second fisheye image to obtain the identity information of the target object; and carrying out identity recognition on the target object according to the identity information so as to determine whether the identity information is matched with the target object.
It should be noted that, as will be clearly understood by those skilled in the art, the specific implementation process of the above-mentioned examination room-based identification apparatus 100 and each unit may refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, the description is omitted here.
The examination room-based identification apparatus described above may be implemented in the form of a computer program that is operable on an electronic device as shown in fig. 9.
Referring to fig. 9, fig. 9 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Referring to fig. 9, the device 500 includes a processor 502, a memory, and a network interface 505, which are connected by a system bus 501, wherein the memory may include a storage medium 503 and an internal memory 504.
The storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, causes the processor 502 to perform an examination room-based identification method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall device 500.
The internal memory 504 provides an environment for the execution of a computer program 5032 in the non-volatile storage medium 503, which computer program 5032, when executed by the processor 502, causes the processor 502 to perform an examination room-based identification method.
The network interface 505 is used for network communication, such as providing for transmission of data information, etc. It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the apparatus 500 to which the present inventive arrangements are applied, and that a particular apparatus 500 may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Wherein the processor 502 is configured to execute a computer program 5032 stored in a memory to perform the following functions: acquiring a first fisheye image shot by a fisheye camera in an examination room; performing target detection on the first fisheye image to obtain face information of a target object; if the face information is matched with the target object, acquiring a second fisheye image shot by the fisheye camera in the examination room; processing the second fisheye image to obtain identity information of the target object; and carrying out identity recognition on the target object according to the identity information so as to determine whether the identity information is matched with the target object.
In an embodiment, when the processor 502 performs the target detection on the first fisheye image to obtain face information of the target object, the following steps are specifically further implemented: carrying out distortion correction on the first fisheye image according to a preset fisheye image correction model to obtain a corrected first fisheye image; and carrying out target detection on the corrected first fisheye image to obtain the face information of the target object.
In an embodiment, before implementing the distortion correction on the first fisheye image according to the preset image correction model, the processor 502 further specifically implements the following steps: acquiring a pre-trained generated countermeasure network; wherein the generated countermeasure network includes a discriminator and a generator; constructing the fisheye image correction model according to the generator; wherein the generator is used as a teacher model, and the fisheye image correction model is used as a student model; and carrying out knowledge distillation on the fisheye image correction model according to the generator to obtain a distilled fisheye image correction model.
In one embodiment, the processor 502 further performs the following steps before implementing the step of acquiring the pre-trained generated countermeasure network: processing the training image by adopting internal parameters and distortion coefficients of a fisheye camera to obtain a distorted training image; and training the generated type countermeasure network according to the training image and the distorted training image to obtain the trained generated type countermeasure network.
In an embodiment, when the processor 502 performs the target detection on the first fisheye image to obtain face information of the target object, the following steps are specifically further implemented: determining a plurality of key points of the target object face; and extracting features from the first fisheye image according to the key points to obtain the face information.
In an embodiment, when the processor 502 performs the processing on the second fisheye image to obtain the identity information of the target object, the following steps are specifically further implemented: carrying out distortion correction on the second fisheye image according to the fisheye image correction model to obtain a corrected second fisheye image; and performing character recognition on the corrected second fisheye image to obtain the identity information of the target object.
In an embodiment, after implementing the identification of the target object according to the identity information to determine whether the identity information matches the target object, the processor 502 specifically further implements the following steps: if the identity document of the target object exists in the second fisheye image, performing target detection on the corrected second fisheye image to obtain face information of the target object in the second fisheye image; and carrying out identity recognition on the target object according to the face information of the target object in the second fisheye image so as to determine whether the face information of the target object in the second fisheye image is matched with the target object.
Those skilled in the art will appreciate that the embodiment of the apparatus 500 shown in fig. 9 is not limiting of the specific construction of the apparatus 500, and in other embodiments, the apparatus 500 may include more or less components than illustrated, or certain components may be combined, or a different arrangement of components. For example, in some embodiments, the device 500 may include only the memory and the processor 502, and in such embodiments, the structure and the function of the memory and the processor 502 are consistent with the embodiment shown in fig. 9, and will not be described herein.
It should be appreciated that, in an embodiment of the invention, the processor 502 may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In another embodiment of the invention, a computer storage medium is provided. The storage medium may be a nonvolatile computer-readable storage medium or a volatile storage medium. The storage medium stores a computer program 5032, wherein the computer program 5032 when executed by the processor 502 performs the steps of: acquiring a first fisheye image shot by a fisheye camera in an examination room; performing target detection on the first fisheye image to obtain face information of a target object; if the face information is matched with the target object, acquiring a second fisheye image shot by the fisheye camera in the examination room; processing the second fisheye image to obtain identity information of the target object; and carrying out identity recognition on the target object according to the identity information so as to determine whether the identity information is matched with the target object.
In an embodiment, when the processor executes the program instruction to perform the target detection on the first fisheye image to obtain face information of the target object, the method specifically further includes the following steps: carrying out distortion correction on the first fisheye image according to a preset fisheye image correction model to obtain a corrected first fisheye image; and carrying out target detection on the corrected first fisheye image to obtain the face information of the target object.
In an embodiment, before the processor executes the program instructions to implement the distortion correction on the first fisheye image according to the preset image correction model, the method specifically further includes the following steps: acquiring a pre-trained generated countermeasure network; wherein the generated countermeasure network includes a discriminator and a generator; constructing the fisheye image correction model according to the generator; wherein the generator is used as a teacher model, and the fisheye image correction model is used as a student model; and carrying out knowledge distillation on the fisheye image correction model according to the generator to obtain a distilled fisheye image correction model.
In an embodiment, before executing the program instructions to implement the acquiring of the pre-trained generated countermeasure network, the processor specifically further implements the following steps: processing the training image by adopting internal parameters and distortion coefficients of a fisheye camera to obtain a distorted training image; and training the generated type countermeasure network according to the training image and the distorted training image to obtain the trained generated type countermeasure network.
In an embodiment, when the processor executes the program instruction to perform the target detection on the first fisheye image to obtain face information of the target object, the method specifically further includes the following steps: determining a plurality of key points of the target object face; and extracting features from the first fisheye image according to the key points to obtain the face information.
In an embodiment, when the processor executes the program instructions to implement the processing of the second fisheye image to obtain the identity information of the target object, the method specifically further includes the following steps: carrying out distortion correction on the second fisheye image according to the fisheye image correction model to obtain a corrected second fisheye image; and performing character recognition on the corrected second fisheye image to obtain the identity information of the target object.
In an embodiment, after executing the program instructions to implement the identifying the target object according to the identity information to determine whether the identity information matches the target object, the processor specifically further implements the following steps: if the identity document of the target object exists in the second fisheye image, performing target detection on the corrected second fisheye image to obtain face information of the target object in the second fisheye image; and carrying out identity recognition on the target object according to the face information of the target object in the second fisheye image so as to determine whether the face information of the target object in the second fisheye image is matched with the target object.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, device and unit described above may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein. Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the units is merely a logical function division, there may be another division manner in actual implementation, or units having the same function may be integrated into one unit, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units may be stored in a storage medium if implemented in the form of software functional units and sold or used as stand-alone products. Based on such understanding, the technical solution of the present invention may be essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing an apparatus 500 (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. An examination room-based identity recognition method is characterized by comprising the following steps:
acquiring a first fisheye image shot by a fisheye camera in an examination room;
performing target detection on the first fisheye image to obtain face information of a target object;
if the face information is matched with the target object, acquiring a second fisheye image shot by the fisheye camera in the examination room;
processing the second fisheye image to obtain identity information of the target object;
and carrying out identity recognition on the target object according to the identity information so as to determine whether the identity information is matched with the target object.
2. The examination room-based identification method according to claim 1, wherein the performing object detection on the first fisheye image to obtain face information of an object comprises:
carrying out distortion correction on the first fisheye image according to a preset fisheye image correction model to obtain a corrected first fisheye image;
and carrying out target detection on the corrected first fisheye image to obtain the face information of the target object.
3. An examination room-based identification method according to claim 2, wherein before the distortion correction is performed on the first fisheye image according to a preset image correction model, the method further comprises:
acquiring a pre-trained generated countermeasure network; wherein the generated countermeasure network includes a discriminator and a generator;
constructing the fisheye image correction model according to the generator; wherein the generator is used as a teacher model, and the fisheye image correction model is used as a student model;
and carrying out knowledge distillation on the fisheye image correction model according to the generator to obtain a distilled fisheye image correction model.
4. An examination room-based identification method as defined in claim 2, further comprising, prior to the acquiring of the pre-trained generated countermeasure network:
processing the training image by adopting internal parameters and distortion coefficients of a fisheye camera to obtain a distorted training image;
and training the generated type countermeasure network according to the training image and the distorted training image to obtain the trained generated type countermeasure network.
5. The examination room-based identification method according to claim 1, wherein the performing object detection on the first fisheye image to obtain face information of an object comprises:
determining a plurality of key points of the target object face;
and extracting features from the first fisheye image according to the key points to obtain the face information.
6. The examination room-based identity recognition method of claim 2, wherein the processing the second fisheye image to obtain the identity information of the target object comprises:
carrying out distortion correction on the second fisheye image according to the fisheye image correction model to obtain a corrected second fisheye image;
and performing character recognition on the corrected second fisheye image to obtain the identity information of the target object.
7. An examination room-based identification method as defined in claim 6, further comprising, after the identifying the target object based on the identity information to determine whether the identity information matches the target object:
if the identity document of the target object exists in the second fisheye image, performing target detection on the corrected second fisheye image to obtain face information of the target object in the second fisheye image;
and carrying out identity recognition on the target object according to the face information of the target object in the second fisheye image so as to determine whether the face information of the target object in the second fisheye image is matched with the target object.
8. An examination room-based identification device, comprising:
the first acquisition unit is used for acquiring a first fisheye image shot by the fisheye camera in the examination room;
the first detection unit is used for carrying out target detection on the first fisheye image to obtain face information of a target object;
the second acquisition unit is used for acquiring a second fisheye image shot by the fisheye camera in the examination room if the face information is matched with the target object;
the first image processing unit is used for processing the second fisheye image to obtain the identity information of the target object;
and the first identification unit is used for carrying out identity identification on the target object according to the identity information so as to determine whether the identity information is matched with the target object.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the examination room-based identification method of any one of claims 1 to 7 when the computer program is executed by the processor.
10. A computer readable storage medium, wherein the computer readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform an examination room-based identification method as claimed in any one of claims 1 to 7.
CN202310384409.7A 2023-04-06 2023-04-06 Examination room-based identity recognition method and device, electronic equipment and storage medium Pending CN116453185A (en)

Priority Applications (1)

Application Number: CN202310384409.7A
Priority Date: 2023-04-06
Filing Date: 2023-04-06
Title: Examination room-based identity recognition method and device, electronic equipment and storage medium


Publications (1)

Publication Number: CN116453185A
Publication Date: 2023-07-18

Family

ID: 87121468

Family Applications (1)

Application Number: CN202310384409.7A
Title: Examination room-based identity recognition method and device, electronic equipment and storage medium
Priority Date: 2023-04-06
Filing Date: 2023-04-06
Status: Pending

Country Status (1)

Country: CN
Publication: CN116453185A (en)

Similar Documents

Publication Publication Date Title
CN110490076B (en) Living body detection method, living body detection device, computer equipment and storage medium
US11017210B2 (en) Image processing apparatus and method
CN110232326B (en) Three-dimensional object recognition method, device and storage medium
CN112668519A (en) Abnormal face recognition living body detection method and system based on MCCAE network and Deep SVDD network
JP6071002B2 (en) Reliability acquisition device, reliability acquisition method, and reliability acquisition program
JP2020184331A (en) Liveness detection method and apparatus, face authentication method and apparatus
CN111783629A (en) Human face in-vivo detection method and device for resisting sample attack
CN117121068A (en) Personalized biometric anti-fraud protection using machine learning and enrollment data
CN112651333B (en) Silence living body detection method, silence living body detection device, terminal equipment and storage medium
CN113095156B (en) Double-current network signature identification method and device based on inverse gray scale mode
CN111222380A (en) Living body detection method and device and recognition model training method thereof
CN112613471B (en) Face living body detection method, device and computer readable storage medium
CN111353325A (en) Key point detection model training method and device
CN111046755A (en) Character recognition method, character recognition device, computer equipment and computer-readable storage medium
JP2020098588A (en) Curvilinear object segmentation with noise priors
CN112633113B (en) Cross-camera human face living body detection method and system
CN112926508B (en) Training method and device of living body detection model
CN112308035A (en) Image detection method, image detection device, computer equipment and storage medium
CN112767403A (en) Medical image segmentation model training method, medical image segmentation method and device
CN116386117A (en) Face recognition method, device, equipment and storage medium
CN116229528A (en) Living body palm vein detection method, device, equipment and storage medium
CN116704401A (en) Grading verification method and device for operation type examination, electronic equipment and storage medium
JP2020098589A (en) Curvilinear object segmentation with geometric priors
CN108875467B (en) Living body detection method, living body detection device and computer storage medium
CN113657293B (en) Living body detection method, living body detection device, electronic equipment, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination