CN114241575B - AI-based deep learning big data face recognition system

AI-based deep learning big data face recognition system

Info

Publication number
CN114241575B
CN114241575B
Authority
CN
China
Prior art keywords
face
image
data
characteristic
feature data
Prior art date
Legal status
Active
Application number
CN202111595487.9A
Other languages
Chinese (zh)
Other versions
CN114241575A (en)
Inventor
郑飞
胡伟
蔡强
杨凯
Current Assignee
Guangzhou Intelligent Computing Information Technology Co ltd
Original Assignee
Guangzhou Intelligent Computing Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Intelligent Computing Information Technology Co ltd
Priority to CN202111595487.9A priority Critical patent/CN114241575B/en
Publication of CN114241575A publication Critical patent/CN114241575A/en
Application granted granted Critical
Publication of CN114241575B publication Critical patent/CN114241575B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an AI-based deep learning big data face recognition system, comprising: an image extraction module for extracting video frame images from the acquired video data; a face recognition module for performing face recognition on the acquired video frame images, performing feature extraction on each recognized face to obtain face feature data, and generating faceID face identification information from the face feature data; an association processing module for associating the obtained faceID face identification information, the time information TimeInfo and the shooting location information SiteInfo of the video frame image with the corresponding HumanID person identifier to generate person-associated spatio-temporal data; and a data management module for storing and managing the person-associated spatio-temporal data. The invention associates the recognized face information with spatio-temporal data, which helps to improve the adaptability and intelligence of video image data processing and suits the requirements of different application scenarios.

Description

AI-based deep learning big data face recognition system
Technical Field
The invention relates to the technical field of face recognition, in particular to a big data face recognition system based on AI deep learning.
Background
At present, intelligent processing of video images is applied more and more widely, and video image processing technology receives increasing attention from researchers. Current video image processing technology can automatically identify information in an image from the obtained video image data, providing a basis for further intelligent control and intelligent processing.
In the prior art, the technical means used for face recognition in video images generally provide only a single function and have low adaptability, and cannot meet the requirements of modern artificial-intelligence video image processing.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a big data face recognition system based on deep learning of AI.
The purpose of the invention is realized by adopting the following technical scheme:
the invention provides a big data face recognition system based on AI deep learning, comprising: the system comprises an image extraction module, a face recognition module, an association processing module and a data management module;
the image extraction module is used for extracting video frame images according to the acquired video data;
the face recognition module is used for carrying out face recognition processing according to the acquired video frame image, carrying out feature extraction processing on the recognized face to obtain face feature data, and generating faceID face identification information according to the face feature data;
the association processing module is used for associating the obtained faceID face identification information and the spatio-temporal information of the video frame image with a corresponding HumanID person identifier to construct person-associated spatio-temporal data; the spatio-temporal information of the video frame image comprises time information TimeInfo and shooting location information SiteInfo;
the data management module is used for storing and managing the person-associated spatio-temporal data.
In one embodiment, the image extraction module includes a receiving unit and a video frame extraction unit,
the receiving unit is used for receiving video image data; the video image data carries shooting time information and shooting place information;
the video frame extraction unit is used for extracting video frame images according to the video data.
In one embodiment, the face recognition module comprises a face detection unit and a feature extraction unit;
The face detection unit is used for detecting a face part in the video frame image and extracting a face region image;
and the feature extraction unit is used for extracting the face features of the extracted face region image to obtain face feature data and performing faceID face identification according to the face feature data.
In one embodiment, the feature extraction unit comprises:
matching and comparing the acquired face feature data with the face feature data corresponding to the faces contained in the previous video frame image; if a face whose face feature data similarity is greater than a set threshold is matched, the face in the current frame image is identified with the faceID of the corresponding face in the previous video frame image; if no face whose face feature data similarity is greater than the set threshold can be matched, the corresponding face is marked with a new faceID face identification according to the currently acquired face feature data.
In one embodiment, the feature extraction unit includes:
and matching the acquired face characteristic data with face characteristic data prestored in a face database, and acquiring the faceID of the matched face to identify the face in the current frame image.
In one embodiment, the association processing module includes an association unit,
the association unit is used for establishing a HumanID identification for a person corresponding to the faceID when the faceID appears in the video data for the first time, and associating time information TimeInfo and shooting location information SiteInfo corresponding to a video frame image in which the person appears with the corresponding HumanID identification to generate the person-associated spatiotemporal data of the person appearing in the video.
In one embodiment, the data management module includes a storage unit;
the storage unit is used for performing associated storage on the HumanID identification appearing in the video data and corresponding time information TimeInfo and shooting place information SiteInfo to construct an associated database.
In one embodiment, the system further comprises a query module,
the query module is used for querying from the association database according to one or more items of the shooting time information, the shooting place information and the person identification information, and acquiring other associated information associated with the query information.
The invention has the beneficial effects that:
the invention provides a technical scheme for processing video images, wherein video frame images are extracted from obtained video data, and face identification processing is carried out based on video frame image memorability, when face images appear in a video, the faces are automatically identified, and meanwhile, time information and place information corresponding to the face identifications and the video images are further associated to corresponding person identifications, so that time and place association of persons appearing in the video is facilitated, the obtained associated data is uniformly stored and managed, and one or two of the person data, the time data and the space data can be designated to inquire other data. Lays a foundation for further extended processing (such as behavior recognition, track recognition and the like) according to the associated data. The method is beneficial to improving the adaptability and the intelligent level of video image data processing and adapting to the requirements of different application scenes.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a frame structure diagram of a big data face recognition system based on AI deep learning according to the present invention.
Detailed Description
The invention is further described in connection with the following application scenarios.
Referring to fig. 1, an embodiment of the invention provides a big data face recognition system based on AI deep learning, which includes: the system comprises an image extraction module, a face recognition module, an association processing module and a data management module;
the image extraction module is used for extracting video frame images according to the acquired video data;
the face recognition module is used for carrying out face recognition processing according to the acquired video frame image, carrying out feature extraction processing on the recognized face to obtain face feature data, and generating faceID face identification information according to the face feature data;
the association processing module is used for associating the obtained faceID face identification information and the spatio-temporal information of the video frame image with a corresponding HumanID person identifier to construct person-associated spatio-temporal data; the spatio-temporal information of the video frame image comprises time information TimeInfo and shooting location information SiteInfo;
the data management module is used for storing and managing the person-associated spatio-temporal data.
The above embodiment of the present invention provides a technical solution for processing video images, in which video frame images are extracted from the obtained video data and face recognition processing is performed on the video frame images. When a face image appears in the video, the face is automatically identified, and the time information and location information corresponding to the face identification and the video image are further associated with the corresponding person identifier, which facilitates associating time and location with the persons appearing in the video. The obtained associated data are stored and managed in a unified manner, and any one or two of the person data, time data and space data can be specified to query the other data. This lays a foundation for further extended processing (such as behavior recognition, trajectory recognition and the like) based on the associated data, helps to improve the adaptability and intelligence of video image data processing, and suits the requirements of different application scenarios.
In one embodiment, the image extraction module includes a receiving unit and a video frame extraction unit,
the receiving unit is used for receiving video image data; the video image data carries shooting time information and shooting place information;
the video frame extraction unit is used for extracting video frame images according to the video data.
In one scenario, for video stream data, frame processing is performed on the received video stream data to obtain each video frame image.
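As an illustration only, the following minimal sketch shows how per-frame video images with millisecond-level timestamps could be extracted from a received video stream; it assumes OpenCV is available, and the file path, stream start time and site name are hypothetical inputs rather than part of the patented system.

```python
# Illustrative sketch only: extracting video frame images with per-frame
# timestamps from a received video stream (assumes OpenCV; file path,
# start time and site name are hypothetical inputs).
import cv2
from datetime import datetime, timedelta

def extract_frames(video_path, start_time, site_info, step=5):
    """Yield (frame_image, TimeInfo, SiteInfo) for every `step`-th frame."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            # millisecond-level TimeInfo derived from the frame index
            time_info = start_time + timedelta(milliseconds=1000.0 * index / fps)
            yield frame, time_info, site_info
        index += 1
    cap.release()

# Example usage with hypothetical values:
# for frame, t, site in extract_frames("entrance.mp4",
#                                      datetime(2021, 12, 23, 8, 0, 0),
#                                      "Building A entrance"):
#     ...
```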
Wherein the video frame extracting unit also performs preprocessing, such as region division processing, on the extracted video frame image.
In one scenario, according to the obtained video frame image, the image is divided using a preset region division rule into a controlled region and an open region. For the video picture of the controlled region, the entry and exit of authorized/unauthorized persons are further detected and distinguished, and a behavior trajectory record of the target person is provided. For the open region, people-flow trajectory data are recorded, trajectory playback of specific persons is performed, and interest tags of persons are recorded based on analysis of their trajectories.
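For illustration, region division can be sketched as a simple point-in-rectangle rule; the rectangles below are hypothetical placeholders for a preset region-division rule and are not taken from the patent.

```python
# Illustrative sketch only: tagging a detected face by whether it falls in a
# controlled region or an open region, using a preset region-division rule
# expressed here as hypothetical axis-aligned rectangles (x1, y1, x2, y2).
CONTROLLED_REGIONS = [(0, 0, 400, 720)]      # assumed controlled area(s)
OPEN_REGIONS = [(400, 0, 1280, 720)]         # assumed open area(s)

def region_of(box):
    """Classify a face bounding box (x, y, w, h) by its centre point."""
    cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
    for x1, y1, x2, y2 in CONTROLLED_REGIONS:
        if x1 <= cx < x2 and y1 <= cy < y2:
            return "controlled"
    for x1, y1, x2, y2 in OPEN_REGIONS:
        if x1 <= cx < x2 and y1 <= cy < y2:
            return "open"
    return "unknown"
```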
In one embodiment, the face recognition module comprises a face detection unit and a feature extraction unit;
the face detection unit is used for detecting a face part in the video frame image and extracting a face region image;
and the feature extraction unit is used for extracting the face features of the extracted face region image to obtain face feature data and performing faceID face identification according to the face feature data.
In one scenario, the face detection unit detects a face portion in a video frame image based on an AI engine to obtain a face region image. The feature extraction unit extracts feature vectors of the face region based on the neural network model, and performs faceID face identification on the face appearing in the video image according to the obtained feature parameters.
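As a sketch only, the following uses an OpenCV Haar cascade as a stand-in for the AI-engine face detector and a trivial pixel-based embedding as a stand-in for the neural-network feature extractor; both stand-ins are assumptions, not the components the patent actually specifies.

```python
# Illustrative sketch only: detect face regions in a video frame image and
# extract a feature vector per face. The Haar cascade and the embed() stub
# are stand-ins for the AI engine / neural network model of the system.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_regions(frame):
    """Return a list of cropped face region images from one video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in boxes]

def embed(face_img, size=(64, 64)):
    """Placeholder feature extractor: a normalised pixel vector.
    A real system would use the CNN feature vector instead."""
    resized = cv2.resize(cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY), size)
    vec = resized.astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)
```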
In one embodiment, the feature extraction unit comprises:
matching and comparing the acquired face feature data with the face feature data corresponding to the faces contained in the previous video frame image; if a face whose face feature data similarity is greater than a set threshold is matched, the face in the current frame image is identified with the faceID of the corresponding face in the previous video frame image; if no face whose face feature data similarity is greater than the set threshold can be matched, the corresponding face is marked with a new faceID face identification according to the currently acquired face feature data.
In one embodiment, the feature extraction unit includes:
and matching the acquired face characteristic data with face characteristic data prestored in a face database, and acquiring the faceID of the matched face to identify the face in the current frame image.
Depending on the application scenario, the feature extraction unit can perform faceID face identification on detected faces in different ways. For example, when the obtained video image is an open video image, i.e. the region captured in the video image is an open region and the identity of persons appearing in the video image cannot be determined in advance, the feature extraction unit extracts face feature data from the detected face region and compares the obtained feature data with the feature data of faces appearing in the previous frame (or several frames) of the video. When the feature data similarity is higher than a set threshold, the face is judged to belong to the same person and is identified with the same faceID; if no face with similar feature data is detected in the previous frame (or frames), the face is judged to have entered the video for the first time and is marked with a new faceID, so that the persons appearing in the video can be distinguished. As another example, when the video image data are captured in a preset environment, or a database has been constructed for specific persons or persons who have appeared before, recording their FaceID identification information, the feature extraction unit matches and compares the obtained face feature data of the face region with the face feature data prestored in the database; when face feature data with similarity above a set standard are found, the corresponding FaceID is acquired from the database to identify the face. If no match meeting the standard is detected, a new faceID is used to identify the face.
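A minimal sketch of this faceID assignment logic follows; the cosine similarity measure and the 0.8 threshold are assumed placeholders for the "similarity greater than a set threshold" rule.

```python
# Illustrative sketch only: assigning a faceID either by matching against the
# faces of the previous frame (open scenes) or against a prestored face
# database (controlled scenes). Cosine similarity and the 0.8 threshold are
# assumptions, not values specified by the patent.
import itertools
import numpy as np

_next_id = itertools.count(1)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def assign_face_id(feature, previous_faces=None, face_db=None, threshold=0.8):
    """previous_faces / face_db: dict mapping faceID -> feature vector."""
    candidates = face_db if face_db is not None else (previous_faces or {})
    best_id, best_sim = None, -1.0
    for face_id, ref in candidates.items():
        sim = cosine(feature, ref)
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    if best_id is not None and best_sim > threshold:
        return best_id                      # same person as a known face
    return f"face-{next(_next_id)}"         # first appearance: new faceID
```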
In one embodiment, in the feature extraction unit,
matching and comparing the acquired face feature data with target face feature data, wherein the target face feature data comprise face feature data of a target image, the target image comprises a previous video frame image or an image prestored in a face database, and the target face feature data comprise face feature data corresponding to a face contained in the previous video frame image or face feature data prestored in the face database; the method specifically comprises the following steps:
1) Acquiring a face region image;
2) Performing size adjustment and sharpness adjustment on the acquired face region image to obtain an adjusted face region image;
3) Carrying out gray processing on the adjusted face region image to obtain a gray characteristic image, comprising the following steps:
converting the adjusted face region image from an RGB color space to a gray scale space, wherein the adopted gray scale feature conversion function is as follows:
(the gray feature conversion function is reproduced as an equation image in the original publication)
in the formula, h(x, y) denotes the gray value of pixel point (x, y); r(x, y), g(x, y) and j(x, y) denote the R, G and B component values of pixel point (x, y), respectively; the maximum and the minimum of the R, G, B component values of pixel point (x, y) also appear in the function; ω1 and ω2 denote set weight factors, where ω1 + ω2 = 1, ω1 ∈ [0.65, 0.8], ω2 ∈ [0.2, 0.35]; hj(x, y) denotes the standard gray value of pixel point (x, y), where hj(x, y) = 0.299·r(x, y) + 0.587·g(x, y) + 0.114·j(x, y);
Obtaining a gray characteristic image according to the gray value of each pixel point;
4) Inputting the gray characteristic image into a CNN model to obtain a first CNN characteristic output by the CNN model;
and calculating the characteristic distance between the first CNN characteristic and a first target CNN characteristic corresponding to the target image, and confirming the first characteristic matching degree according to the obtained characteristic distance.
The CNN model is a CNN model with a 9-layer structure and comprises an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a third convolution layer, a third pooling layer, a first full-connection layer, a second full-connection layer and an output layer which are connected in sequence;
wherein the input of the input layer is the gray feature image; the second fully-connected layer outputs a feature vector reflecting the features of the input image, and this feature vector is output by the output layer; the distance between this feature vector and the feature vector obtained from the target image is calculated, so as to obtain the first feature matching degree.
5) Performing asymmetric coding processing based on the gray feature image obtained in step 3) to obtain a coded feature image, including:
performing two-dimensional Gabor wavelet transform on the gray characteristic image based on the obtained gray characteristic image to obtain a Gabor wavelet image;
acquiring a first coding matrix of each pixel point in the Gabor wavelet image, wherein the first coding matrix acquiring function is as follows:
(the first coding matrix function is reproduced as an equation image in the original publication)
wherein G1(x, y) denotes the first coding matrix of pixel point (x, y) and h(x, y) denotes the gray value of pixel point (x, y); the matrix G1(x, y) comprises 1 center point and 8 neighborhood points, the neighborhood points being numbered 1, 2, …, 8 in order;
based on the obtained first coding matrix, neighborhood feature coding is further performed to obtain a second coding feature k2 = {k2(1), k2(2), …, k2(8)} (the coding formula is reproduced as an equation image in the original publication);
in the formula, k2(n) denotes the nth element of the second coding feature; c(n) denotes the correlation feature value of the nth neighborhood point in the first coding matrix G1(x, y), where c(n) = |G1(n) − G1(0)|, G1(n) denotes the gray value of the nth neighborhood point in the first coding matrix G1(x, y), and G1(0) denotes the gray value of the center point of G1(x, y); the mean of the correlation feature values of the neighborhood points in the first coding matrix G1(x, y) (its symbol is given as an image in the original publication) and their standard deviation σc are also used; the remaining symbol denotes a step function whose three piecewise cases (conditions and corresponding values) are reproduced as equation images in the original publication;
based on the second coding feature k2, the coding feature value of pixel point (x, y) is calculated (the calculation formula is reproduced as an equation image in the original publication);
in the formula, f(x, y) denotes the coding feature value of pixel point (x, y);
the coding feature image is formed based on the coding feature values of the pixel points.
Inputting the coding characteristic image into a CNN model to obtain a second CNN characteristic output by the CNN model;
and calculating the characteristic distance between the second CNN characteristic and a second target CNN characteristic corresponding to the target image, and confirming the second characteristic matching degree according to the obtained characteristic distance.
6) According to a preset calculation rule, comprehensively fusing the first feature matching degree and the second feature matching degree to obtain the similarity between the face feature data of the acquired face region image and the face feature data of the target image; the method comprises the following steps:
the similarity between the face feature data of the acquired face region image and the face feature data of the target image is obtained by adopting the following formula through comprehensive fusion:
Y = ω3 × Y1 + ω4 × Y2
in the formula, Y denotes the similarity between the face feature data of the acquired face region image and the face feature data of the target image, Y1 denotes the first feature matching degree, Y2 denotes the second feature matching degree, and ω3 and ω4 denote set weight values, where ω3 + ω4 = 1.
7) And if the similarity between the face feature data of the acquired face region image and the face feature data of the target image is greater than a set standard threshold, taking the faceID of the face corresponding to the target image as the faceID of the face corresponding to the acquired face region image.
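For illustration, steps 6) and 7) can be sketched as follows, assuming the two feature matching degrees have already been derived from the CNN feature distances; the weights 0.6/0.4 and the standard threshold 0.75 are placeholder values, not values specified by the patent.

```python
# Illustrative sketch only of steps 6) and 7): fuse the two feature matching
# degrees with Y = w3*Y1 + w4*Y2 (w3 + w4 = 1) and reuse the target faceID
# when the fused similarity exceeds the set standard threshold.
def fuse_and_decide(y1, y2, target_face_id, new_face_id,
                    w3=0.6, w4=0.4, standard_threshold=0.75):
    assert abs(w3 + w4 - 1.0) < 1e-9, "weights must sum to 1"
    similarity = w3 * y1 + w4 * y2
    if similarity > standard_threshold:
        return target_face_id      # acquired face matches the target image
    return new_face_id             # otherwise mark a new faceID

# Example: fuse_and_decide(0.9, 0.7, "face-12", "face-57") -> "face-12"
```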
The above embodiment provides a technical solution for performing feature extraction on a face region image and carrying out similarity matching based on the extracted face feature data to obtain a FaceID identification. During feature extraction on the face region image, the gray features and the asymmetric coding features respectively reflect the local gray features and the local texture features of the face region image; similarity matching against the target image is performed on these two different features, so that the final similarity between the face in the face region image and the face in the target image is obtained and the FaceID identification is completed, which improves the accuracy of identifying persons appearing in the video image.
When the face image is converted to gray scale, a gray feature conversion function dedicated to graying the face region image is provided. This function takes into account the feature distances of the video frame image across the different RGB color components and adds a feature-distance evaluation term specifically reflecting face features, which improves the gray conversion of naturally occluded parts of the face, avoids losing the user's face features during graying, and thereby improves the face recognition effect.
Meanwhile, a technical solution is provided for further performing asymmetric coding on the obtained gray feature image. A two-dimensional Gabor wavelet transform is applied to the gray feature image to obtain a Gabor wavelet image, which effectively reflects the texture features of the face region image. For each pixel point, a first coding matrix is obtained jointly from the gray features of the pixel and of the other pixels in its neighborhood, and a coding feature feedback function based on the first coding matrix further extracts the texture change features reflected at each pixel point. The texture features are converted into feature values in the range 0-255, restoring a coding feature image that reflects the texture changes of the image. The CNN model then extracts the corresponding features from the obtained coding feature image, and feature vector matching is performed on the extracted CNN features, so that the texture change features of the face are accurately extracted and matched, which improves the accuracy and adaptability of face recognition.
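By way of example only, a PyTorch sketch of a CNN with the layer sequence described in step 4) (input layer, three convolution/pooling pairs, two fully-connected layers, output layer) might look as follows; the channel counts, kernel sizes and the 64×64 single-channel input are assumptions, since the patent does not specify them.

```python
# Illustrative sketch only: a CNN with the layer sequence described above
# (conv/pool x3, two fully-connected layers, output). Channel counts, kernel
# sizes and the 64x64 single-channel input size are assumed values.
import torch
import torch.nn as nn

class FaceFeatureCNN(nn.Module):
    def __init__(self, feature_dim=128, num_ids=1000):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # conv1 + pool1
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv2 + pool2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv3 + pool3
        )
        self.fc1 = nn.Linear(64 * 8 * 8, 256)          # first fully-connected layer
        self.fc2 = nn.Linear(256, feature_dim)         # second FC layer: feature vector
        self.out = nn.Linear(feature_dim, num_ids)     # output layer

    def forward(self, x):                  # x: (N, 1, 64, 64) gray / coding feature image
        h = self.backbone(x).flatten(1)
        feature = self.fc2(torch.relu(self.fc1(h)))
        return feature, self.out(feature)

# The feature distance between two images can then be taken as, e.g.,
# torch.dist(feature_a, feature_b) to derive a feature matching degree.
```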
In one embodiment, the association processing module includes an association unit,
the association unit is used for establishing a HumanID identification for a person corresponding to the faceID when the faceID appears in the video data for the first time, and associating time information TimeInfo and shooting location information SiteInfo corresponding to a video frame image where the person appears with the corresponding HumanID identification to generate the spatio-temporal data associated with the person appearing in the video.
In one scenario, the time information TimeInfo, taken from the time corresponding to the video frame image, can be accurate to the millisecond level; the shooting location information SiteInfo can be obtained from the location information of the shooting device that captured the video frame image, for example via a preset correspondence between shooting devices and shooting locations.
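As a sketch under these assumptions (a preset camera-to-location table and illustrative field names, none of which are specified by the patent), the association unit could be modelled as follows:

```python
# Illustrative sketch only: create a HumanID on the first appearance of a
# faceID and associate TimeInfo / SiteInfo with it. The camera-to-location
# mapping and field names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

CAMERA_SITES = {"cam-01": "Building A entrance"}   # preset device -> location

@dataclass
class PersonSpatioTemporal:
    human_id: str
    face_id: str
    records: List[tuple] = field(default_factory=list)   # (TimeInfo, SiteInfo)

class AssociationUnit:
    def __init__(self):
        self.by_face_id: Dict[str, PersonSpatioTemporal] = {}

    def associate(self, face_id: str, time_info: datetime, camera_id: str):
        site_info = CAMERA_SITES.get(camera_id, "unknown")
        if face_id not in self.by_face_id:           # first appearance of faceID
            human_id = f"human-{len(self.by_face_id) + 1}"
            self.by_face_id[face_id] = PersonSpatioTemporal(human_id, face_id)
        self.by_face_id[face_id].records.append((time_info, site_info))
        return self.by_face_id[face_id]
```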
In one embodiment, the data management module includes a storage unit;
the storage unit is used for performing associated storage on the HumanID identifier appearing in the video data and the corresponding time information TimeInfo and shooting place information SiteInfo to construct an associated database.
Any one or two of the three types of data, namely the person data (HumanID identifier), the time data (time information TimeInfo) and the space data (location information SiteInfo), may be specified in order to look up the remaining associated data.
In one embodiment, the system further comprises a query module,
the query module is used for querying from the association database according to one or more items of the shooting time information, the shooting place information and the person identification information, and acquiring other associated information associated with the query information.
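For illustration, the association database and query module can be sketched with an SQLite table whose schema and column names are assumptions; any one or two of HumanID, time and location can then be used to look up the remaining data.

```python
# Illustrative sketch only: an association database queried by any one or two
# of HumanID, TimeInfo and SiteInfo to retrieve the other associated data.
# The SQLite schema, column names and sample row are assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE assoc (human_id TEXT, time_info TEXT, site_info TEXT)")
db.execute("INSERT INTO assoc VALUES ('human-1', '2021-12-23 08:00:01.250', 'Building A entrance')")

def query(human_id=None, time_from=None, time_to=None, site_info=None):
    sql, args = "SELECT human_id, time_info, site_info FROM assoc WHERE 1=1", []
    if human_id:
        sql += " AND human_id = ?"; args.append(human_id)
    if time_from:
        sql += " AND time_info >= ?"; args.append(time_from)
    if time_to:
        sql += " AND time_info <= ?"; args.append(time_to)
    if site_info:
        sql += " AND site_info = ?"; args.append(site_info)
    return db.execute(sql, args).fetchall()

# e.g. query(site_info="Building A entrance") returns the HumanIDs and times
# recorded at that location.
```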
Meanwhile, the face recognition system of the invention can be further extended with other functions. For example: 1) besides recognizing which person a face belongs to, further attributes can be recognized, such as gender, age, whether a hat or glasses are worn, and clothing; 2) the system can be further extended to recognize personnel actions, such as walking, grasping objects and industrial operations; 3) it can be further extended to recognize other objects, such as vehicles, buildings and industrial objects. The content identified by these extensions is associated with the HumanID identifier, the time information TimeInfo, the location information SiteInfo and the like to construct a spatio-temporal data association database suitable for different scenarios.
It should be noted that, functional units/modules in the embodiments of the present invention may be integrated into one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules are integrated into one unit/module. The integrated unit/module may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit/module.
From the above description of the embodiments, it is clear for a person skilled in the art that the embodiments described herein can be implemented in hardware, software, firmware, middleware, code or any appropriate combination thereof. For a hardware implementation, a processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the procedures of an embodiment may be performed by a computer program instructing associated hardware. In practice, the program may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. Computer-readable media can include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit its protection scope. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (5)

1. An AI-based deep learning big data face recognition system, comprising: the system comprises an image extraction module, a face recognition module, an association processing module and a data management module;
the image extraction module is used for extracting video frame images according to the acquired video data;
the face recognition module is used for carrying out face recognition processing according to the acquired video frame image, carrying out feature extraction processing on the recognized face to obtain face feature data, and generating faceID face identification information according to the face feature data;
the association processing module is used for associating the obtained faceID face identification information and the spatio-temporal information of the video frame image with a corresponding HumanID person identifier to construct person-associated spatio-temporal data; the spatio-temporal information of the video frame image comprises time information TimeInfo and shooting location information SiteInfo;
the data management module is used for storing and managing the person-associated spatio-temporal data;
wherein the face recognition module comprises a face detection unit and a feature extraction unit;
The face detection unit is used for detecting a face part in the video frame image and extracting a face region image;
the feature extraction unit is used for extracting the face features of the extracted face region image to obtain face feature data and performing faceID face identification according to the face feature data;
the feature extraction unit includes:
matching and comparing the acquired face feature data with the face feature data corresponding to the faces contained in the previous video frame image, and if a face whose face feature data similarity is greater than a set threshold is matched, identifying the face in the current frame image with the faceID of the corresponding face in the previous video frame image; if no face whose face feature data similarity is greater than the set threshold can be matched, marking the corresponding face with a new faceID face identification according to the currently acquired face feature data;
and/or,
matching the acquired face feature data with face feature data prestored in a face database, and acquiring a faceID of the matched face to identify the face in the current frame image;
wherein, in the feature extraction unit,
matching and comparing the acquired face feature data with target face feature data, wherein the target face feature data comprises face feature data of a target image, the target image comprises a previous frame video frame image or an image prestored in a face database, and the target face feature data comprises face feature data corresponding to a face contained in the previous frame video frame image or face feature data prestored in the face database; the method specifically comprises the following steps:
1) Acquiring a face region image;
2) Carrying out size adjustment and definition adjustment on the acquired face region image to obtain an adjusted face region image;
3) Carrying out gray processing on the adjusted face region image to obtain a gray characteristic image, comprising the following steps:
converting the adjusted face region image from an RGB color space to a gray scale space, wherein the adopted gray scale feature conversion function is as follows:
(the gray feature conversion function is reproduced as an equation image in the original publication)
in the formula, h(x, y) denotes the gray value of pixel point (x, y); r(x, y), g(x, y) and j(x, y) denote the R, G and B component values of pixel point (x, y), respectively; the maximum and the minimum of the R, G, B component values of pixel point (x, y) also appear in the function; ω1 and ω2 denote set weight factors, where ω1 + ω2 = 1, ω1 ∈ [0.65, 0.8], ω2 ∈ [0.2, 0.35]; hj(x, y) denotes the standard gray value of pixel point (x, y), where hj(x, y) = 0.299·r(x, y) + 0.587·g(x, y) + 0.114·j(x, y);
Obtaining a gray characteristic image according to the gray value of each pixel point;
4) Inputting the gray characteristic image into a CNN model to obtain a first CNN characteristic output by the CNN model;
calculating a characteristic distance between the first CNN characteristic and a first target CNN characteristic corresponding to the target image, and determining a first characteristic matching degree according to the obtained characteristic distance;
5) Carrying out asymmetric coding processing on the gray characteristic image obtained in the step 3) to obtain a coding characteristic image, wherein the coding characteristic image comprises the following steps:
performing two-dimensional Gabor wavelet transformation on the gray characteristic image based on the obtained gray characteristic image to obtain a Gabor wavelet image;
acquiring a first coding matrix of each pixel point in the Gabor wavelet image, wherein the first coding matrix acquiring function is as follows:
(the first coding matrix function is reproduced as an equation image in the original publication)
wherein G1(x, y) denotes the first coding matrix of pixel point (x, y) and h(x, y) denotes the gray value of pixel point (x, y); the matrix G1(x, y) comprises 1 center point and 8 neighborhood points, the neighborhood points being numbered 1, 2, …, 8 in order;
based on the obtained first coding matrix, neighborhood feature coding is further performed to obtain a second coding feature k2 = {k2(1), k2(2), …, k2(8)} (the coding formula is reproduced as an equation image in the original publication);
in the formula, k2(n) denotes the nth element of the second coding feature; c(n) denotes the correlation feature value of the nth neighborhood point in the first coding matrix G1(x, y), where c(n) = |G1(n) − G1(0)|, G1(n) denotes the gray value of the nth neighborhood point in the first coding matrix G1(x, y), and G1(0) denotes the gray value of the center point of G1(x, y); the mean of the correlation feature values of the neighborhood points in the first coding matrix G1(x, y) (its symbol is given as an image in the original publication) and their standard deviation σc are also used; the remaining symbol denotes a step function whose three piecewise cases (conditions and corresponding values) are reproduced as equation images in the original publication;
based on the second coding feature k2, the coding feature value of pixel point (x, y) is calculated (the calculation formula is reproduced as an equation image in the original publication);
in the formula, f(x, y) denotes the coding feature value of pixel point (x, y);
forming a coding feature image based on the coding feature values of the pixel points;
inputting the coding characteristic image into a CNN model to obtain a second CNN characteristic output by the CNN model;
calculating a characteristic distance between the second CNN characteristic and a second target CNN characteristic corresponding to the target image, and determining a second characteristic matching degree according to the obtained characteristic distance;
6) According to a preset calculation rule, comprehensively fusing the first feature matching degree and the second feature matching degree to obtain the similarity between the face feature data of the acquired face region image and the face feature data of the target image; the method comprises the following steps:
the similarity between the face feature data of the acquired face region image and the face feature data of the target image is obtained by adopting the following formula through comprehensive fusion:
Y = ω3 × Y1 + ω4 × Y2
in the formula, Y denotes the similarity between the face feature data of the acquired face region image and the face feature data of the target image, Y1 denotes the first feature matching degree, Y2 denotes the second feature matching degree, and ω3 and ω4 denote set weight values, where ω3 + ω4 = 1;
7) And if the similarity between the face feature data of the acquired face region image and the face feature data of the target image is greater than a set standard threshold, taking the faceID of the face corresponding to the target image as the faceID of the face corresponding to the acquired face region image.
2. The AI-based deep learning big data face recognition system of claim 1, wherein the image extraction module comprises a receiving unit and a video frame extraction unit,
the receiving unit is used for receiving video image data; the video image data carries shooting time information and shooting place information;
the video frame extraction unit is used for extracting video frame images according to the video data.
3. The AI-based deep learning big data face recognition system of claim 1, wherein the association processing module comprises an association unit,
the association unit is used for establishing a HumanID identification for a person corresponding to the faceID when the faceID appears in the video data for the first time, and associating time information TimeInfo and shooting location information SiteInfo corresponding to a video frame image in which the person appears with the corresponding HumanID identification to generate the person-associated spatiotemporal data of the person appearing in the video.
4. The AI-based deep learning big data face recognition system of claim 1, wherein the data management module comprises a storage unit;
the storage unit is used for performing associated storage on the HumanID identification appearing in the video data and corresponding time information TimeInfo and shooting place information SiteInfo to construct an associated database.
5. The AI-based deep learning big data face recognition system of claim 1, further comprising a query module,
the query module is used for querying from the association database according to one or more items of the shooting time information, the shooting place information and the person identification information, and acquiring other associated information associated with the query information.
CN202111595487.9A 2021-12-23 2021-12-23 AI-based deep learning big data face recognition system Active CN114241575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111595487.9A CN114241575B (en) 2021-12-23 2021-12-23 AI-based deep learning big data face recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111595487.9A CN114241575B (en) 2021-12-23 2021-12-23 AI-based deep learning big data face recognition system

Publications (2)

Publication Number Publication Date
CN114241575A CN114241575A (en) 2022-03-25
CN114241575B true CN114241575B (en) 2022-10-25

Family

ID=80762392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111595487.9A Active CN114241575B (en) 2021-12-23 2021-12-23 AI-based deep learning big data face recognition system

Country Status (1)

Country Link
CN (1) CN114241575B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109756760B (en) * 2019-01-03 2022-10-04 中国联合网络通信集团有限公司 Video tag generation method and device and server
CN110532432A (en) * 2019-08-21 2019-12-03 深圳供电局有限公司 Character track retrieval method and system and computer readable storage medium
CN112766225A (en) * 2021-02-01 2021-05-07 黄岩 Automatic gait warehouse building device and method based on mobile personnel
CN113269081A (en) * 2021-05-20 2021-08-17 上海仪电数字技术股份有限公司 System and method for automatic personnel identification and video track query

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016085579A (en) * 2014-10-24 2016-05-19 大学共同利用機関法人情報・システム研究機構 Image processing apparatus and method for interactive device, and the interactive device
CN105069400A (en) * 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stack type sparse self-coding
CN106203263A (en) * 2016-06-27 2016-12-07 辽宁工程技术大学 A kind of shape of face sorting technique based on local feature

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Facial expression recognition using radial encoding of local Gabor features and classifier synthesis; Wenfei Gu; Pattern Recognition; 2012-01-31; full text *
Face recognition based on improved Log-Gabor and ULBP algorithms; Li Dandan et al.; Journal of Beijing Information Science & Technology University (Natural Science Edition); 2014-10-15 (No. 05); full text *

Also Published As

Publication number Publication date
CN114241575A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
Guo et al. Hierarchical method for foreground detection using codebook model
US7869657B2 (en) System and method for comparing images using an edit distance
CN109344731B (en) Lightweight face recognition method based on neural network
CN110717411A (en) Pedestrian re-identification method based on deep layer feature fusion
US6661907B2 (en) Face detection in digital images
GB2579583A (en) Anti-spoofing
CN105975938A (en) Smart community manager service system with dynamic face identification function
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
CN111241932A (en) Automobile exhibition room passenger flow detection and analysis system, method and storage medium
CN111209818A (en) Video individual identification method, system, equipment and readable storage medium
KR20170015639A (en) Personal Identification System And Method By Face Recognition In Digital Image
CN110728216A (en) Unsupervised pedestrian re-identification method based on pedestrian attribute adaptive learning
CN110674677A (en) Multi-mode multi-layer fusion deep neural network for anti-spoofing of human face
CN108334870A (en) The remote monitoring system of AR device data server states
CN110991301A (en) Face recognition method
CN114582011A (en) Pedestrian tracking method based on federal learning and edge calculation
CN112001280B (en) Real-time and online optimized face recognition system and method
CN114241575B (en) AI-based deep learning big data face recognition system
CN112565674A (en) Exhibition hall central control system capable of realizing remote video monitoring and control
Manjula et al. Face detection identification and tracking by PRDIT algorithm using image database for crime investigation
WO2023093241A1 (en) Pedestrian re-identification method and apparatus, and storage medium
CN114187644A (en) Mask face living body detection method based on support vector machine
US11200407B2 (en) Smart badge, and method, system and computer program product for badge detection and compliance
CN112532917A (en) Integrated intelligent monitoring platform based on streaming media
Gupta et al. Design and Analysis of an Expert System for the Detection and Recognition of Criminal Faces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant