CN114241575A - AI-based deep learning big data face recognition system - Google Patents


Publication number
CN114241575A
Authority
CN
China
Prior art keywords
face
data
image
video
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111595487.9A
Other languages
Chinese (zh)
Other versions
CN114241575B (en)
Inventor
郑飞
胡伟
蔡强
杨凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Intelligent Computing Information Technology Co ltd
Original Assignee
Guangzhou Intelligent Computing Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Intelligent Computing Information Technology Co ltd filed Critical Guangzhou Intelligent Computing Information Technology Co ltd
Priority to CN202111595487.9A
Publication of CN114241575A
Application granted
Publication of CN114241575B
Legal status: Active

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a big data face recognition system based on AI deep learning, comprising: an image extraction module for extracting video frame images from acquired video data; a face recognition module for performing face recognition on the acquired video frame images, extracting features from each recognized face to obtain face feature data, and generating faceID face identification information from the face feature data; an association processing module for associating the obtained faceID face identification information, the time information TimeInfo, and the shooting place information SiteInfo of the video frame image with the corresponding HumanID person identifier to generate person-associated spatio-temporal data; and a data management module for storing and managing the person-associated spatio-temporal data. The invention associates recognized face information with spatio-temporal data, which helps to improve the adaptability and intelligence of video image data processing and suits the requirements of different application scenarios.

Description

AI-based deep learning big data face recognition system
Technical Field
The invention relates to the technical field of face recognition, in particular to a big data face recognition system based on AI deep learning.
Background
At present, intelligent processing of video images is applied ever more widely, and video image processing technology receives growing attention from researchers. Current video image processing technology can automatically identify information in an image from the obtained video image data, providing a basis for further intelligent control and intelligent processing.
In the prior art, the technical means for face recognition in video images generally offer limited functionality and low adaptability, and cannot meet the requirements of modern artificial-intelligence video image processing.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a big data face recognition system based on deep learning of AI.
The purpose of the invention is realized by adopting the following technical scheme:
the invention provides a big data face recognition system based on AI deep learning, comprising: the system comprises an image extraction module, a face recognition module, an association processing module and a data management module;
the image extraction module is used for extracting video frame images according to the acquired video data;
the face recognition module is used for carrying out face recognition processing according to the acquired video frame image, carrying out feature extraction processing on the recognized face to obtain face feature data, and generating faceID face identification information according to the face feature data;
the association processing module is used for associating the obtained faceID face identification information and the spatio-temporal information of the video frame image with a corresponding HumanID person identifier to construct person-associated spatio-temporal data; the spatio-temporal information of the video frame image comprises time information TimeInfo and shooting place information SiteInfo;
the data management module is used for storing and managing the person-associated spatio-temporal data.
In one embodiment, the image extraction module includes a receiving unit and a video frame extraction unit,
the receiving unit is used for receiving video image data; the video image data carries shooting time information and shooting place information;
the video frame extraction unit is used for extracting video frame images according to the video data.
In one embodiment, the face recognition module comprises a face detection unit and a feature extraction unit;
The face detection unit is used for detecting a face part in the video frame image and extracting a face region image;
and the feature extraction unit is used for extracting the face features of the extracted face region image to obtain face feature data and performing faceID face identification according to the face feature data.
In one embodiment, the feature extraction unit includes:
matching and comparing the acquired face feature data with the face feature data corresponding to the faces contained in the previous video frame image; if a face whose face feature data similarity is greater than the set threshold is matched, identifying the face in the current frame image with the faceID of the corresponding face in the previous video frame image; if no face whose face feature data similarity is greater than the set threshold can be matched, marking the corresponding face with a new faceID face identification according to the currently acquired face feature data.
In one embodiment, the feature extraction unit includes:
matching the acquired face feature data with face feature data prestored in a face database, and using the faceID of the matched face to identify the face in the current frame image.
In one embodiment, the association processing module includes an association unit,
the association unit is used for establishing a HumanID identification for a person corresponding to the faceID when the faceID appears in the video data for the first time, and associating time information TimeInfo and shooting location information SiteInfo corresponding to a video frame image where the person appears with the corresponding HumanID identification to generate the spatio-temporal data associated with the person appearing in the video.
In one embodiment, the data management module includes a storage unit;
the storage unit is used for performing associated storage on the HumanID identifier appearing in the video data and the corresponding time information TimeInfo and shooting place information SiteInfo to construct an associated database.
In one embodiment, the system further comprises a query module,
the query module is used for querying from the association database according to one or more items of the shooting time information, the shooting place information and the person identification information, and acquiring other associated information associated with the query information.
The invention has the beneficial effects that:
the invention provides a technical scheme for processing video images, wherein video frame images are extracted from obtained video data, and face identification processing is carried out based on video frame image memorability, when face images appear in a video, the faces are automatically identified, and meanwhile, time information and place information corresponding to the face identifications and the video images are further associated to corresponding person identifications, so that time and place association of persons appearing in the video is facilitated, the obtained associated data is uniformly stored and managed, and one or two of the person data, the time data and the space data can be designated to inquire other data. Lays a foundation for further extended processing (such as behavior recognition, track recognition and the like) according to the associated data. The method is beneficial to improving the adaptability and the intelligent level of video image data processing and is suitable for the requirements of different application scenes.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a frame structure diagram of a big data face recognition system based on AI deep learning according to the present invention.
Detailed Description
The invention is further described in connection with the following application scenarios.
Referring to fig. 1, an embodiment of the invention provides a big data face recognition system based on AI deep learning, which includes: the system comprises an image extraction module, a face recognition module, an association processing module and a data management module;
the image extraction module is used for extracting video frame images according to the acquired video data;
the face recognition module is used for carrying out face recognition processing according to the acquired video frame image, carrying out feature extraction processing on the recognized face to obtain face feature data, and generating faceID face identification information according to the face feature data;
the association processing module is used for associating the obtained faceID face identification information and the spatio-temporal information of the video frame image with a corresponding HumanID person identifier to construct person-associated spatio-temporal data; the spatio-temporal information of the video frame image comprises time information TimeInfo and shooting place information SiteInfo;
the data management module is used for storing and managing the person-associated spatio-temporal data.
The above embodiment of the present invention provides a technical scheme for processing video images, in which video frame images are extracted from the obtained video data and face recognition processing is performed on the video frame images: when a face appears in the video, it is automatically identified, and the face identifier is further associated, together with the time information and place information of the video image, with a corresponding person identifier. This makes it straightforward to associate a time and place with each person appearing in the video. The obtained associated data are stored and managed uniformly, and any one or two of the person data, time data, and place data can be specified to query the remaining data. This lays a foundation for further extended processing based on the associated data (such as behavior recognition and trajectory recognition), helps to improve the adaptability and intelligence of video image data processing, and suits the requirements of different application scenarios.
In one embodiment, the image extraction module includes a receiving unit and a video frame extraction unit,
the receiving unit is used for receiving video image data; the video image data carries shooting time information and shooting place information;
the video frame extraction unit is used for extracting video frame images according to the video data.
In one scenario, for video stream data, the received stream is split into frames to obtain each video frame image.
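As a minimal Python sketch of this frame extraction step (assuming OpenCV is available; the function name and the frame-sampling parameter are illustrative, not part of the patent):

import cv2

def extract_frames(video_source, every_n=1):
    """Yield (frame_index, image) pairs from a video file or stream URL."""
    cap = cv2.VideoCapture(video_source)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break                    # end of stream
        if idx % every_n == 0:
            yield idx, frame         # BGR ndarray handed to downstream modules
        idx += 1
    cap.release()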
Wherein the video frame extracting unit also performs preprocessing, such as region division processing, on the extracted video frame image.
In one scenario, the obtained video frame image is divided using a preset region division rule into a controlled area and an open area. For the video picture of the controlled area, the entry and exit of authorized/unauthorized persons are detected and distinguished, and a behavior trajectory record of the target person is provided. For the open area, people-flow trajectory data are recorded, the trajectory of a specific person can be played back, and interest tags for the person are recorded based on analysis of the person's trajectory.
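A toy Python sketch of such a region division rule follows; the region names, rectangles, and frame size are assumptions made for illustration only:

REGION_RULES = {
    "gate":  {"rect": (0, 0, 640, 360),    "type": "controlled"},
    "lobby": {"rect": (0, 360, 1280, 720), "type": "open"},
}

def region_of(x, y):
    """Return (region name, region type) for pixel (x, y), or (None, None)."""
    for name, rule in REGION_RULES.items():
        x0, y0, x1, y1 = rule["rect"]
        if x0 <= x < x1 and y0 <= y < y1:
            return name, rule["type"]
    return None, None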
In one embodiment, the face recognition module comprises a face detection unit and a feature extraction unit;
the face detection unit is used for detecting a face part in the video frame image and extracting a face region image;
and the feature extraction unit is used for extracting the face features of the extracted face region image to obtain face feature data and performing faceID face identification according to the face feature data.
In one scenario, a face detection unit detects a face portion in a video frame image based on an AI engine to obtain a face region image. The feature extraction unit extracts feature vectors of the face region based on the neural network model, and performs faceID face identification on the face appearing in the video image according to the obtained feature parameters.
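A hedged Python sketch of these two units follows, using OpenCV's Haar cascade as a stand-in for the AI engine; embed_face is a placeholder and is not the patent's CNN feature extractor:

import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_regions(frame):
    """Return the cropped face region images found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in boxes]

def embed_face(face_img):
    """Placeholder embedding; a real system would run the trained CNN here."""
    resized = cv2.resize(face_img, (64, 64))
    return resized.astype(np.float32).ravel() / 255.0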
In one embodiment, the feature extraction unit includes:
matching and comparing the acquired face feature data with the face feature data corresponding to the faces contained in the previous video frame image; if a face whose face feature data similarity is greater than the set threshold is matched, identifying the face in the current frame image with the faceID of the corresponding face in the previous video frame image; if no face whose face feature data similarity is greater than the set threshold can be matched, marking the corresponding face with a new faceID face identification according to the currently acquired face feature data.
In one embodiment, the feature extraction unit includes:
and matching the acquired face characteristic data with face characteristic data prestored in a face database, and acquiring the faceID of the matched face to identify the face in the current frame image.
Depending on the application scenario, the feature extraction unit can assign faceID face identifications to detected faces in different ways. For example, when the obtained video image is open video footage, i.e., the area captured in the video is an open area and the identities of persons appearing in the video cannot be determined in advance, the feature extraction unit extracts face feature data from the detected face region and compares it with the feature data of faces appearing in the previous video frame (or several previous frames). When the feature data similarity is above a set threshold, the faces are judged to be the same person and identified with the same faceID; if no face with similar feature data is found in the previous frame(s), the face is judged to have newly entered the video, and a new faceID is assigned to distinguish it from the other persons appearing in the video. As another example, when the video image data covers a preset environment, or a database has been built that records the faceID identification information of specific persons or persons who have already appeared, the feature extraction unit matches the obtained face feature data of the face region against the face feature data prestored in the database; when face feature data with similarity above the set standard is found, the corresponding faceID is retrieved from the database to identify the face, and if no match meets the standard, a new faceID is used instead.
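The following Python sketch illustrates both assignment modes with a single routine; the cosine similarity measure and the threshold value are assumptions, since the patent does not fix them:

import uuid
import numpy as np

SIM_THRESHOLD = 0.8   # assumed value; the patent leaves the threshold open

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def assign_face_id(feature, known_faces):
    """known_faces maps faceID -> feature vector (faces from the previous
    frame(s) in the open-area mode, or the face database otherwise)."""
    best_id, best_sim = None, 0.0
    for face_id, ref in known_faces.items():
        sim = cosine_sim(feature, ref)
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    if best_sim > SIM_THRESHOLD:
        return best_id                        # same person as a known face
    new_id = "face-" + uuid.uuid4().hex[:8]   # first appearance: new faceID
    known_faces[new_id] = feature
    return new_id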
In one embodiment, in the feature extraction unit,
matching and comparing the acquired face feature data with target face feature data, wherein the target face feature data is the face feature data of a target image, and the target image is either the previous video frame image or an image prestored in a face database; that is, the target face feature data is either the face feature data corresponding to a face contained in the previous video frame image or face feature data prestored in the face database. The matching specifically comprises the following steps:
1) acquiring a face region image;
2) adjusting the size and definition of the acquired face region image to obtain an adjusted face region image;
3) carrying out gray processing on the adjusted face region image to obtain a gray characteristic image, comprising the following steps:
converting the adjusted face region image from the RGB color space to a gray scale space, wherein the adopted gray scale feature conversion function is as follows:
h(x,y) = ω1·hj(x,y) + ω2·[max(γ=r,g,j)(γ(x,y)) + min(γ=r,g,j)(γ(x,y))]/2
in the formula, h(x,y) represents the gray value of pixel point (x,y), and r(x,y), g(x,y), j(x,y) represent the R, G, B component values of pixel point (x,y), respectively; max(γ=r,g,j)(γ(x,y)) represents the maximum of the R, G, B component values of pixel point (x,y), and min(γ=r,g,j)(γ(x,y)) represents the minimum; ω1 and ω2 represent set weight factors, where ω1+ω2=1, ω1∈[0.65,0.8], ω2∈[0.2,0.35]; hj(x,y) represents the standard gray value of pixel point (x,y), where hj(x,y) = 0.299·r(x,y) + 0.587·g(x,y) + 0.114·j(x,y);
Obtaining a gray characteristic image according to the gray value of each pixel point;
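As an illustration of step 3), a vectorized Python sketch of the gray scale conversion might look as follows, using the reconstructed formula above, with weight values chosen from the stated intervals:

import numpy as np

def gray_feature_image(rgb, w1=0.7, w2=0.3):
    """rgb: H x W x 3 float array with R, G, B channels in that order."""
    assert abs(w1 + w2 - 1.0) < 1e-9             # weights must sum to 1
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    h_std = 0.299 * r + 0.587 * g + 0.114 * b    # standard gray value hj(x, y)
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)  # max/min of R, G, B per pixel
    return w1 * h_std + w2 * (mx + mn) / 2.0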
4) inputting the gray characteristic image into a CNN model to obtain a first CNN characteristic output by the CNN model;
and calculating the characteristic distance between the first CNN characteristic and a first target CNN characteristic corresponding to the target image, and confirming the first characteristic matching degree according to the obtained characteristic distance.
The CNN model has a 9-layer structure comprising, connected in sequence, an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a third convolution layer, a third pooling layer, a first fully connected layer, a second fully connected layer, and an output layer;
wherein the input of the input layer is the gray feature image; the second fully connected layer outputs a feature vector reflecting the features of the input image, which is delivered through the output layer; the distance between this feature vector and the feature vector obtained from the target image is calculated, yielding the first feature matching degree.
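By way of illustration only, a PyTorch sketch of a CNN with this layer sequence is given below; the channel widths, kernel sizes, feature dimension, and the 64x64 input size are assumptions, since the patent fixes only the layer order:

import torch.nn as nn

class FaceFeatureCNN(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),   # first convolution layer
            nn.MaxPool2d(2),                             # first pooling layer
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # second convolution layer
            nn.MaxPool2d(2),                             # second pooling layer
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), # third convolution layer
            nn.MaxPool2d(2),                             # third pooling layer
        )
        self.fc1 = nn.Linear(128 * 8 * 8, 256)           # first fully connected layer
        self.fc2 = nn.Linear(256, feat_dim)              # second FC: feature vector
        self.out = nn.Linear(feat_dim, feat_dim)         # output layer

    def forward(self, x):                                # x: N x 1 x 64 x 64
        z = self.features(x).flatten(1)
        feat = self.fc2(self.fc1(z))                     # vector used for matching
        return self.out(feat)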
5) Performing asymmetric encoding processing based on the grayscale feature image obtained in step 3) to obtain an encoded feature image, comprising:
performing two-dimensional Gabor wavelet transformation on the gray characteristic image based on the obtained gray characteristic image to obtain a Gabor wavelet image;
acquiring a first coding matrix for each pixel point in the Gabor wavelet image, wherein the first coding matrix G1(x,y) of pixel point (x,y) is the 3x3 matrix of gray values over the point and its 8-neighborhood:
G1(x,y) = [ h(x-1,y-1) h(x,y-1) h(x+1,y-1) ; h(x-1,y) h(x,y) h(x+1,y) ; h(x-1,y+1) h(x,y+1) h(x+1,y+1) ]
wherein h(x,y) represents the gray value of pixel point (x,y); the matrix G1(x,y) comprises 1 central point (numbered 0) and 8 neighborhood points, numbered 1, 2, ..., 8 in sequence;
based on the obtained first coding matrix, neighborhood feature encoding is further carried out to obtain a second coding feature k2 = {k2(1), k2(2), ..., k2(8)}:
k2(n) = ε(c(n)), n = 1, 2, ..., 8
in the formula, k2(n) denotes the n-th element of the second coding feature; c(n) denotes the correlation feature of the n-th neighborhood point of the first coding matrix G1(x,y), where c(n) = |G1(n) - G1(0)|, G1(n) denotes the gray value of the n-th neighborhood point in G1(x,y), and G1(0) denotes the gray value of the central point of G1(x,y); c̄ denotes the mean of the correlation feature values of the neighborhood points in G1(x,y), and σc denotes their standard deviation; ε denotes a step function over the thresholds c̄ - σc and c̄ + σc: when c(n) > c̄ + σc, ε(c(n)) = 1; when c̄ - σc ≤ c(n) ≤ c̄ + σc, ε(c(n)) = 0; when c(n) < c̄ - σc, ε(c(n)) = 1;
based on the second coding feature k2, the coding feature value of pixel point (x,y) is calculated:
f(x,y) = Σ(n=1..8) k2(n)·2^(n-1)
in the formula, f(x,y) represents the coding feature value of pixel point (x,y), which lies in the range 0-255;
an encoding feature image is then formed from the coding feature values of all the pixel points.
Inputting the coding characteristic image into a CNN model to obtain a second CNN characteristic output by the CNN model;
and calculating the characteristic distance between the second CNN characteristic and a second target CNN characteristic corresponding to the target image, and confirming the second characteristic matching degree according to the obtained characteristic distance.
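To make the encoding step concrete, the following Python sketch computes the coding feature value f(x, y) for a single interior pixel, following the step function as reconstructed above; the neighborhood ordering and border handling are assumptions of the sketch:

import numpy as np

OFFSETS = [(-1, -1), (0, -1), (1, -1), (1, 0),
           (1, 1), (0, 1), (-1, 1), (-1, 0)]   # neighborhood points 1..8 (assumed order)

def encode_pixel(img, x, y):
    """img: 2-D array (Gabor wavelet image); (x, y) must be an interior pixel."""
    center = float(img[y, x])
    c = np.array([abs(float(img[y + dy, x + dx]) - center) for dx, dy in OFFSETS])
    c_mean, c_std = c.mean(), c.std()
    k2 = (np.abs(c - c_mean) > c_std).astype(int)   # step function over mean +/- std
    return int((k2 * (2 ** np.arange(8))).sum())    # f(x, y) in 0..255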
6) According to a preset calculation rule, comprehensively fusing the first feature matching degree and the second feature matching degree to obtain the similarity between the face feature data of the acquired face region image and the face feature data of the target image; the fusion adopts the following formula:
Y = ω3×Y1 + ω4×Y2
in the formula, Y represents the similarity between the face feature data of the acquired face region image and the face feature data of the target image, Y1 denotes the first feature matching degree, Y2 denotes the second feature matching degree, and ω3 and ω4 represent set weight values, where ω3+ω4=1.
7) If the similarity between the face feature data of the acquired face region image and the face feature data of the target image is greater than the set standard threshold, the faceID of the face corresponding to the target image is taken as the faceID of the face corresponding to the acquired face region image.
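A compact Python sketch of steps 6) and 7); the weight values and the standard threshold are illustrative, as the patent does not fix them:

def fused_similarity(y1, y2, w3=0.5, w4=0.5):
    """Combine the gray-feature and coding-feature matching degrees."""
    assert abs(w3 + w4 - 1.0) < 1e-9    # weights must sum to 1
    return w3 * y1 + w4 * y2

def matched_face_id(y1, y2, target_face_id, std_threshold=0.8):
    """Return the target faceID on a match, or None to signal a new faceID."""
    return target_face_id if fused_similarity(y1, y2) > std_threshold else None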
The above embodiment provides a technical scheme for extracting features from a face region image and performing similarity matching on the extracted face feature data to obtain the faceID. In the feature extraction process, the gray scale features and the asymmetric coding features respectively reflect the local gray scale features and local texture features of the face region image, and both are matched against the corresponding features of the target image. The two matching degrees are then fused into the final similarity between the face in the face region image and the face in the target image, completing the faceID identification and contributing to the accuracy of identifying the persons appearing in the video image.
For the graying of the face image, the method provides a gray scale feature conversion function designed specifically for face region images. It takes into account the distances between the R, G, B components of the video frame image and adds a term that reflects face features, which improves the gray scale conversion of naturally shaded parts of the face, avoids losing the user's face features during graying, and thereby improves the face recognition effect.
A technical scheme is also provided for further asymmetric encoding of the obtained gray scale feature image. First, a two-dimensional Gabor wavelet transform is applied to the gray scale feature image to obtain a Gabor wavelet image, which effectively reflects the texture features of the face region image. A first coding matrix is then built for each pixel point from its gray scale feature and those of the other pixel points in its neighborhood, and the texture change features reflected at each pixel point are extracted from the first coding matrix as a coding feature value. These texture features are converted into feature values in the range 0-255, restoring an encoded feature image that reflects the changes of the image texture features. The corresponding CNN features are subsequently extracted from the obtained encoded feature image by the CNN model for further feature extraction and matching. In this way, extraction of the user's face feature data is completed along the dimension of facial feature variation, improving the accuracy and adaptability of the feature vectors in reflecting face features, and hence the accuracy and adaptability of face recognition.
In one embodiment, the association processing module includes an association unit,
the association unit is used for establishing a HumanID identification for a person corresponding to the faceID when the faceID appears in the video data for the first time, and associating time information TimeInfo and shooting location information SiteInfo corresponding to a video frame image where the person appears with the corresponding HumanID identification to generate the spatio-temporal data associated with the person appearing in the video.
In one scenario, the time information TimeInfo can be taken from the time information corresponding to the video frame image, accurate to the millisecond; the shooting place information SiteInfo can be obtained from the location information of the shooting device that captured the video frame image, for example via a preset correspondence between shooting devices and shooting places.
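An illustrative Python sketch of deriving TimeInfo and SiteInfo; the camera-to-site mapping and all names here are assumptions:

CAMERA_SITES = {"cam-01": "Building A, Gate 2", "cam-02": "Lobby"}  # assumed mapping

def time_info(stream_start_ms, frame_idx, fps):
    """Millisecond timestamp (TimeInfo) of a given frame."""
    return stream_start_ms + int(frame_idx * 1000.0 / fps)

def site_info(camera_id):
    """SiteInfo lookup from the preset device-to-place correspondence."""
    return CAMERA_SITES.get(camera_id, "unknown")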
In one embodiment, the data management module includes a storage unit;
the storage unit is used for performing associated storage on the HumanID identifier appearing in the video data and the corresponding time information TimeInfo and shooting place information SiteInfo to construct an associated database.
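A minimal sketch of such an association database using SQLite; the schema is an assumption consistent with the three fields named above:

import sqlite3

conn = sqlite3.connect("association.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS human_spacetime (
    human_id  TEXT NOT NULL,     -- HumanID identifier
    time_info INTEGER NOT NULL,  -- TimeInfo, millisecond timestamp
    site_info TEXT NOT NULL      -- SiteInfo, shooting place
)""")

def store_association(human_id, time_info_ms, site):
    conn.execute("INSERT INTO human_spacetime VALUES (?, ?, ?)",
                 (human_id, time_info_ms, site))
    conn.commit()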
Among person data (the HumanID identifier), time data (the time information TimeInfo), and place data (the place information SiteInfo), any one or two may be specified and the remaining data retrieved.
In one embodiment, the system further comprises a query module,
the query module is used for querying from the association database according to one or more items of the shooting time information, the shooting place information and the person identification information, and acquiring other associated information associated with the query information.
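Continuing the SQLite sketch above, a query routine in which any one or more of person, time range, and place may be specified:

def query_associations(conn, human_id=None, t_from=None, t_to=None, site=None):
    sql = "SELECT human_id, time_info, site_info FROM human_spacetime WHERE 1=1"
    args = []
    if human_id is not None:
        sql += " AND human_id = ?"
        args.append(human_id)
    if t_from is not None:
        sql += " AND time_info >= ?"
        args.append(t_from)
    if t_to is not None:
        sql += " AND time_info <= ?"
        args.append(t_to)
    if site is not None:
        sql += " AND site_info = ?"
        args.append(site)
    return conn.execute(sql, args).fetchall()

# e.g. all sightings of a person at a given place:
# query_associations(conn, human_id="human-0001", site="Building A, Gate 2")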
Meanwhile, the face recognition system of the invention can be further extended with other functions. For example: 1) beyond recognizing the face as a person, further attributes can be recognized, such as sex, age, and whether the person wears a hat, glasses, or particular clothing; 2) it can be further extended to recognize personnel actions, such as walking, grasping objects, and industrial operations; 3) it can be further extended to recognize other objects, such as vehicles, buildings, and industrial objects. The extended recognition content is associated with the HumanID identifier, the time information TimeInfo, the place information SiteInfo, and the like to construct spatio-temporal data association databases suitable for different scenarios.
It should be noted that, functional units/modules in the embodiments of the present invention may be integrated into one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules are integrated into one unit/module. The integrated units/modules may be implemented in the form of hardware, or may be implemented in the form of software functional units/modules.
From the above description of embodiments, it is clear for a person skilled in the art that the embodiments described herein can be implemented in hardware, software, firmware, middleware, code or any appropriate combination thereof. For a hardware implementation, a processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the procedures of an embodiment may be performed by a computer program instructing associated hardware. In practice, the program may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. Computer-readable media can include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting its protection scope. Although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (8)

1. An AI-based deep learning big data face recognition system, comprising: the system comprises an image extraction module, a face recognition module, an association processing module and a data management module;
the image extraction module is used for extracting video frame images according to the acquired video data;
the face recognition module is used for carrying out face recognition processing according to the acquired video frame image, carrying out feature extraction processing on the recognized face to obtain face feature data, and generating faceID face identification information according to the face feature data;
the association processing module is used for associating the obtained faceID face identification information and the spatio-temporal information of the video frame image with a corresponding HumanID person identifier to construct person-associated spatio-temporal data; the spatio-temporal information of the video frame image comprises time information TimeInfo and shooting place information SiteInfo;
the data management module is used for storing and managing the person-associated spatio-temporal data.
2. The AI-based deep learning big data face recognition system of claim 1, wherein the image extraction module comprises a receiving unit and a video frame extraction unit,
the receiving unit is used for receiving video image data; the video image data carries shooting time information and shooting place information;
the video frame extraction unit is used for extracting video frame images according to the video data.
3. The AI-based deep learning big data face recognition system as claimed in claim 1, wherein the face recognition module comprises a face detection unit and a feature extraction unit;
The face detection unit is used for detecting a face part in the video frame image and extracting a face region image;
and the feature extraction unit is used for extracting the face features of the extracted face region image to obtain face feature data and performing faceID face identification according to the face feature data.
4. The AI-based deep learning big data face recognition system of claim 1, wherein the feature extraction unit comprises:
matching and comparing the acquired face feature data with the face feature data corresponding to the faces contained in the previous video frame image; if a face whose face feature data similarity is greater than the set threshold is matched, identifying the face in the current frame image with the faceID of the corresponding face in the previous video frame image; and if no face whose face feature data similarity is greater than the set threshold can be matched, marking the corresponding face with a new faceID face identification according to the currently acquired face feature data.
5. The AI-based deep learning big data face recognition system of claim 1, wherein the feature extraction unit comprises:
and matching the acquired face characteristic data with face characteristic data prestored in a face database, and acquiring the faceID of the matched face to identify the face in the current frame image.
6. The AI-based deep learning big data face recognition system of claim 1, wherein the association processing module comprises an association unit,
the association unit is used for establishing a HumanID identification for a person corresponding to the faceID when the faceID appears in the video data for the first time, and associating time information TimeInfo and shooting location information SiteInfo corresponding to a video frame image where the person appears with the corresponding HumanID identification to generate the spatio-temporal data associated with the person appearing in the video.
7. The AI-based deep learning big data face recognition system of claim 1, wherein the data management module comprises a storage unit;
the storage unit is used for performing associated storage on the HumanID identifier appearing in the video data and the corresponding time information TimeInfo and shooting place information SiteInfo to construct an associated database.
8. The AI-based deep learning big data face recognition system of claim 1, further comprising a query module,
the query module is used for querying from the association database according to one or more items of the shooting time information, the shooting place information and the person identification information, and acquiring other associated information associated with the query information.
CN202111595487.9A 2021-12-23 2021-12-23 AI-based deep learning big data face recognition system Active CN114241575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111595487.9A CN114241575B (en) 2021-12-23 2021-12-23 AI-based deep learning big data face recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111595487.9A CN114241575B (en) 2021-12-23 2021-12-23 AI-based deep learning big data face recognition system

Publications (2)

Publication Number Publication Date
CN114241575A true CN114241575A (en) 2022-03-25
CN114241575B CN114241575B (en) 2022-10-25

Family

ID=80762392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111595487.9A Active CN114241575B (en) 2021-12-23 2021-12-23 AI-based deep learning big data face recognition system

Country Status (1)

Country Link
CN (1) CN114241575B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016085579A (en) * 2014-10-24 2016-05-19 大学共同利用機関法人情報・システム研究機構 Image processing apparatus and method for interactive device, and the interactive device
CN105069400A (en) * 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stack type sparse self-coding
CN106203263A (en) * 2016-06-27 2016-12-07 辽宁工程技术大学 A kind of shape of face sorting technique based on local feature
CN109756760A (en) * 2019-01-03 2019-05-14 中国联合网络通信集团有限公司 Generation method, device and the server of video tab
CN110532432A (en) * 2019-08-21 2019-12-03 深圳供电局有限公司 A kind of personage's trajectory retrieval method and its system, computer readable storage medium
CN112766225A (en) * 2021-02-01 2021-05-07 黄岩 Automatic gait warehouse building device and method based on mobile personnel
CN113269081A (en) * 2021-05-20 2021-08-17 上海仪电数字技术股份有限公司 System and method for automatic personnel identification and video track query

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WENFEI GU: "Facial expression recognition using radial encoding of local Gabor features and classifier synthesis", Pattern Recognition *
LI Dandan et al.: "Face recognition based on Log-Gabor and an improved ULBP algorithm", Journal of Beijing Information Science & Technology University (Natural Science Edition) *

Also Published As

Publication number Publication date
CN114241575B (en) 2022-10-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant