CN113542527B - Face image transmission method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113542527B
CN113542527B (application CN202011349205.2A)
Authority
CN
China
Prior art keywords
face image
face
processing
model
matched
Prior art date
Legal status
Active
Application number
CN202011349205.2A
Other languages
Chinese (zh)
Other versions
CN113542527A (en)
Inventor
杨伟明
郭润增
王少鸣
唐惠忠
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011349205.2A priority Critical patent/CN113542527B/en
Publication of CN113542527A publication Critical patent/CN113542527A/en
Application granted granted Critical
Publication of CN113542527B publication Critical patent/CN113542527B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/44Secrecy systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a face image transmission method and device, an electronic device, and a storage medium. The face image transmission method comprises the following steps: acquiring a face image of a target user collected by a terminal; performing first encryption processing on the acquired face image to form a first encrypted face image; performing second encryption processing on the first encrypted face image through a face image processing model to form a second encrypted face image; and transmitting the second encrypted face image to a corresponding face detection model so that the face detection model determines a corresponding face classification result. This secures face image transmission, reduces the complexity of data transmission in the face detection process, strengthens the generalization and data processing capabilities of the face image processing model, and makes the method suitable for different face image detection processing environments.

Description

Face image transmission method and device, electronic equipment and storage medium
Technical Field
The present invention relates to information processing technologies, and in particular to a face image transmission method and device, an electronic device, and a storage medium.
Background
The main purpose of face liveness detection is to judge whether the current face belongs to a real live person, so as to resist spoofing attacks with fake faces; face image transmission is an important step before face recognition. In the related art, face data are transmitted using classical asymmetric encryption in a fixed format, so the data are easy to steal, which compromises the security of face liveness detection. Leaked face data can also affect the accuracy of face liveness detection in scenarios such as remote identity verification in banking systems, face-based payment in instant messaging clients, remote authentication of taxi drivers, and community access control systems.
Disclosure of Invention
In view of this, the embodiments of the invention provide a face image transmission method and device, an electronic device, and a storage medium. The technical solution of the embodiments of the invention is realized as follows:
the embodiment of the invention provides a face image transmission method, which comprises the following steps:
acquiring a face image of a target user acquired by a terminal;
performing first encryption processing on the acquired face image to form a first encrypted face image;
performing second encryption processing on the first encrypted face image through a face image processing model to form a second encrypted face image;
and transmitting the second encrypted face image to a corresponding face detection model so as to determine a corresponding face classification result through the face detection model.
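The four steps above can be sketched end to end as follows. This is a minimal stdlib-only illustration, not the patented implementation: the pre-shared key, the HMAC-SHA256 signature, the XOR keystream, and all function names are assumptions standing in for the signature algorithm and the model-based second encryption described later.

```python
import hashlib
import hmac
import secrets
import time

SHARED_KEY = b"device-provisioned-secret"  # assumed pre-shared key

def acquire_face_image() -> bytes:
    # Stand-in for the terminal's camera capture.
    return b"\x00" * 64  # dummy pixel buffer

def first_encryption(image: bytes) -> dict:
    # First encryption: sign the image together with per-capture metadata.
    meta = {
        "device_sn": "SN-0001",          # acquisition-device serial number
        "timestamp": int(time.time()),   # timestamp information
        "counter": 1,                    # counter information
        "nonce": secrets.token_hex(8),   # random character string
    }
    payload = image + repr(sorted(meta.items())).encode()
    meta["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"image": image, "meta": meta}

def second_encryption(first: dict) -> dict:
    # Second encryption: in the patent this is done by the face image
    # processing model; a hash-derived keystream stands in for it here.
    mask = hashlib.sha256(first["meta"]["signature"].encode()).digest()
    cipher = bytes(b ^ mask[i % len(mask)] for i, b in enumerate(first["image"]))
    return {"cipher": cipher, "meta": first["meta"]}

packet = second_encryption(first_encryption(acquire_face_image()))
```

The packet carrying the doubly encrypted image plus signed metadata is what would then be transmitted to the face detection model's side.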
The embodiment of the invention also provides a face image transmission device, which comprises:
the information transmission module is used for acquiring the face image of the target user acquired by the terminal;
the information processing module is used for carrying out first encryption processing on the acquired face image to form a first encrypted face image;
the information processing module is used for performing second encryption processing on the first encrypted face image through a face image processing model to form a second encrypted face image;
the information transmission module is used for transmitting the second encrypted face image to the corresponding face detection model so as to determine a corresponding face classification result through the face detection model.
In the above-described scheme,
the information processing module is used for responding to the acquired face image and determining the environment characteristics matched with the face image and a corresponding signature algorithm;
the information processing module is used for triggering a first encryption process and determining acquisition equipment serial numbers, time stamp information, counter information and random character string information matched with the face images through the first encryption process;
The information processing module is used for performing first encryption processing on the collected face image through the first encryption process based on the signature algorithm, the collection equipment serial number, the timestamp information, the counter information and the random character string information to form a first encrypted face image.
In the above-described scheme,
the information processing module is used for carrying out gray level processing on the first encrypted face image and determining a corresponding face image gray level value;
the information processing module is used for processing the gray value of the face image through a depth residual error network in the face image processing model and determining a first feature vector matched with the gray value of the face image;
the information processing module is used for processing the first feature vector through a convolutional neural network based on an attention mechanism in the face image processing model to form a second feature vector;
the information processing module is used for acquiring a logarithmic probability vector matched with the second characteristic vector;
and the information processing module is used for carrying out second encryption processing on the first encrypted face image based on the logarithmic probability vector to form a second encrypted face image.
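The grayscale-to-logit-vector pipeline above can be sketched as follows. The deep residual network and the attention-based convolutional neural network are replaced here by a hash-based stand-in, so the sketch only illustrates the data flow (gray values in, logit-vector-keyed ciphertext out); every function name and the 8-dimensional vector size are assumptions.

```python
import hashlib

def to_gray(rgb_pixels):
    # Gray-level processing: standard luma weighting per pixel.
    return [int(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in rgb_pixels]

def model_logit_vector(gray):
    # Stand-in for: residual network -> first feature vector,
    # attention CNN -> second feature vector -> log-probability vector.
    digest = hashlib.sha256(bytes(gray)).digest()
    return [b / 255.0 for b in digest[:8]]  # assumed 8-dim logit vector

def second_encrypt(gray, logits):
    # Derive a keystream from the logit vector and XOR it over the image.
    key = hashlib.sha256(repr(logits).encode()).digest()
    return bytes(g ^ key[i % len(key)] for i, g in enumerate(gray))

pixels = [(200, 180, 160)] * 16  # dummy first-encrypted image
gray = to_gray(pixels)
cipher = second_encrypt(gray, model_logit_vector(gray))
```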
In the above-described scheme,
the information processing module is used for determining the number of blocks of the face image matched with the face image processing model and the predicted value of each block of the face image;
the information processing module is used for determining a logarithmic probability vector matched with the second characteristic vector based on the number of blocks of the face image and the predicted value of each block of the face image.
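A minimal way to assemble a log-probability vector from a block count and per-block predicted values might look like this; the mean-then-log rule is an assumption for illustration, not the patent's actual formula.

```python
import math

def block_log_prob_vector(gray, num_blocks):
    # Split the gray image into num_blocks equal blocks and take the
    # log of each block's predicted value (here: its normalized mean).
    size = len(gray) // num_blocks
    vector = []
    for k in range(num_blocks):
        block = gray[k * size:(k + 1) * size]
        predicted = sum(block) / len(block) / 255.0  # per-block predicted value
        vector.append(math.log(predicted + 1e-9))    # log-probability entry
    return vector

v = block_log_prob_vector([128] * 64, num_blocks=4)
```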
The device provided by the embodiment of the invention further comprises:
the training module is used for acquiring a first training sample set matched with the use environment of the face image processing model, wherein the first training sample comprises a positive example user face image and a negative example user face image;
the training module is used for denoising the first training sample set to form a corresponding second training sample set;
the training module is used for processing the second training sample set through a face image processing model so as to determine initial parameters of a depth residual error network in the face image processing model and initial parameters of a convolutional neural network based on an attention mechanism;
the training module is used for responding to the initial parameters of the depth residual error network and the initial parameters of the convolutional neural network based on the attention mechanism, processing the second training sample set through the face image processing model and determining the update parameters corresponding to different neural networks of the face image processing model;
and the training module is used for iteratively updating, through the second training sample set, the initial parameters of the depth residual network of the face image processing model and the initial parameters of the attention-based convolutional neural network, respectively, according to the update parameters corresponding to the different neural networks of the face image processing model, so as to realize image type identification of the face image through the face image processing model.
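The training procedure above (initial parameters, update parameters, iterative updating) can be illustrated with a toy one-parameter least-squares model standing in for the two networks; the sample data, learning rate, and update rule are assumptions made purely for illustration.

```python
samples = [(x, 2.0 * x) for x in range(1, 6)]  # (input, label) training pairs

def initial_parameters():
    # Stand-in for determining the networks' initial parameters.
    return 0.0

def update_parameter(w, samples, lr=0.01):
    # Update parameter: gradient of mean squared error for y = w * x.
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    return w - lr * grad

w = initial_parameters()
for _ in range(200):  # iterative updating over the sample set
    w = update_parameter(w, samples)
```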
In the above-described scheme,
the training module is used for carrying out image augmentation processing on the face image;
the training module is used for determining the corresponding face position through a face detection algorithm based on the result of the image augmentation, and cropping a face image that includes a background image;
the training module is used for processing the face image comprising the background image through the depth processing network of the face image processing model to form a corresponding depth image as any training sample in a training sample set matched with the use environment of the face image processing model.
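A hedged sketch of this sample-preparation path (augment, detect the face, crop with a background margin, derive a depth map) follows. The detector and depth network are trivial stand-ins; in practice these might be, for example, an OpenCV face detector and a depth-estimation network.

```python
def augment(image):
    # Image augmentation: here, a horizontal flip of each row.
    return [row[::-1] for row in image]

def detect_face(image):
    # Stand-in face detection: pretend the face is the central region.
    h, w = len(image), len(image[0])
    return (h // 4, w // 4, h // 2, w // 2)  # (top, left, height, width)

def crop_with_background(image, box, margin=1):
    # Crop the face plus a margin of background pixels.
    top, left, h, w = box
    t, l = max(0, top - margin), max(0, left - margin)
    return [row[l:left + w + margin] for row in image[t:top + h + margin]]

def depth_from_intensity(patch):
    # Stand-in "depth processing network": brighter pixels = nearer.
    return [[v / 255.0 for v in row] for row in patch]

img = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
sample = depth_from_intensity(crop_with_background(augment(img), detect_face(img)))
```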
In the above-described scheme,
the training module is used for determining a corresponding first depth image based on the face image of the positive example user and determining a corresponding second depth image based on the face image of the negative example user;
The training module is used for determining a third depth image matched with the corresponding recombined feature vector through the depth processing network of the face image processing model when determining the initial parameters of the depth residual network in the face image processing model and the initial parameters of the convolutional neural network based on the attention mechanism;
the training module is used for comparing the first depth image or the second depth image with the third depth image so as to monitor the training accuracy of face images of different positive examples of users and face images of negative examples of users.
In the above-described scheme,
the training module is used for determining a dynamic noise threshold value matched with the use environment of the face image processing model;
the training module is used for carrying out noise reduction processing on the face image through the convolutional neural network based on the attention mechanism according to the dynamic noise threshold value so as to form a face image matched with the dynamic noise threshold value;
the training module is used for determining a corresponding second training sample set based on the face image matched with the dynamic noise threshold;
the training module is used for determining a static noise threshold value matched with the use environment of the face image processing model;
The training module is used for carrying out noise reduction processing on the face image through the convolutional neural network based on the attention mechanism according to the static noise threshold value so as to form a face image matched with the static noise threshold value;
the training module is used for determining a corresponding second training sample set based on the face image matched with the static noise threshold.
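The dynamic-versus-static threshold idea can be sketched as a simple median-clamp filter. The median rule and the specific threshold values are assumptions for illustration; in the patent this noise reduction is performed inside the attention-based convolutional neural network.

```python
import statistics

STATIC_NOISE_THRESHOLD = 16  # assumed fixed threshold

def dynamic_noise_threshold(pixels):
    # Dynamic threshold: adapt to the spread of the current sample.
    return statistics.pstdev(pixels)

def denoise(pixels, threshold):
    # Clamp pixels deviating from the median by more than the threshold.
    med = statistics.median(pixels)
    return [med if abs(p - med) > threshold else p for p in pixels]

noisy = [120, 122, 119, 250, 121, 0, 118, 123]
cleaned_dynamic = denoise(noisy, dynamic_noise_threshold(noisy))
cleaned_static = denoise(noisy, STATIC_NOISE_THRESHOLD)
```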
In the above-described scheme,
the training module is used for determining fast Fourier transform functions matched with different standardization processes through a standardization layer network of the face image processing model;
the training module is used for carrying out standardized processing on the spectrogram energy distribution characteristics of the face image based on a first fast Fourier transform function;
the training module is used for carrying out standardization processing on the high-low frequency distribution characteristics of the face image based on a second fast Fourier transform function;
the training module is used for respectively carrying out standardization processing on the flatness characteristic and the spectrum centroid characteristic of the face image based on a third fast Fourier transform function.
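The normalization passes can be illustrated with common signal-processing definitions of the named features, computed over a naive DFT (stdlib only; a real implementation would use an FFT library such as numpy.fft). The exact feature formulas are assumptions, not taken verbatim from the patent.

```python
import cmath
import math

def dft_magnitudes(signal):
    # Naive DFT; keep the first half of the magnitude spectrum.
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def energy_distribution(mags):
    # Spectrogram energy distribution, normalized to sum to 1.
    total = sum(m * m for m in mags) or 1.0
    return [m * m / total for m in mags]

def high_low_ratio(mags):
    # High- versus low-frequency distribution of the spectrum.
    half = len(mags) // 2
    return sum(mags[half:]) / (sum(mags[:half]) + 1e-9)

def spectral_flatness(mags):
    # Flatness: geometric mean over arithmetic mean (1.0 = noise-like).
    vals = [m + 1e-12 for m in mags]
    geo = math.exp(sum(math.log(v) for v in vals) / len(vals))
    return geo / (sum(vals) / len(vals))

def spectral_centroid(mags):
    # Spectrum centroid: magnitude-weighted mean frequency bin.
    total = sum(mags) or 1.0
    return sum(k * m for k, m in enumerate(mags)) / total

row = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]  # one image row
mags = dft_magnitudes(row)
```

For a pure 3-cycle sinusoid the energy concentrates in bin 3, so the centroid sits near 3 and the flatness is near 0.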
In the above-described scheme,
the information processing module is used for processing the second encrypted face image through a face detection model to form the face classification probability of the target user;
The information processing module is used for determining a corresponding face classification result based on a probability threshold value of matching the face classification probability of the target user with the face detection model;
and the information processing module is used for outputting the face classification result of the target user.
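This final decision step reduces to a threshold comparison; the threshold value and label names here are illustrative assumptions.

```python
PROBABILITY_THRESHOLD = 0.5  # assumed threshold matched to the detection model

def classify(face_probability, threshold=PROBABILITY_THRESHOLD):
    # Compare the model's face classification probability to the threshold.
    return "live_face" if face_probability >= threshold else "attack"

result = classify(0.92)
```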
The embodiment of the invention also provides electronic equipment, which comprises:
a memory for storing executable instructions;
and a processor, configured to implement the aforementioned face image transmission method when executing the executable instructions stored in the memory.
The embodiment of the invention also provides a computer-readable storage medium storing executable instructions; when executed by a processor, the executable instructions implement the aforementioned face image transmission method.
The embodiment of the invention has the following beneficial effects:
the face image of the target user acquired by the terminal is acquired; performing first encryption processing on the acquired face image to form a first encrypted face image; performing second encryption processing on the first encrypted face image through a face image processing model to form a second encrypted face image; and transmitting the second encrypted face image to a corresponding face detection model to determine a corresponding face classification result through the face detection model, so that the security of face image transmission is ensured, the complexity of data transmission in the face detection process is reduced, the generalization capability and the data processing capability of the face image processing model are stronger, and the method is suitable for different face image detection processing environments.
Drawings
FIG. 1 is a schematic view of an application environment of a face image transmission method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a composition structure of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of an alternative face image transmission method according to the present application;
fig. 4 is a schematic diagram of acquiring a face image according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of an alternative face detection model training method according to the present application;
fig. 6 is a schematic diagram of front end display of a face detection method according to the present application;
fig. 7 is a schematic diagram of a use process of the face detection method provided by the present application.
Detailed Description
The present application will be further described in detail below with reference to the accompanying drawings, in order to make its objects, technical solutions, and advantages more apparent. The described embodiments should not be construed as limiting the present application; all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
Before describing the embodiments of the present invention in further detail, the terms involved in the embodiments are explained; the following explanations apply to these terms wherever they appear.
1) "In response to": indicates the condition or state on which an executed operation depends; when the condition or state relied upon is satisfied, the one or more operations performed may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which multiple such operations are performed.
2) Client: a carrier in a terminal that implements a specific function; for example, a mobile client (APP) is the carrier of a specific function in a mobile terminal, such as online live streaming (video push) or online video playback.
3) Convolutional Neural Networks (CNN): a class of feedforward neural networks that contain convolutional computation and have a deep structure, and one of the representative algorithms of deep learning. Convolutional neural networks have representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure.
4) Model training: multi-class learning on an image data set. The model can be built with deep learning frameworks such as TensorFlow or PyTorch, combining multiple neural network layers such as CNN layers to form a face detection model. The input of the model is a three-channel or original-channel matrix produced by reading an image with tools such as OpenCV; the output is a multi-class probability, and the final category is output through algorithms such as softmax. During training, the model is driven toward the correct answers by an objective function such as cross entropy.
5) Neural Network (NN): an Artificial Neural Network (ANN), a mathematical or computational model that mimics the structure and function of biological neural networks (the central nervous system of animals, in particular the brain) and is used in machine learning and cognitive science to estimate or approximate functions.
6) Softmax: the normalized exponential function, a generalization of the logistic function. It "compresses" a K-dimensional vector of arbitrary real numbers into another K-dimensional real vector such that each element lies in the range (0, 1) and all elements sum to 1.
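That definition translates directly into a small stdlib-only function (the max-subtraction is a standard numerical-stability trick, not part of the definition itself):

```python
import math

def softmax(logits):
    # Subtract the max so exp() cannot overflow; ratios are unchanged.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
```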
The face image transmission method provided by the embodiment of the present invention is described below. Fig. 1 is a schematic view of its application environment. Referring to fig. 1, terminals (including terminal 10-1 and terminal 10-2) run clients of application software with a face liveness detection function, and a trained face detection model and a face image processing model are deployed in the server, so as to implement face data transmission and to verify the face on the terminal side based on those data. For example, in payment scenarios in the financial field, operations requiring identity verification, such as transferring money, paying, or modifying account information through a smartphone, may be implemented by detecting whether the user's face is live. In this process, the terminal device forms a second encrypted face image through the face image processing model and uploads the face image or video to be detected to the server; alternatively, the server directly retrieves the face image or video to be detected from a database. The trained face detection model then verifies the face image or video to obtain a detection result. The server may feed the detection result back to the terminal device, or keep it locally for other service applications or processes. The terminal is connected to the server 200 through the network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless links for data transmission.
As an example, the terminal 10-1 may deploy a face image processing model to implement the face image transmission method provided by the present invention: acquiring a face image of the target user collected by the terminal; performing first encryption processing on the acquired face image to form a first encrypted face image; performing second encryption processing on the first encrypted face image through the face image processing model to form a second encrypted face image; and transmitting the second encrypted face image to a corresponding face detection model so that the face detection model determines a corresponding face classification result.
Of course, before different face images are processed through the face detection model to generate corresponding classification results, the face image processing model needs to be trained. Training specifically includes:
acquiring a first training sample set matched with the use environment of the face image processing model, wherein the first training sample comprises a positive example user face image and a negative example user face image; denoising the first training sample set to form a corresponding second training sample set; processing the second training sample set through a face image processing model to determine initial parameters of a depth residual error network and initial parameters of a convolutional neural network based on an attention mechanism in the face image processing model; responding to the initial parameters of the depth residual error network and the initial parameters of the convolutional neural network based on the attention mechanism, processing the second training sample set through the face image processing model, and determining the update parameters corresponding to different neural networks of the face image processing model; according to the updating parameters corresponding to different neural networks of the face image processing model, the initial parameters of the depth residual error network of the face image processing model and the initial parameters of the convolutional neural network based on the attention mechanism are respectively and iteratively updated through the second training sample set, so that the face image is subjected to image type identification through the face image processing model.
Of course, the face image transmission device provided by the present application can be applied to payment environments involving virtual or physical financial resources (including, but not limited to, face detection in various types of physical financial resource payments) or to information-interaction environments in social software. In financial activities involving various physical financial resources, or payments made with virtual resources, financial information from different data sources is usually processed, and finally a detection result matched with the corresponding target user is presented on the User Interface (UI) to determine whether the detected image is a live face image of the user or attack information. The face classification result obtained in the current display interface (for example, a judgment that attack information has been detected) can also be invoked by other application programs.
The face image transmission method provided by the embodiment of the present application is realized based on Artificial Intelligence (AI): the theory, method, technology, and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, with both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
In the embodiment of the present application, the artificial intelligence software technologies mainly involved include the above speech processing technology, machine learning, and other directions. For example, Automatic Speech Recognition (ASR) in Speech Technology may be involved, including speech signal preprocessing, speech signal frequency-domain analysis, speech signal feature extraction, speech signal feature matching/recognition, speech training, and the like.
For example, machine Learning (ML) may be involved, which is a multi-domain interdisciplinary, involving multiple disciplines such as probability theory, statistics, approximation theory, convex analysis, and algorithm complexity theory. It is specially studied how a computer simulates or implements learning behavior of a human to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve own performance. Machine learning is the core of artificial intelligence, a fundamental approach to letting computers have intelligence, which is applied throughout various areas of artificial intelligence. Machine Learning typically includes Deep Learning (Deep Learning) techniques, including artificial neural networks (artifici al neural network), such as convolutional neural networks (Convolutional Neural Network, CNN), recurrent neural networks (Recurrent Neural Network, RNN), deep neural networks (Deep neural network, DNN), and the like.
The structure of the face image transmission device according to the embodiment of the present invention is described in detail below. The device may be implemented in various forms, such as a dedicated terminal with the device's processing functions, or a server provided with these processing functions, for example, the server 200 in fig. 1. Fig. 2 is a schematic diagram of the composition of an electronic device according to an embodiment of the present invention. It should be understood that fig. 2 shows only an exemplary structure, not the entire structure, of the face image transmission device; part or all of the structure shown in fig. 2 may be implemented as required.
The face image transmission device provided by the embodiment of the invention comprises: at least one processor 201, a memory 202, a user interface 203, and at least one network interface 204. The various components in the face image transmission device are coupled together by a bus system 205. It is understood that the bus system 205 is used to enable connection and communication between these components. In addition to the data bus, the bus system 205 includes a power bus, a control bus, and a status signal bus; however, for clarity of illustration, the various buses are all labeled as bus system 205 in fig. 2.
The user interface 203 may include, among other things, a display, keyboard, mouse, trackball, click wheel, keys, buttons, touch pad, or touch screen, etc.
It will be appreciated that the memory 202 may be either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The memory 202 in embodiments of the present invention is capable of storing data to support operation of the terminal (e.g., 10-1). Examples of such data include: any computer program, such as an operating system and application programs, for operation on the terminal (e.g., 10-1). The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application may comprise various applications.
In some embodiments, the face image transmission device provided by the embodiment of the present invention may be implemented by combining software and hardware. As an example, the face image transmission device provided by the embodiment of the present invention may be a processor in the form of a hardware decoding processor, which is programmed to execute the face image transmission method provided by the embodiment of the present invention. For example, a processor in the form of a hardware decoding processor may employ one or more application-specific integrated circuits (ASICs, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLDs, Programmable Logic Device), complex programmable logic devices (CPLDs, Complex Programmable Logic Device), field-programmable gate arrays (FPGAs, Field-Programmable Gate Array), or other electronic components.
As an example of implementing the facial image transmission device provided by the embodiment of the present invention by combining software and hardware, the facial image transmission device may be directly embodied as a combination of software modules executed by the processor 201. The software modules may be located in a storage medium, the storage medium is located in the memory 202, and the processor 201 reads the executable instructions included in the software modules in the memory 202 and, in combination with necessary hardware (including, for example, the processor 201 and other components connected to the bus 205), completes the face image transmission method provided by the embodiment of the present invention.
By way of example, the processor 201 may be an integrated circuit chip having signal processing capabilities, such as a general purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
As an example of implementing the facial image transmission apparatus provided by the embodiment of the present invention by hardware, the apparatus provided by the embodiment of the present invention may be implemented directly by the processor 201 in the form of a hardware decoding processor, for example, by one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components, to implement the facial image transmission method provided by the embodiment of the present invention.
The memory 202 in the embodiment of the present invention is used to store various types of data to support the operation of the face image transmission apparatus. Examples of such data include: any executable instructions for operation on the face image transmission device; a program implementing the face image transmission method of the embodiment of the present invention may be included in the executable instructions.
In other embodiments, the face image transmission device provided in the embodiments of the present invention may be implemented in software. Fig. 2 shows the face image transmission device stored in the memory 202, which may be software in the form of programs, plug-ins, and a series of modules. As an example of the program stored in the memory 202, the face image transmission device may be included, and the face image transmission device includes the following software modules: an information transmission module 2081 and an information processing module 2082. When the software modules in the face image transmission device are read by the processor 201 into RAM and executed, the face image transmission method provided by the embodiment of the invention is implemented. The functions of each software module in the face image transmission device include:
the information transmission module 2081 is configured to acquire a face image of a target user acquired by the terminal.
The information processing module 2082 is configured to perform a first encryption process on the collected face image to form a first encrypted face image.
The information processing module 2082 is configured to perform a second encryption process on the first encrypted face image through a face image processing model to form a second encrypted face image.
The information transmission module 2081 is configured to transmit the second encrypted face image to a corresponding face detection model, so as to determine a corresponding face classification result through the face detection model.
Before introducing the face image transmission method provided by the present application, the face detection process in a financial-payment face detection scenario in the related art is first described. A face detection model can determine a corresponding face classification result from acquired face image data, judging whether what is currently detected is a real living face image or attack information, where the attack information may be a planar face image attack or a three-dimensional face image attack. In the related art, face data are all transmitted using classical asymmetric-algorithm encryption in a fixed encryption format, so the face data are easy to steal; an attacker can construct a three-dimensional face image from the stolen face image and deceive the face detection process in order to steal the user's funds.
In order to solve the above-mentioned drawbacks, referring to fig. 3, fig. 3 is an optional flowchart of the face image transmission method provided by the present application, it may be understood that the steps shown in fig. 3 may be executed by various electronic devices running the face image transmission apparatus, for example, may be a dedicated terminal with a living face image detection function, a server with a face image transmission function, or a server cluster, so as to implement training and deployment for face detection environments adapted in different financial scenarios. The following is a description of the steps shown in fig. 3.
Step 301: the face image transmission device acquires face images of target users acquired by the terminal.
Referring to fig. 4, fig. 4 is a schematic diagram of acquiring a face image in an embodiment of the present application. After the user image collected by the terminal is obtained, the area of the user's face is first framed by face detection technology, the frame is expanded by a factor of 1.8 about its center to obtain more background content, and the face image including the background content is cropped. For example, the following means may be employed: a face detection algorithm frames the face position of the target object; a facial-feature localization algorithm marks feature points of the eyes, mouth, nose, face contour, and the like; and the face image including background content is cropped according to the detected face position. Then, the depth map corresponding to the cropped real face is computed through a depth estimation network, where in the embodiment of the present application it can be assumed that a real-person picture has a meaningful depth map while the depth map corresponding to an attack picture is a black base map. Classification deep learning networks based on image information include but are not limited to LeNet, AlexNet, VGG, Inception-series networks, ResNet, and DenseNet. Traditional features may also be extracted on the image or in the ROI, including but not limited to gray-scale features such as mean and variance, features based on distribution histograms, features based on correlation matrices such as GLCM and GLRLM, and signal features after Fourier transformation of the image.
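As a minimal sketch of the cropping step above (assuming an (x, y, w, h) box format, which the text does not specify), the detected face box can be expanded 1.8× about its center and clipped to the image bounds before cropping:

```python
def expand_face_box(box, img_w, img_h, scale=1.8):
    """Expand a detected face box (x, y, w, h) by `scale` about its
    center to keep background content, clipping to the image bounds."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    new_w, new_h = w * scale, h * scale
    x0 = max(0, int(cx - new_w / 2.0))
    y0 = max(0, int(cy - new_h / 2.0))
    x1 = min(img_w, int(cx + new_w / 2.0))
    y1 = min(img_h, int(cy + new_h / 2.0))
    return x0, y0, x1, y1

# A 100x100 face box centered in a 640x480 frame grows to 180x180.
x0, y0, x1, y1 = expand_face_box((270, 190, 100, 100), 640, 480)
```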
Step 302: the face image transmission device performs first encryption processing on the acquired face image to form a first encrypted face image.
In some embodiments of the present invention, the first encrypting process is performed on the collected face image to form a first encrypted face image, which may be implemented by the following ways:
determining environmental features matched with the face image and a corresponding signature algorithm in response to the acquired face image; triggering a first encryption process, and determining the acquisition device serial number, timestamp information, counter information, and random character string information matched with the face image through the first encryption process; and performing, through the first encryption process, first encryption processing on the acquired face image based on the signature algorithm, the acquisition device serial number, the timestamp information, the counter information, and the random character string information to form a first encrypted face image. Specifically, the first encryption process may be implemented through the concatenation rule "{magic_num}{device_info}{sign_version}{timestamp}{counter}{random}", where magic_num identifies the signature original-string format and can be adjusted and adapted to different face detection environments; device_info is the acquisition device serial number, which can represent the source of the face image; sign_version is the signature version; timestamp is the timestamp; counter is the counter; and random is the random character string. Primary encryption of the transmitted face image can thus be realized through the first encryption processing.
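The concatenation rule above can be sketched as follows; the magic_num value, the signature version string, and the field encodings are illustrative assumptions, not values given by the text:

```python
import secrets
import time

def build_signature_string(device_info, counter,
                           magic_num="FACEPAY", sign_version="v1"):
    """Concatenate the fields per the rule
    "{magic_num}{device_info}{sign_version}{timestamp}{counter}{random}"."""
    timestamp = str(int(time.time()))
    random_str = secrets.token_hex(8)  # 16-character random string
    return f"{magic_num}{device_info}{sign_version}{timestamp}{counter}{random_str}"

sig = build_signature_string("POS-0001", 42)
```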
Step 303: and the face image transmission device performs second encryption processing on the first encrypted face image through a face image processing model to form a second encrypted face image.
In some embodiments of the present invention, performing a second encryption process on the first encrypted face image through a face image processing model to form a second encrypted face image, including:
carrying out gray-level processing on the first encrypted face image and determining the corresponding face image gray value; processing the face image gray value through a depth residual network in a face image processing model and determining a first feature vector matched with the face image gray value; processing the first feature vector through an attention-based convolutional neural network in the face image processing model to form a second feature vector; obtaining a logarithmic probability vector matched with the second feature vector; and carrying out second encryption processing on the first encrypted face image based on the logarithmic probability vector to form a second encrypted face image. This yields the corresponding encryption security factor "{magic_num}{device_info}{sign_version}{timestamp}{counter}{random}{cnn_security_factor}", where the cnn security factor is the logits output by the softmax layer. Specifically, the number of blocks of the face image matched with the face image processing model and the predicted value of each block of the face image can be determined, and the logarithmic probability vector matched with the second feature vector is determined based on the number of blocks and the predicted value of each block. The value of the logarithmic probability vector logits can be characterized by N×M, where N is the number of blocks into which the face image input to the model is divided and M is the predicted value of each block, M taking the value 0 or 1. In this way, the transmitted face image can be encrypted through the face image processing model via "{magic_num}{device_info}{sign_version}{timestamp}{counter}{cnn_security_factor}{random}{payload}".
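A hypothetical sketch of assembling the cnn security factor from per-block softmax predictions follows; binding the parts with a SHA-256 digest is an illustrative choice, not an encryption scheme specified by the text:

```python
import hashlib

def cnn_security_factor(block_predictions):
    """Flatten per-block predictions (N blocks, each predicted 0 or 1,
    here thresholded from softmax scores) into the logits string."""
    return "".join(str(int(p >= 0.5)) for p in block_predictions)

def second_encrypt(header_fields, first_encrypted_payload, block_predictions):
    """Assemble "{...header...}{cnn_security_factor}{payload}" and bind
    the parts with a digest (the digest step is an assumption)."""
    factor = cnn_security_factor(block_predictions)
    blob = header_fields + factor + first_encrypted_payload
    return blob, hashlib.sha256(blob.encode()).hexdigest()

blob, tag = second_encrypt("HDR", "payload", [0.9, 0.1, 0.8, 0.7])
```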
Step 304: and the face image transmission device transmits the second encrypted face image to the corresponding face detection model.
Therefore, the corresponding face classification result can be determined through the face detection model.
Step 305: and the server processes the second encrypted face image through a face detection model to form the face classification probability of the target user.
Step 306: the server determines a corresponding face classification result based on a probability threshold value of matching the face classification probability of the target user with the face detection model, and outputs the face classification result of the target user.
With continued reference to fig. 5, fig. 5 is an optional flowchart of the face detection model training method provided by the present application. It may be understood that the steps shown in fig. 5 may be performed by various electronic devices running the face detection model training apparatus, for example, a dedicated terminal with a living face image detection function, a server with a face detection model training function, or a server cluster, so as to implement training and deployment of face detection models adapted to different financial-payment face detection scenarios. The following is a description of the steps shown in fig. 5.
Step 501: and acquiring a first training sample set matched with the use environment of the face image processing model.
The first training sample comprises a positive example user face image and a negative example user face image.
Wherein, the face image can be subjected to image augmentation processing; based on the processing result of image augmentation, determining the corresponding face position through a face detection algorithm, and intercepting a face image comprising a background image; and processing the face image comprising the background image through a depth processing network of the face image processing model to form a corresponding depth map as any training sample in a training sample set matched with the use environment of the face image processing model.
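The black-base-map assumption for attack pictures can be expressed as a small labeling helper (a sketch, assuming depth maps are NumPy arrays; not part of the original text):

```python
import numpy as np

def depth_label(estimated_depth_map, is_real):
    """Real-person pictures keep their estimated depth map; attack
    pictures are supervised with an all-black (all-zero) base map."""
    return estimated_depth_map if is_real else np.zeros_like(estimated_depth_map)

real_label = depth_label(np.ones((4, 4)), is_real=True)
attack_label = depth_label(np.ones((4, 4)), is_real=False)
```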
Step 502: denoising the first training sample set to form a corresponding second training sample set.
In some embodiments of the present invention, a corresponding first depth image may be determined based on a positive example user face image, and a corresponding second depth image may be determined based on a negative example user face image; when initial parameters of a depth residual network in the face image processing model and initial parameters of a convolutional neural network based on an attention mechanism are determined, determining a third depth image matched with a corresponding recombined feature vector through the depth processing network of the face image processing model; and comparing the first depth image or the second depth image with the third depth image to monitor the accuracy of training face images of different positive examples and negative examples of users.
In some embodiments of the present invention, the different financial service scenarios corresponding to the face detection model matched with the target user may also be determined, and the face image features matched with the target user are denoised based on those different financial service scenarios to form a face image feature set matched with the corresponding financial service scenario, where each sample includes corresponding domain-related features and domain-independent features. Because the usage environments of face detection models differ, the dynamic noise threshold matched with each usage environment also differs; for example, in a financial usage environment of payment and transfer through an instant messaging client process, the matched dynamic noise threshold needs to be smaller than the dynamic noise threshold in a financial usage environment where the user opens a financial account through the instant messaging client process.
Further, when the face detection model is solidified in a corresponding hardware mechanism, such as a financial terminal (POS machine or teller machine), and the use environment is used for carrying out financial lending on face detection of a target user in the financial lending use environment, because the noise is single, the training speed of the face detection model can be effectively improved through the fixed noise threshold corresponding to the fixed face detection model, the waiting time of the user is reduced, and the large-scale deployment of the face detection model is facilitated.
Meanwhile, in practical applications, the scheme of the present application can be realized through an APP with a face detection function, or through an instant messaging client applet; other financial programs can also use the face detection model results. Therefore, when a user changes terminals, faces of different objects can still be quickly detected through the face detection model deployed in the financial cloud server network.
Specifically, the target user identifier, the model parameters of the face detection model and the financial scene identifier can be sent to a network of the cloud server, and the target user identifier, the model parameters of the face detection model and the financial scene identifier are acquired through the cloud server network when the corresponding face detection process is triggered so as to be used by the corresponding face detection process.
The embodiments of the present application can be implemented in combination with cloud technology. Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize computation, storage, processing, and sharing of data; it can also be understood as a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like applied based on the cloud computing business model. Background services of technical network systems, such as video websites, picture websites, and other portal websites, require a large amount of computing and storage resources, so cloud technology needs to be supported by cloud computing.
It should be noted that cloud computing is a computing mode, which distributes computing tasks on a resource pool formed by a large number of computers, so that various application systems can acquire computing power, storage space and information service as required. The network that provides the resources is referred to as the "cloud". Resources in the cloud are infinitely expandable in the sense of users, and can be acquired at any time, used as needed, expanded at any time and paid for use as needed. As a basic capability provider of cloud computing, a cloud computing resource pool platform, referred to as a cloud platform for short, is generally called infrastructure as a service (IaaS, infrastructure as a Service), and multiple types of virtual resources are deployed in the resource pool for external clients to select for use. The cloud computing resource pool mainly comprises: computing devices (which may be virtualized machines, including operating systems), storage devices, and network devices.
Step 503: and processing the second training sample set through a face image processing model to determine initial parameters of a depth residual error network and initial parameters of a convolutional neural network based on an attention mechanism in the face image processing model.
Step 504: and responding to the initial parameters of the depth residual error network and the initial parameters of the convolutional neural network based on the attention mechanism, processing the second training sample set through the face image processing model, and determining the update parameters corresponding to different neural networks of the face image processing model.
Step 505: and respectively carrying out iterative updating on initial parameters of a depth residual error network of the face image processing model and initial parameters of a convolutional neural network based on an attention mechanism through the second training sample set according to updating parameters corresponding to different neural networks of the face image processing model.
Therefore, the image type recognition of the face image can be realized through the face image processing model.
In some embodiments of the invention, the method further comprises:
determining fast Fourier transform functions matched with different standardization processes through a standardization-layer network of the face image processing model; standardizing the spectrogram energy distribution features of the face image based on a first fast Fourier transform function; standardizing the high-low frequency distribution features of the face image based on a second fast Fourier transform function; and standardizing the flatness features and the spectrum centroid features of the face image respectively based on a third fast Fourier transform function. Specifically, the spectrogram energy distribution features of the image can be extracted and standardized with an fft size of 4096, the high-low frequency distribution features can be extracted and standardized with an fft size of 2048, the flatness features can be extracted and standardized with an fft size of 1024, and the spectrum centroid features can be extracted and standardized with an fft size of 1024.
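The per-fft-size feature extraction above can be sketched as follows; the exact feature definitions are not given in the text, so common stand-ins (spectral energy, high/low-frequency ratio, spectral flatness, spectral centroid) are assumed:

```python
import numpy as np

def spectral_features(row):
    """Extract one feature per fft size, as in the text: energy (4096),
    high/low-frequency ratio (2048), flatness and centroid (1024)."""
    feats = {}
    spec = np.abs(np.fft.rfft(row, n=4096))
    feats["energy"] = float(np.sum(spec ** 2))
    spec = np.abs(np.fft.rfft(row, n=2048))
    half = len(spec) // 2
    feats["hi_lo_ratio"] = float(np.sum(spec[half:]) / (np.sum(spec[:half]) + 1e-12))
    spec = np.abs(np.fft.rfft(row, n=1024)) + 1e-12
    # Spectral flatness: geometric mean over arithmetic mean, in (0, 1].
    feats["flatness"] = float(np.exp(np.mean(np.log(spec))) / np.mean(spec))
    bins = np.arange(len(spec))
    feats["centroid"] = float(np.sum(bins * spec) / np.sum(spec))
    return feats

feats = spectral_features(np.random.default_rng(0).random(4096))
```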
Taking face detection of a target user who needs to carry out a financial loan in a financial-loan usage scenario as an example, the face detection model training method and the face detection method provided by the present application are described. Referring to fig. 6, fig. 6 is a schematic diagram of the front-end display of the face detection method provided by the present application. A terminal (for example, the terminal 10-1 and the terminal 10-2 in fig. 1) is provided with a client capable of displaying the corresponding financial-loan software, for example, a client or plug-in for carrying out financial activity with virtual or physical resources, through which the user can obtain a loan (for example, through financial payment in an instant messaging client, or by applying for a loan from a financial institution or platform through the corresponding client). The terminal is connected to the server 200 through the network 300; the network 300 may be a wide area network or a local area network, or a combination of the two, and uses wireless links to implement data transmission. A server (e.g., the server 200 in fig. 1) is a server of an enterprise that provides financial services such as payment, loan, and financing for banks, securities firms, mutual funds, and the like. When a user who needs to transact related financial business uses a client device to access services provided by the enterprise's client server, the client server triggers a face detection applet in the instant messaging client of the user terminal to realize real-time detection of face information and prevent attack information from stealing the user's face information through a three-dimensional face image.
Referring to fig. 7, fig. 7 is a schematic diagram of a use process of the face detection method provided by the present application, where the face detection method provided by the present application includes the following steps:
step 701: the server acquires a face living body detection request.
Step 702: and responding to the face living body detection request, and collecting the face image of the target user by the terminal.
Step 703: and acquiring model parameters of the face detection model and threshold information corresponding to the financial scene identification from the financial cloud server network through the target user identification.
For different usage scenarios of the face detection model, a matched threshold can be set. Any input face picture can be fed into the classification network for prediction to obtain a prediction score in the interval [0, 1], which is compared with the set threshold: if the prediction score is smaller than the threshold, the picture is judged to be an attack picture and prompt information is sent out; otherwise, the picture is a genuine picture.
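The threshold comparison can be sketched as follows (the threshold values are illustrative, not values from the text):

```python
def classify_face(prediction_score, threshold):
    """Scores below the scene-specific threshold are judged attack pictures."""
    return "attack" if prediction_score < threshold else "genuine"

# A stricter scene (e.g. lending) may set a higher threshold than payment.
verdict_low = classify_face(0.31, threshold=0.5)
verdict_high = classify_face(0.87, threshold=0.5)
```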
Step 704: triggering a deployed face image processing model in the instant messaging client applet, encrypting the face image through the face image processing model, and transmitting the face image to the instant messaging client applet server.
The face image is encrypted through the face image processing model to form a first encrypted face image and a second encrypted face image, so that the safety of face image transmission can be effectively ensured, and the transmitted encrypted face image can be analyzed and processed only by the instant messaging client applet server.
Step 705: triggering a face detection model deployed in an instant messaging client applet server, and processing the face detection model to form the face classification probability of the target user.
Step 706: and determining a corresponding face classification result based on a probability threshold value of matching the face classification probability of the target user with the face detection model.
Step 707: and outputting a face classification result of the target user, and determining whether the face detection is passed.
The beneficial effects are that:
the face image of the target user acquired by the terminal is acquired; performing first encryption processing on the acquired face image to form a first encrypted face image; performing second encryption processing on the first encrypted face image through a face image processing model to form a second encrypted face image; and transmitting the second encrypted face image to a corresponding face detection model to determine a corresponding face classification result through the face detection model, so that the security of face image transmission is ensured, the complexity of data transmission in the face detection process is reduced, the generalization capability and the data processing capability of the face image processing model are stronger, and the method is suitable for different face image detection processing environments.
The foregoing description of the embodiments of the invention is not intended to limit the scope of the invention, but is intended to cover any modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (13)

1. A face image transmission method, the method comprising:
acquiring a face image of a target user acquired by a terminal;
performing first encryption processing on the acquired face image to form a first encrypted face image;
carrying out gray level processing on the first encrypted face image, and determining a corresponding face image gray level value;
processing the gray value of the face image through a depth residual error network in a face image processing model, and determining a first feature vector matched with the gray value of the face image;
processing the first feature vector through a convolutional neural network based on an attention mechanism in a face image processing model to form a second feature vector;
obtaining a logarithmic probability vector matched with the second feature vector;
performing second encryption processing on the first encrypted face image based on the logarithmic probability vector to form a second encrypted face image;
And transmitting the second encrypted face image to a corresponding face detection model so as to determine a corresponding face classification result through the face detection model.
2. The method of claim 1, wherein the performing a first encryption process on the acquired face image to form a first encrypted face image comprises:
determining environmental features matched with the face image and a corresponding signature algorithm in response to the acquired face image;
triggering a first encryption process, and determining acquisition equipment serial numbers, time stamp information, counter information and random character string information matched with the face images through the first encryption process;
and performing first encryption processing on the acquired face image based on the signature algorithm, the acquisition equipment serial number, the timestamp information, the counter information and the random character string information through the first encryption process to form a first encrypted face image.
3. The method of claim 1, wherein the obtaining a log probability vector that matches the second feature vector comprises:
determining the number of blocks of the face image matched with the face image processing model and the predicted value of each block of the face image;
And determining a logarithmic probability vector matched with the second feature vector based on the number of blocks of the face image and the predicted value of each block of the face image.
4. The method according to claim 1, wherein the method further comprises:
acquiring a first training sample set matched with the use environment of the face image processing model, wherein the first training sample comprises a positive example user face image and a negative example user face image;
denoising the first training sample set to form a corresponding second training sample set;
processing the second training sample set through a face image processing model to determine initial parameters of a depth residual error network and initial parameters of a convolutional neural network based on an attention mechanism in the face image processing model;
responding to the initial parameters of the depth residual error network and the initial parameters of the convolutional neural network based on the attention mechanism, processing the second training sample set through the face image processing model, and determining the update parameters corresponding to different neural networks of the face image processing model;
according to the updating parameters corresponding to different neural networks of the face image processing model, the initial parameters of the depth residual error network of the face image processing model and the initial parameters of the convolutional neural network based on the attention mechanism are respectively and iteratively updated through the second training sample set, so that the face image is subjected to image type identification through the face image processing model.
5. The method of claim 4, wherein the acquiring a first set of training samples that match the environment of use of the facial image processing model comprises:
performing image augmentation processing on the face image;
based on the processing result of image augmentation, determining the corresponding face position through a face detection algorithm, and intercepting a face image comprising a background image;
and processing the face image comprising the background image through a depth processing network of the face image processing model to form a corresponding depth map as any training sample in a training sample set matched with the use environment of the face image processing model.
6. The method according to claim 4, wherein the method further comprises:
determining a corresponding first depth image based on the positive-example user face image, and determining a corresponding second depth image based on the negative-example user face image;
when the initial parameters of the deep residual network and the initial parameters of the attention-based convolutional neural network in the face image processing model are determined, determining, through the depth processing network of the face image processing model, a third depth image matched with the corresponding recombined feature vector;
and comparing the first depth image or the second depth image with the third depth image to monitor the training accuracy on the face images of the different positive-example and negative-example users.
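The comparison step can be reduced to a simple consistency check between a reference depth image (first or second) and the predicted third depth image. The mean-absolute-error criterion and the `tol` parameter below are assumptions; the claim only requires that the comparison monitor training accuracy.

```python
import numpy as np

def depth_consistency(ref_depth, pred_depth, tol=0.1):
    # Compare a reference depth image with the predicted (third) depth image;
    # mean absolute error below `tol` counts as consistent supervision.
    return float(np.abs(ref_depth - pred_depth).mean()) <= tol
```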
7. The method of claim 4, wherein the denoising the first training sample set to form a corresponding second training sample set comprises:
determining a dynamic noise threshold matched with the usage environment of the face image processing model;
performing noise reduction processing on the face images through the attention-based convolutional neural network according to the dynamic noise threshold to form face images matched with the dynamic noise threshold;
and determining the corresponding second training sample set based on the face images matched with the dynamic noise threshold; or,
determining a static noise threshold matched with the usage environment of the face image processing model;
performing noise reduction processing on the face images through the attention-based convolutional neural network according to the static noise threshold to form face images matched with the static noise threshold;
and determining the corresponding second training sample set based on the face images matched with the static noise threshold.
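The two branches differ only in how the threshold is chosen: dynamically from the image statistics of the usage environment, or as a fixed constant for a stable capture environment. A sketch under stated assumptions (the mean-plus-k-sigma rule, the `STATIC_THRESHOLD` value, and the outlier-clipping `denoise` step replace the attention-based CNN denoiser of the claim):

```python
import numpy as np

def estimate_dynamic_threshold(img, k=1.5):
    # Dynamic threshold adapts to the image statistics of the environment.
    return float(img.mean() + k * img.std())

STATIC_THRESHOLD = 0.9  # fixed threshold for a stable capture environment

def denoise(img, threshold):
    # Replace outlier pixels above the threshold with the image mean.
    out = img.copy()
    out[out > threshold] = img.mean()
    return out
```

The denoised images (from whichever branch was taken) then form the second training sample set.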
8. The method according to claim 1, wherein the method further comprises:
processing the second encrypted face image through a face detection model to form a face classification probability of the target user;
determining a corresponding face classification result based on the face classification probability of the target user and a probability threshold matched with the face detection model;
and outputting the face classification result of the target user.
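The threshold step is a simple decision rule. The labels and the default threshold of 0.5 below are illustrative assumptions; the claim only requires that the probability be compared against a threshold matched with the detection model.

```python
def classify(prob, threshold=0.5):
    # Map the face classification probability to a classification result
    # using the detection model's matched probability threshold.
    return "live_face" if prob >= threshold else "non_face_or_spoof"
```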
9. A face image transmission apparatus, the apparatus comprising:
an information transmission module, configured to acquire a face image of a target user collected by a terminal;
an information processing module, configured to perform first encryption processing on the collected face image to form a first encrypted face image;
the information processing module being configured to perform gray-level processing on the first encrypted face image and determine a corresponding face image gray value;
process the face image gray value through a deep residual network in a face image processing model to determine a first feature vector matched with the face image gray value;
process the first feature vector through an attention-based convolutional neural network in the face image processing model to form a second feature vector;
obtain a logarithmic probability vector matched with the second feature vector;
and perform second encryption processing on the first encrypted face image based on the logarithmic probability vector to form a second encrypted face image;
the information transmission module being configured to transmit the second encrypted face image to a corresponding face detection model, so that a corresponding face classification result is determined through the face detection model.
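The processing chain of claim 9 (gray-level processing, residual features, attention features, log-probability vector, second encryption keyed by that vector) can be sketched end to end. Everything below is a toy stand-in: the fixed 64-dimensional input, the single residual and attention steps, and the SHA-256-derived XOR keystream in `second_encrypt` are assumptions, not the patent's actual networks or cipher.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(7)
W_RES = rng.normal(scale=0.1, size=(64, 64))   # toy deep-residual weights
W_ATT = rng.normal(scale=0.1, size=(64, 64))   # toy attention weights
W_CLS = rng.normal(scale=0.1, size=(64, 2))    # toy classifier weights

def grayscale(rgb):
    # Gray-level processing: standard luminance weighting.
    return rgb @ np.array([0.299, 0.587, 0.114])

def features(gray):
    x = gray.reshape(-1)[:64]                  # fixed-size toy input
    first = x + np.tanh(x @ W_RES)             # residual step -> first vector
    scores = np.exp(first @ W_ATT)
    second = first * scores / scores.sum()     # attention step -> second vector
    return second

def log_prob_vector(second):
    # Log-softmax of the classifier logits: the logarithmic probability vector.
    logits = second @ W_CLS
    return logits - np.log(np.exp(logits).sum())

def second_encrypt(first_cipher: bytes, logp):
    # Derive a keystream from the log-probability vector and XOR it in,
    # binding the second ciphertext to the model's output.
    key = hashlib.sha256(logp.tobytes()).digest()
    stream = (key * (len(first_cipher) // len(key) + 1))[:len(first_cipher)]
    return bytes(a ^ b for a, b in zip(first_cipher, stream))
```

Because the XOR keystream is symmetric, a receiver that recomputes the same log-probability vector can invert the second encryption.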
10. The apparatus of claim 9, wherein:
the information processing module is configured to determine, in response to the collected face image, an environment feature matched with the face image and a corresponding signature algorithm;
the information processing module is configured to trigger a first encryption process, and to determine, through the first encryption process, an acquisition device serial number, timestamp information, counter information and random character string information matched with the face image;
the information processing module is configured to perform first encryption processing on the collected face image through the first encryption process based on the signature algorithm, the acquisition device serial number, the timestamp information, the counter information and the random character string information, to form the first encrypted face image.
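A sketch of the first encryption process, combining the four claimed fields with a signature algorithm. The claim names no concrete algorithm, so HMAC-SHA256, the `DEVICE_KEY` name, and the JSON header layout are assumptions chosen for the sketch; a production system would negotiate the signature algorithm from the environment feature as the claim describes.

```python
import hashlib
import hmac
import json
import secrets
import time

DEVICE_KEY = b"per-device-secret"   # hypothetical provisioned key

_counter = 0

def first_encrypt(face_bytes: bytes, serial: str):
    global _counter
    _counter += 1                              # counter information
    header = {
        "serial": serial,                      # acquisition device serial number
        "ts": int(time.time()),                # timestamp information
        "ctr": _counter,                       # counter information
        "nonce": secrets.token_hex(8),         # random character string
    }
    payload = json.dumps(header, sort_keys=True).encode() + b"|" + face_bytes
    # Signature algorithm stand-in: HMAC-SHA256 over header and image bytes.
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"header": header, "payload": face_bytes, "sig": sig}

def verify(msg):
    payload = json.dumps(msg["header"], sort_keys=True).encode() + b"|" + msg["payload"]
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(msg["sig"], expected)
```

The timestamp, counter, and nonce together make each signed message unique, which defends the transmission against replay.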
11. The apparatus of claim 9, wherein the apparatus further comprises:
a training module, configured to acquire a first training sample set matched with the usage environment of the face image processing model, wherein the first training sample set comprises positive-example user face images and negative-example user face images;
the training module being configured to denoise the first training sample set to form a corresponding second training sample set;
the training module being configured to process the second training sample set through the face image processing model to determine initial parameters of the deep residual network and initial parameters of the attention-based convolutional neural network in the face image processing model;
the training module being configured to, in response to the initial parameters of the deep residual network and the initial parameters of the attention-based convolutional neural network, process the second training sample set through the face image processing model to determine update parameters corresponding to the different neural networks of the face image processing model;
the training module being configured to iteratively update, through the second training sample set, the initial parameters of the deep residual network and the initial parameters of the attention-based convolutional neural network of the face image processing model according to the update parameters corresponding to the different neural networks, so as to implement image type recognition on the face image through the face image processing model.
12. An electronic device, the electronic device comprising:
a memory for storing executable instructions;
a processor for implementing the face image transmission method of any one of claims 1 to 7 when executing the executable instructions stored in the memory.
13. A computer readable storage medium storing executable instructions which when executed by a processor implement the face image transmission method of any one of claims 1 to 7.
CN202011349205.2A 2020-11-26 2020-11-26 Face image transmission method and device, electronic equipment and storage medium Active CN113542527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011349205.2A CN113542527B (en) 2020-11-26 2020-11-26 Face image transmission method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113542527A CN113542527A (en) 2021-10-22
CN113542527B true CN113542527B (en) 2023-08-18

Family

ID=78094293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011349205.2A Active CN113542527B (en) 2020-11-26 2020-11-26 Face image transmission method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113542527B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117650945B (en) * 2024-01-29 2024-04-05 南通云链通信息科技有限公司 Self-media data security operation management method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107666386A (en) * 2016-07-27 2018-02-06 复凌科技(上海)有限公司 A kind of data safe transmission method and device
CN107833032A (en) * 2017-10-26 2018-03-23 胡祥义 It is a kind of based on mobile phone without card Bank Account Number implementation method
CN111314306A (en) * 2020-01-17 2020-06-19 网易(杭州)网络有限公司 Interface access method and device, electronic equipment and storage medium
WO2020125623A1 (en) * 2018-12-20 2020-06-25 上海瑾盛通信科技有限公司 Method and device for live body detection, storage medium, and electronic device
CN111400676A (en) * 2020-02-28 2020-07-10 平安国际智慧城市科技股份有限公司 Service data processing method, device, equipment and medium based on sharing authority
CN111597884A (en) * 2020-04-03 2020-08-28 平安科技(深圳)有限公司 Facial action unit identification method and device, electronic equipment and storage medium
CN111680672A (en) * 2020-08-14 2020-09-18 腾讯科技(深圳)有限公司 Face living body detection method, system, device, computer equipment and storage medium
CN111767906A (en) * 2020-09-01 2020-10-13 腾讯科技(深圳)有限公司 Face detection model training method, face detection device and electronic equipment




Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40054023

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant