CN111222447A - Living body detection method based on neural network and multichannel fusion LBP (local binary pattern) characteristics - Google Patents

Living body detection method based on neural network and multichannel fusion LBP (local binary pattern) characteristics

Info

Publication number
CN111222447A
CN111222447A
Authority
CN
China
Prior art keywords
lbp
neural network
channels
face image
detection method
Prior art date
Legal status
Pending
Application number
CN201911426011.5A
Other languages
Chinese (zh)
Inventor
王伟栋
沈修平
Current Assignee
SHANGHAI ULUCU ELECTRONIC TECHNOLOGY CO LTD
Original Assignee
SHANGHAI ULUCU ELECTRONIC TECHNOLOGY CO LTD
Priority date
Filing date
Publication date
Application filed by SHANGHAI ULUCU ELECTRONIC TECHNOLOGY CO LTD filed Critical SHANGHAI ULUCU ELECTRONIC TECHNOLOGY CO LTD
Priority to CN201911426011.5A priority Critical patent/CN111222447A/en
Publication of CN111222447A publication Critical patent/CN111222447A/en
Pending legal-status Critical Current

Classifications

    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172 Classification, e.g. identification
    • G06N3/045 Combinations of networks
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G06V10/56 Extraction of image or video features relating to colour


Abstract

The invention relates to a living body detection method based on a neural network and multi-channel fusion LBP features, which comprises the following steps: collecting an RGB image and an infrared image containing the face to be detected; preprocessing the two images; respectively obtaining the LBP features of the two preprocessed images on a plurality of channels and performing feature fusion in a plurality of modes; splicing the fused LBP features of the two images and calculating the corresponding histogram features; and inputting the histogram features into a neural network for binary classification to judge whether the face to be detected is a living body. Compared with LBP features obtained from a gray-scale image alone, the method obtains more complete local binary features by fusing the LBP features of multiple channels in multiple modes, improving the ability of the features to distinguish differences between face textures; meanwhile, the texture features of the visible-light and infrared face images are combined and judged by a neural network, further improving the accuracy and robustness of the living body judgment.

Description

Living body detection method based on neural network and multichannel fusion LBP (local binary pattern) characteristics
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a living body detection method based on a neural network and a multichannel fusion LBP (local binary pattern) characteristic.
Background
Face recognition is an identity authentication technology that has emerged in recent years; owing to its efficiency, accuracy and low cost, it is widely applied in fields such as security, finance and personnel management.
In the current era of information explosion, face photos and videos are ever easier to acquire, which has given rise to a large number of fake-face attacks against face recognition technology. Once such an attack succeeds, it causes great loss to the user. The task of living body detection technology is to detect these fake faces and thus eliminate a potential safety hazard of the face recognition system.
Current face recognition systems generally adopt interactive living body detection: the system instructs the user to complete random action instructions such as turning left, turning right, opening the mouth and blinking in sequence. If the actions made by the user do not match those indicated by the system, the user is considered a fake face. This method is highly secure and effectively prevents photo and video attacks, but it requires the user's active cooperation and gives a poor experience.
Living body detection based on texture features, by contrast, is a non-interactive, silent method. It requires no user cooperation and distinguishes real faces from fake faces solely through differences in the texture feature information of the face image. However, the traditional LBP feature mainly describes gray-scale image texture and ignores the texture information of color images, so its ability to distinguish the texture differences between real and fake faces is limited.
Object of the Invention
The invention aims to provide a living body detection method based on a neural network and multi-channel fusion LBP features, which obtains LBP features that describe color texture information more fully, improves the ability of the LBP features to distinguish the texture characteristics of real and fake faces, and realizes effective detection of real and fake faces.
The specific technical scheme of the invention is as follows: a living body detection method based on a neural network and multi-channel fusion LBP features collects an RGB image and an infrared image containing the face to be detected; preprocesses the two images; respectively obtains the LBP features of the two preprocessed images on a plurality of channels and performs feature fusion in a plurality of modes; splices the fused LBP features of the two images and calculates the corresponding histogram features; and inputs the histogram features into a neural network for binary classification to judge whether the face to be detected is a living body.
Further, the pre-processing is to scale the sizes of the two images to a uniform size.
Furthermore, the feature fusion is to calculate the LBP features of the three channels corresponding to the color face image on the HSV color space, and then to fuse the LBP features of the three channels by using an extreme value mode and a summation mode, respectively, to generate a brand new LBP feature.
Further, the specific method for generating the brand-new LBP feature is as follows:
(1) converting LBP values of three channels of each pixel point into binary sequences;
(2) solving an extreme value of a binary sequence at a corresponding position in three channels of each pixel point according to a bit;
(3) summing binary sequences at corresponding positions in three channels of each pixel point according to bits;
(4) respectively generating an output sequence of an extreme value mode and an output sequence of a summation mode according to the sequence fused by each pixel point through a mapping function;
(5) converting the output sequence into corresponding LBP characteristics;
(6) calculating and normalizing histogram features corresponding to the LBP features;
(7) further, calculating corresponding feature mean values in two modes respectively for histogram features calculated by the color face image;
(8) splicing the histogram feature mean values of the two modes with the histogram features obtained by calculating the infrared face image to form feature vectors for detection;
(9) inputting the feature vectors into a Deep Belief Network (DBN), and training the network by using the cross entropy as a network loss function;
(10) and finally, inputting the color face image and the infrared face image of the user to be detected into the trained DBN, and predicting whether the user to be detected is a false face.
Further, the window size of the LBP operator can be expanded from the 3x3 neighborhood centered on the current pixel point to a circular neighborhood of arbitrary radius centered on the current pixel point.
Furthermore, deep feature maps of multiple channels of the color face image and the infrared face image can be extracted through a pre-trained convolutional neural network and used for subsequent multi-channel fusion LBP features.
Further, the convolutional neural network used for extracting the multi-channel deep feature map may use any one of AlexNet, ZFNet, VGG, ResNet, GoogleNet, or SENet structures.
Furthermore, for the extracted deep feature map, the deep features of the color face image and the deep features of the infrared face image can be spliced in series according to the channel, and then the subsequent LBP feature calculation is carried out.
Further, deep features with the same number of channels are extracted from the color face image and the infrared face image and fused by adding the corresponding channels, and the fused deep features are used for the subsequent LBP feature calculation.
Technical effects
1) The invention is a non-interactive living body detection method that requires no action cooperation from the user;
2) aiming at the insufficient expressiveness of traditional LBP features for color image texture, the multi-channel fusion LBP features express the texture of the color image while taking into account the relationship of each pixel point across the channels, and the fused features contain little redundancy;
3) combining the LBP features of the infrared face image further improves the ability to distinguish different texture information;
4) for large-scale data samples, a neural network has strong robustness and nonlinear mapping capability, so compared with a traditional SVM classifier, the neural network achieves higher classification accuracy.
Drawings
Fig. 1 is a schematic diagram of a conventional LBP feature calculation method.
Fig. 2 is an image of the LBP features.
FIG. 3 is a diagram illustrating an example of fusion of two fusion modes in the example.
Fig. 4 is a flow chart of the method of the present invention.
Fig. 5 is a structural diagram of a DBN.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings:
the invention provides a living body detection method based on a neural network and a multi-channel fusion LBP characteristic.
Fig. 1 is a schematic diagram of a conventional LBP feature calculation method. The calculation process is as follows:
the LBP operator is defined as that in each window of 3 × 3 on the gray image, the central pixel of the window is used as a threshold value, the gray values of the adjacent 8 pixels are compared with the central pixel, if the surrounding pixel values are greater than or equal to the central pixel value, the position of the pixel point is mapped to 1, otherwise, the position is 0. As shown in the figure, 8 points in the 3 × 3 neighborhood can generate 8-bit binary numbers through comparison, the 8-bit binary numbers are ordered from the upper left corner point in the clockwise direction to be the LBP binary sequence of the window center pixel point, and the corresponding decimal number is the LBP value of the window center pixel point. The LBP values of all the pixel points in the image are calculated to obtain the corresponding LBP features, and it can be known from fig. 2 that the LBP features of the image are still an "image".
The method for fusing the LBP characteristics of the multiple channels firstly calculates the corresponding LBP characteristics for the three channels of the color image through the traditional LBP characteristic calculation method, and then fuses the LBP characteristics of the three channels through two modes to generate brand new LBP characteristics. The two fusion modes are as follows:
1) Extremum mode
The LBP values of each pixel point in the original image on the three channels are converted into three binary sequences, and an extreme value is taken bit by bit. The formula is as follows:

t_max(x, y, n) = max( t_n^1(x, y), t_n^2(x, y), t_n^3(x, y) )

2) Summation mode
The binary sequences are summed bit by bit. The formula is as follows:

t_sum(x, y, n) = t_n^1(x, y) + t_n^2(x, y) + t_n^3(x, y)

In the above formulas, t_max and t_sum represent the extremum fusion and the summation fusion respectively; max(x_1, x_2, …, x_n) is the maximum function, returning the maximum of x_1, x_2, …, x_n; and t_n^c(x, y) represents the mapping value of the nth pixel point in the neighborhood centered on pixel point (x, y) on channel c.
Further, the value of each bit in the fused sequence is remapped to the binary space through the following mapping functions:

g_k^max(t_max) = 1 if t_max = k, otherwise 0, with k ∈ {0, 1}
g_k^sum(t_sum) = 1 if t_sum = k, otherwise 0, with k ∈ {0, 1, 2, 3}

where t_max and t_sum are the fused mapping values of each pixel point under the extremum mode and the summation mode. Fig. 3 illustrates a fusion example of the two modes. In the extremum mode, t_max has a value range of {0, 1}; in the summation mode, t_sum has a value range of {0, 1, 2, 3}. Since t_max ∈ {0, 1} and t_sum ∈ {0, 1, 2, 3}, the mapping generates 2 extremum-mode outputs and 4 summation-mode outputs from the LBP values of the three channels of any pixel point in the color image. Further, a color face image can generate 6 LBP features by the above method: LBP_1^max, LBP_2^max, LBP_1^sum, …, LBP_4^sum.
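Under this indicator-mapping reading, the two fusion modes above can be sketched as follows (a minimal sketch; the function names, the bit-plane layout, and the MSB-first conversion are my own assumptions):

```python
import numpy as np

def fuse_lbp_channels(lbp_bits):
    """Fuse per-channel LBP bit planes in the extremum and summation modes.

    lbp_bits: array of shape (3, H, W, 8) holding the binary sequence
    (one bit per neighbor) of each pixel on the three channels.
    Returns the 2 extremum-mode and 4 summation-mode output bit planes
    produced by the indicator mapping g_k(t) = 1 if t == k else 0.
    """
    lbp_bits = np.asarray(lbp_bits, dtype=np.int32)
    t_max = lbp_bits.max(axis=0)          # bitwise extremum, values in {0, 1}
    t_sum = lbp_bits.sum(axis=0)          # bitwise sum, values in {0, 1, 2, 3}
    max_outputs = [(t_max == k).astype(np.uint8) for k in (0, 1)]
    sum_outputs = [(t_sum == k).astype(np.uint8) for k in (0, 1, 2, 3)]
    return max_outputs, sum_outputs

def bits_to_lbp(bits):
    """Convert an (H, W, 8) bit plane back to an LBP image (first bit = MSB)."""
    weights = 1 << np.arange(7, -1, -1)   # [128, 64, ..., 1]
    return (bits * weights).sum(axis=-1).astype(np.uint8)
```

Converting the 2 + 4 output bit planes back to values gives the 6 fused LBP features of a color face image.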
Fig. 4 is a flow chart of the method of the present invention. The method comprises the following specific steps:
1) and collecting color face images and infrared face images for training the DBN.
2) All face images are scaled to 256x256 using nearest-neighbor interpolation.
3) And converting the color face image from the RGB color space to the HSV color space.
4) And calculating LBP characteristics of the infrared face image.
5) The LBP features of the color face image on the H, S, V color channels are calculated respectively. The calculation formula is as follows:

t_n^c(x, y) = 1 if f_n^c(x, y) ≥ f^c(x, y), otherwise 0

LBP^c(x, y) = Σ_{n=0}^{7} t_n^c(x, y) · 2^n

wherein t_n^c(x, y) and f_n^c(x, y) respectively represent the mapping value and the pixel value of the nth pixel point in the neighborhood centered on pixel point (x, y) on channel c, and f^c(x, y) represents the pixel value of the pixel point (x, y) on channel c.
6) And fusing the LBP characteristics of the three channels through an extreme value mode and a summation mode.
7) Further, combining the fused LBP features with the previously obtained LBP features of the infrared face image, the histogram corresponding to each LBP feature is calculated and normalized by the image size. The calculation formula is as follows:

H_n^m(v) = (1 / (W × H)) Σ_{x=1}^{W} Σ_{y=1}^{H} I( LBP_n^m(x, y) = v ),  v ∈ [0, 255]

wherein W and H represent the width and height of the face image respectively; LBP_n^m(x, y) represents the LBP value at position (x, y) of the nth LBP feature obtained by fusion mode m; v is the pixel value, with value range [0, 255]; H_n^m(v) represents the normalized count of pixels with value v contained in the nth LBP feature obtained by fusion mode m; and I(·) is an indicator function, with the following expression:

I(z) = 1 if z is true, otherwise 0
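A minimal sketch of the normalized histogram computation in step 7) (the function name is my own):

```python
import numpy as np

def normalized_lbp_histogram(lbp_image, bins=256):
    """Histogram of LBP values, normalized by the image size W * H,
    so that the bins sum to 1."""
    lbp_image = np.asarray(lbp_image)
    # LBP values are non-negative integers in [0, bins - 1]
    hist = np.bincount(lbp_image.ravel(), minlength=bins).astype(np.float64)
    return hist / lbp_image.size
```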
8) Finally, the normalized histograms are averaged according to their corresponding fusion mode and spliced in series to obtain the feature vector used to train the classifier. The expression of the feature vector is as follows:

F = [ (1/2) Σ_{n=1}^{2} H_n^max,  (1/4) Σ_{n=1}^{4} H_n^sum,  H_IR ]

wherein H_IR represents the histogram of the infrared face image.
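Step 8) — averaging the histograms per fusion mode and splicing in series — can be sketched as (names are my own):

```python
import numpy as np

def build_feature_vector(max_hists, sum_hists, ir_hist):
    """Concatenate the per-mode histogram means with the infrared histogram.

    max_hists: list of 2 normalized histograms from the extremum mode
    sum_hists: list of 4 normalized histograms from the summation mode
    ir_hist:   normalized histogram of the infrared face image
    """
    h_max = np.mean(max_hists, axis=0)   # mean over the 2 extremum-mode histograms
    h_sum = np.mean(sum_hists, axis=0)   # mean over the 4 summation-mode histograms
    return np.concatenate([h_max, h_sum, np.asarray(ir_hist)])
```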
9) The DBN is used as the classifier to predict whether a face is real or fake. The feature vectors of all collected face images are obtained by the above method and input into the DBN, and according to the prediction results of the DBN and the label information of the face images, the corresponding cross entropy is calculated as the loss function to train the DBN. The structure of the DBN is shown in fig. 5.
10) A feature vector is calculated from the color image and the infrared image of the face to be detected and input into the trained DBN to judge whether the face to be detected is real or fake.
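The cross entropy used as the loss function in step 9) is, for the binary real/fake labels here, the standard binary cross entropy (a generic formulation; the function name is my own):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross entropy between labels (1 = live, 0 = fake)
    and predicted live probabilities; eps guards against log(0)."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.clip(np.asarray(y_pred, dtype=np.float64), eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))
```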
Alternative solutions
As an optional implementation method, the window size of the LBP operator may be expanded to a circular neighborhood of any radius with the current pixel point as the center, from a neighborhood of 3 × 3 around the current pixel point as the center.
As an optional implementation method, deep feature maps of multiple channels of the color face image and the infrared face image can be extracted through a pre-trained convolutional neural network and used for subsequent multi-channel fusion LBP features.
In combination with the above optional implementation method, the convolutional neural network used for extracting the multi-channel deep feature map may use any one of AlexNet, ZFNet, VGG, ResNet, GoogleNet, or SENet structures.
By combining the optional implementation method, for the extracted deep feature map, the deep features of the color face image and the deep features of the infrared face image can be spliced in series according to the channel, and then the subsequent LBP feature calculation is carried out.
As an optional implementation method, deep features with the same number of channels can be extracted from the color face image and the infrared face image and fused by adding the corresponding channels, and the fused deep features are used for the subsequent LBP feature calculation.

Claims (9)

1. A living body detection method based on a neural network and multi-channel fusion LBP characteristics collects RGB images and infrared images containing a face to be detected; preprocessing the two images; respectively obtaining LBP characteristics of the two preprocessed images on a plurality of channels and performing characteristic fusion by adopting a plurality of modes; splicing the fused LBP characteristics of the two images and calculating corresponding histogram characteristics; and inputting the histogram features into a neural network for binary classification, and judging whether the face to be detected is a living body.
2. The live body detection method based on neural network and multi-channel fusion LBP characteristics as claimed in claim 1, wherein said preprocessing is scaling the sizes of two images to a uniform size.
3. The in-vivo detection method based on the neural network and the multi-channel fusion LBP characteristic as claimed in claim 1, wherein the characteristic fusion is to calculate the LBP characteristics of the corresponding three channels of the color face image on an HSV color space, and then to fuse the LBP characteristics of the three channels by respectively adopting an extreme value mode and a summation mode to generate a brand new LBP characteristic.
4. The in-vivo detection method based on neural network and multi-channel fusion LBP characteristics as claimed in claim 3, wherein said specific method for generating new LBP characteristics is as follows:
(1) converting LBP values of three channels of each pixel point into binary sequences;
(2) solving an extreme value of a binary sequence at a corresponding position in three channels of each pixel point according to a bit;
(3) summing binary sequences at corresponding positions in three channels of each pixel point according to bits;
(4) respectively generating an output sequence of an extreme value mode and an output sequence of a summation mode according to the sequence fused by each pixel point through a mapping function;
(5) converting the output sequence into corresponding LBP characteristics;
(6) calculating and normalizing histogram features corresponding to the LBP features;
(7) further, calculating corresponding feature mean values in two modes respectively for histogram features calculated by the color face image;
(8) splicing the histogram feature mean values of the two modes with the histogram features obtained by calculating the infrared face image to form feature vectors for detection;
(9) inputting the feature vectors into a Deep Belief Network (DBN), and training the network by using the cross entropy as a network loss function;
(10) and finally, inputting the color face image and the infrared face image of the user to be detected into the trained DBN, and predicting whether the user to be detected is a false face.
5. The in-vivo detection method based on neural network and multi-channel fusion LBP characteristics as claimed in claim 4, wherein the window size of LBP operator is expanded from a neighborhood of 3x3 around the current pixel point as the center to a circular neighborhood of arbitrary radius around the current pixel point as the center.
6. The in-vivo detection method based on neural network and multi-channel fusion LBP characteristics as claimed in claim 4, wherein deep feature maps of a plurality of channels of the color face image and the infrared face image are extracted through a pre-trained convolutional neural network for subsequent multi-channel fusion LBP characteristics.
7. The in-vivo detection method based on the neural network and the multichannel fusion LBP characteristics as claimed in claim 4, wherein the convolutional neural network used for extracting the multichannel deep characteristic diagram can use any one structure of AlexNet, ZFNet, VGG, ResNet, GoogleNet or SENet.
8. The in-vivo detection method based on the neural network and the multi-channel fusion LBP characteristics as claimed in claim 4, wherein for the extracted deep characteristic map, the deep characteristics of the color face image and the deep characteristics of the infrared face image can be spliced in series according to the channels, and then the subsequent LBP characteristic calculation is performed.
9. The living body detection method based on neural network and multi-channel fusion LBP characteristics as claimed in claim 4, wherein deep characteristics with the same number of channels are extracted from the color face image and the infrared face image and fused by adding the corresponding channels, and the fused deep characteristics are used for the subsequent LBP characteristic calculation.
CN201911426011.5A 2019-12-31 2019-12-31 Living body detection method based on neural network and multichannel fusion LBP (local binary pattern) characteristics Pending CN111222447A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911426011.5A CN111222447A (en) 2019-12-31 2019-12-31 Living body detection method based on neural network and multichannel fusion LBP (local binary pattern) characteristics


Publications (1)

Publication Number Publication Date
CN111222447A (en) 2020-06-02

Family

ID=70828063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911426011.5A Pending CN111222447A (en) 2019-12-31 2019-12-31 Living body detection method based on neural network and multichannel fusion LBP (local binary pattern) characteristics

Country Status (1)

Country Link
CN (1) CN111222447A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967344A (en) * 2020-07-28 2020-11-20 南京信息工程大学 Refined feature fusion method for face forgery video detection
CN111967344B (en) * 2020-07-28 2023-06-20 南京信息工程大学 Face fake video detection oriented refinement feature fusion method
CN113436281A (en) * 2021-06-16 2021-09-24 中国电子科技集团公司第五十四研究所 Remote sensing image sample processing method fused with LBP (local binary pattern) characteristics
CN113436281B (en) * 2021-06-16 2022-07-12 中国电子科技集团公司第五十四研究所 Remote sensing image sample processing method fused with LBP (local binary pattern) characteristics
CN113724091A (en) * 2021-08-13 2021-11-30 健医信息科技(上海)股份有限公司 Insurance claim settlement method and device
CN117172783A (en) * 2023-07-17 2023-12-05 湖北盈嘉集团有限公司 Credit and debt cross checking system for confirming account receivables
CN117172783B (en) * 2023-07-17 2024-05-07 湖北盈嘉集团有限公司 Credit and debt cross checking system for confirming account receivables


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination