CN111488906B - Low-resolution image recognition method based on channel correlation PCANet - Google Patents
- Publication number: CN111488906B
- Application number: CN202010147013.7A
- Authority
- CN
- China
- Prior art keywords
- image
- matrix
- steps
- channel
- feature map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
In the feature extraction stage, the low-resolution image to be recognized is first upsampled by bicubic interpolation so that it has the same resolution as the training-set images. The interpolated image is then deep-convolution filtered with channel-dependent convolutions to obtain a high-dimensional feature map of the input image. Along the channel direction, the feature map is compressed and encoded with a fixed step length to obtain the pattern maps of the input image. Local histogram features are extracted from the pattern maps and concatenated to form the final high-dimensional histogram feature. In the classification stage, the chi-square distance in the high-dimensional histogram feature space yields a distance measure from the image to be recognized to each training image, and the class label of the training sample with the smallest distance is taken as the class label of the image to be recognized. The invention can effectively recognize low-resolution input images.
Description
Technical Field
The invention relates to the fields of image processing and pattern recognition, and in particular to robust image recognition when there is a large difference between the image to be recognized and the training images; it is mainly intended for processing and recognizing real-world images.
Background
Recently, in the fields of computer vision and image recognition, deep neural networks (Deep Neural Network, DNN), represented by convolutional neural networks (Convolutional Neural Networks, CNN), have achieved great success. On some public datasets, the classification ability of leading deep learning methods even exceeds that of humans, for example the verification accuracy on the LFW face database, the image classification accuracy on ImageNet, and the handwritten digit recognition accuracy on MNIST. In practice, however, the image to be recognized often differs greatly from the training images in "distribution" or "structure", which causes DNNs to make large-scale recognition errors; this phenomenon is called "covariate shift" in the deep learning literature. Among the various covariate shift problems, the difference in resolution between the image to be recognized and the training images is particularly prominent. Typically, the resolution of the image to be recognized is much lower than that of the training images, which causes the performance of existing DNN models to collapse.
Disclosure of Invention
To overcome the low image recognition accuracy and poor practicality caused by covariate shift in existing image recognition methods, the invention provides a robust image recognition method based on the channel-dependent PCANet (Channel Independent PCANet, CIPCANet) with high accuracy and good practicality. CIPCANet effectively overcomes the recognition problems caused by resolution differences and can greatly improve image recognition performance.
The technical scheme adopted for solving the technical problems is as follows:
A low-resolution image recognition method based on the channel-dependent PCANet comprises the following steps:
Step 1: select J images A = {A_1, …, A_J} as the training set, together with their corresponding class labels, and let Y = {Y_1, …, Y_K} be the set of images to be recognized, i.e., the test set, where A_i ∈ R^(m×n×C_0) and Y_j ∈ R^(μ×ν×C_0) denote real-valued images with C_0 ∈ {1, 3} channels, of size m×n and μ×ν respectively, with μ ≤ m and ν ≤ n;
Step 2: initialize the parameters and input data: set the stage flag to the training stage (the flag indicates whether the network is in the training stage or the testing stage); let l = 0, where l indicates the layer of the network at which the current input image or feature map sits; let X^(0) = A, so that N = J and X_i^(0) = A_i;
Step 3: from X^(l) construct the patch matrix: from the c-th channel of each sample X_i^(l), extract the b ∈ {1, 2, …, mn} feature blocks of size k×k, stack the blocks taken from the same position of all channels, subtract the mean, and stretch the result into a column vector with vec(·), where vec(·) denotes the operation of stretching a matrix into a column vector; the columns obtained from all positions and samples form the patch matrix;
Step 4: if the network is in the testing stage, jump to step 7; otherwise execute steps 5 and 6;
Step 5: compute the principal directions V^(l) of the patch matrix, where the i-th principal direction is the i-th eigenvector of the covariance matrix of the patch matrix, with corresponding eigenvalue λ_i, and λ_1 ≥ λ_2 ≥ …;
Step 6: from V^(l), obtain the bank of C_(l+1) (C_(l+1) ≤ k²·C_l) channel-dependent filters W^(l+1);
Step 7: project the patch matrix onto W^(l+1) and rewrite the elements of the projection result by rows;
Step 8: from the projection result compute the feature map X^(l+1): each row of the projection result is rearranged by columns into a matrix of size m×n, and the resulting m×n matrices are connected along the channel direction;
Step 9: let l = l + 1 and execute steps 3 to 8 above until l = L, where L denotes a preset maximum number of convolution layers;
Step 10: from X^(L) compute the pattern maps P = {P_(i,β)}, i = 1, …, N, β = 1, …, B, where P_(i,β) denotes the β ∈ {1, …, B}-th pattern map of the i-th sample, obtained by encoding T consecutive channels of X^(L): each of the T channels is binarized with the unit step function USF(·) (Unit Step Function), which compares its input with 0, and the binarized channels are combined with the binary weights 2^0, 2^1, …, 2^(T−1) into a single code map; T denotes the number of channels involved in the encoding of a single pattern map;
Step 11: extract the histogram H from the pattern maps P: H = [H_i], i = 1, …, N, where H_i = [H_(i,1), …, H_(i,B)]^T and H_(i,β) = Qhist(P_(i,β)); Qhist(P_(i,β)) divides the pattern map P_(i,β) into Q blocks and extracts a histogram from each block, each histogram using 2^T bins, i.e., for every block the frequencies with which the code values of the pattern map fall into the 2^T bins are counted;
Step 12: if the network is in the testing stage, set H_Te = H and jump to step 14; otherwise set H_Tr = H and execute step 13;
Step 13: switch the network to the testing stage, let l = 0, and let X^(0) be the interpolated test set, so that N = K: each test image Y_i is interpolated on every channel by bicubic interpolation so that the interpolated size is m×n; then execute steps 3 to 11 above;
Step 14: compute the metric matrix M = [M_(i,j)], i = 1, …, J, j = 1, …, K, where M_(i,j) is the chi-square distance between the i-th training histogram and the j-th test histogram:
M_(i,j) = Σ_(d=1…D) (H_i^Tr(d) − H_j^Te(d))² / (H_i^Tr(d) + H_j^Te(d)),
where D denotes the length of H_i^Tr and H_j^Te, and H_i^Tr(d) and H_j^Te(d) denote their d-th elements;
Step 15: compute the class labels Id = [Id_i], i = 1, …, K, of the samples in the test set Y: Id_i is determined by minIndx(M_i), where M_i denotes the i-th column vector of the metric matrix M and minIndx(·) returns the index of its smallest element; the class label of the training sample with that index is assigned to the i-th test image.
The technical conception of the invention is as follows: compared with a high-resolution image, a major problem of a low-resolution image is that far fewer discriminative features can be extracted from it. To recognize low-resolution images effectively, feature compensation is necessary. The existing PCANet method applies a variety of filters to deep-filter the low-resolution image and performs feature decomposition and feature extraction along different principal directions and at different levels, thereby enriching the discriminative features of the low-resolution image. However, the existing PCANet method does not consider the correlation of the feature map along the channel direction. To address this problem, the invention proposes a channel-dependent PCANet method: it effectively exploits the correlation of the feature map along the channel direction to compensate the output features of the low-resolution image, thereby effectively improving its discriminability.
The beneficial effect of the invention is mainly that the output features of a low-resolution image are effectively compensated, so that the recognition rate for low-resolution images is improved.
Drawings
Fig. 1 shows the feature-map extraction process of the channel-dependent PCANet according to the invention, in which the channel-dependent convolution (Channel Dependent Convolution, CDC) corresponds to steps 7 and 8 of the Disclosure; the encoding of the feature maps corresponds to step 10; and the block-histogram feature extraction from the pattern maps corresponds to step 11;
FIG. 2 is a classification process of the channel-dependent PCANet according to the present invention;
FIG. 3 is a low resolution test set and high resolution training set sample from an AR face database, where (a) is a test set I sample, (b) is a test set II sample, (c) is a test set III sample, and (d) is a training set sample;
FIG. 4 is a process for extracting feature blocks from the various channels of a feature map, where (a) is the original feature map, (b) is boundary zero padding, (c) is feature block selection, and (d) is the selected multi-channel feature block;
FIG. 5 (a) depicts how the vec(·) operator stretches a matrix into a column vector, and FIG. 5 (b) depicts how the blocks at the same position of all channels are stacked into a single column vector;
fig. 6 (a) is a one-dimensional illustration of a PCANet filter, and fig. 6 (b) is a one-dimensional illustration of a channel dependent PCANet filter;
fig. 7 (a) is a two-dimensional illustration of a PCANet filter, and fig. 7 (b) is a two-dimensional illustration of a channel dependent PCANet filter;
FIG. 8 is a generated feature map, wherein (a) is a feature map generated by the original PCANet; (b) is a feature map generated by channel dependent PCANet;
fig. 9 is a generated pattern diagram, where (a) is a pattern diagram generated by the original PCANet and (b) is a pattern diagram generated by the channel dependent PCANet.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to figs. 1 to 9, a low-resolution image recognition method based on the channel-dependent PCANet (Channel Independent PCANet, CIPCANet) comprises the following steps:
Step 1: select J images A = {A_1, …, A_J} as the training set, together with their corresponding class labels, and let Y = {Y_1, …, Y_K} be the set of images to be recognized, i.e., the test set, where A_i ∈ R^(m×n×C_0) and Y_j ∈ R^(μ×ν×C_0) denote real-valued images with C_0 ∈ {1, 3} channels, of size m×n and μ×ν respectively, with μ ≤ m and ν ≤ n. Specifically, C_0 = 1 denotes a grayscale image and C_0 = 3 an RGB image. Fig. 3 shows the high-resolution training sample set A and the low-resolution test sets Y from the AR face database;
Step 2: initialize the parameters and input data: set the stage flag to the training stage (the flag indicates whether the network is in the training stage or the testing stage); let l = 0, where l indicates the layer of the network at which the current input image or feature map sits; let X^(0) = A, so that N = J and X_i^(0) = A_i;
Step 3: from X^(l) construct the patch matrix: from the c-th channel of each sample X_i^(l), extract the b ∈ {1, 2, …, mn} feature blocks of size k×k, stack the blocks taken from the same position of all channels, subtract the mean, and stretch the result into a column vector with vec(·), where vec(·) denotes the operation of stretching a matrix into a column vector. Fig. 4 details the process of extracting feature blocks from the individual channels of a feature map; fig. 5(a) depicts how vec(·) stretches a matrix into a column vector, and fig. 5(b) depicts how the blocks at the same position of all channels are stacked into a single column;
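As an illustration, the block extraction of step 3 can be sketched in numpy as follows. This is a minimal sketch under our own assumptions: the function name, the zero padding of the border (fig. 4(b)), and the patch ordering are choices of ours, not prescribed by the patent.

```python
import numpy as np

def extract_patches(x, k):
    """Extract one k-by-k block per pixel position (zero-padded at the
    border) from a feature map x of shape (m, n, C), stacking the blocks
    of all channels at the same position, removing the mean, and
    collecting the vectorized results as columns -- a sketch of step 3."""
    m, n, C = x.shape
    p = k // 2
    padded = np.pad(x, ((p, p), (p, p), (0, 0)))  # boundary zero padding
    cols = []
    for i in range(m):
        for j in range(n):
            block = padded[i:i + k, j:j + k, :]  # same area, all channels
            v = block.reshape(-1)                # vec(.): stretch to a column
            cols.append(v - v.mean())            # remove the mean
    return np.stack(cols, axis=1)                # shape (k*k*C, m*n)
```

Each column has length k²·C_l, which matches the bound C_(l+1) ≤ k²·C_l on the number of filters in step 6.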
Step 4: if the network is in the testing stage, jump to step 7; otherwise execute steps 5 and 6;
Step 5: compute the principal directions V^(l) of the patch matrix, where the i-th principal direction is the i-th eigenvector of the covariance matrix of the patch matrix, with corresponding eigenvalue λ_i, and λ_1 ≥ λ_2 ≥ …;
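The principal directions of step 5 amount to an eigendecomposition of the patch covariance matrix with eigenvalues sorted in decreasing order; a minimal numpy sketch, with naming of our own choosing:

```python
import numpy as np

def principal_directions(Xbar):
    """Eigenvectors of the covariance matrix Xbar @ Xbar.T, sorted by
    decreasing eigenvalue -- a sketch of step 5."""
    cov = Xbar @ Xbar.T
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]       # so that lambda_1 >= lambda_2 >= ...
    return eigvecs[:, order], eigvals[order]
```

Step 6 then keeps the leading C_(l+1) columns of the returned matrix as the channel-dependent filter bank W^(l+1).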
Step 6: from V^(l), obtain the bank of C_(l+1) (C_(l+1) ≤ k²·C_l) channel-dependent filters W^(l+1). Figs. 6(b) and 7(b) give the one- and two-dimensional illustrations of a channel-dependent filter bank W^(2) (trained on the training set A of fig. 3); compared with the filter bank of the second convolution layer of the original PCANet (channel-independent, figs. 6(a) and 7(a)), the filter bank given by the invention is visibly richer and more diverse;
Step 7: project the patch matrix onto W^(l+1) and rewrite the elements of the projection result by rows;
Step 8: from the projection result compute the feature map X^(l+1): each row of the projection result is rearranged by columns into a matrix of size m×n, and the resulting m×n matrices are connected along the channel direction. Fig. 8(b) shows a feature map generated by the channel-dependent PCANet; compared with a feature map generated by the original PCANet (fig. 8(a)), it is richer and more diverse;
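Steps 7 and 8 together amount to a matrix projection followed by a reshape back to image geometry. A hedged numpy sketch (the names and the patch ordering, which must match the ordering used when the patch matrix was built, are our assumptions):

```python
import numpy as np

def project_and_reshape(Xbar, W, m, n):
    """Project the patch matrix Xbar of shape (k*k*C_l, m*n) onto the
    filter bank W of shape (k*k*C_l, C_next), then rearrange each
    filter's responses back into an m-by-n map and connect the maps
    along the channel direction -- a sketch of steps 7 and 8."""
    Y = W.T @ Xbar                   # (C_next, m*n): one response per filter
    maps = Y.reshape(-1, m, n)       # one m x n map per filter
    return np.moveaxis(maps, 0, -1)  # feature map of shape (m, n, C_next)
```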
Step 9: let l = l + 1 and execute steps 3 to 8 above until l = L, where L denotes a preset maximum number of convolution layers; typically, L = 2 or L = 3;
Step 10: from X^(L) compute the pattern maps P = {P_(i,β)}, i = 1, …, N, β = 1, …, B, where P_(i,β) denotes the β ∈ {1, …, B}-th pattern map of the i-th sample, obtained by encoding T consecutive channels of X^(L): each of the T channels is binarized with the unit step function USF(·) (Unit Step Function), which compares its input with 0, and the binarized channels are combined with the binary weights 2^0, 2^1, …, 2^(T−1) into a single code map; T denotes the number of channels involved in the encoding of a single pattern map (T is typically set to 8);
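The encoding of step 10 can be sketched as binary hashing of groups of T feature channels; the grouping into consecutive channels and the weight ordering are assumptions of ours, consistent with the 2^T-bin histograms of step 11:

```python
import numpy as np

def pattern_maps(X, T=8):
    """Encode every group of T channels of the feature map X (m, n, C)
    into one pattern map with code values in [0, 2**T) -- a sketch of
    step 10."""
    m, n, C = X.shape
    B = C // T                              # number of pattern maps
    P = np.zeros((m, n, B), dtype=np.int64)
    for b in range(B):
        for t in range(T):
            bit = X[:, :, b * T + t] > 0    # USF: binarize against 0
            P[:, :, b] += (2 ** t) * bit    # binary weight for channel t
    return P
```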
Fig. 9(b) shows a pattern map generated by the channel-dependent PCANet; compared with a pattern map generated by the original PCANet (fig. 9(a)), its features are richer and more diverse;
Step 11: extract the histogram H from the pattern maps P: H = [H_i], i = 1, …, N, where H_i = [H_(i,1), …, H_(i,B)]^T and H_(i,β) = Qhist(P_(i,β)); Qhist(P_(i,β)) divides the pattern map P_(i,β) into Q blocks and extracts a histogram from each block, each histogram using 2^T bins, i.e., for every block the frequencies with which the code values of the pattern map fall into the 2^T bins are counted;
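The Qhist(·) operator of step 11 can be sketched for a single pattern map as follows; splitting into Q horizontal strips is an assumption of ours, since the patent only specifies Q blocks and 2^T bins per block:

```python
import numpy as np

def block_histogram(P, Q, T=8):
    """Split one pattern map P (m x n, integer codes in [0, 2**T)) into
    Q blocks and concatenate a 2**T-bin histogram per block -- a sketch
    of Qhist in step 11."""
    strips = np.array_split(P, Q, axis=0)                # Q blocks
    hists = [np.bincount(s.reshape(-1), minlength=2 ** T)
             for s in strips]                            # 2**T bins each
    return np.concatenate(hists)                         # length Q * 2**T
```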
Step 12: if the network is in the testing stage, set H_Te = H and jump to step 14; otherwise set H_Tr = H and execute step 13;
Step 13: switch the network to the testing stage, let l = 0, and let X^(0) be the interpolated test set, so that N = K: each test image Y_i is interpolated on every channel by bicubic interpolation so that the interpolated size is m×n; then execute steps 3 to 11 above;
Step 14: compute the metric matrix M = [M_(i,j)], i = 1, …, J, j = 1, …, K, where M_(i,j) is the chi-square distance between the i-th training histogram and the j-th test histogram:
M_(i,j) = Σ_(d=1…D) (H_i^Tr(d) − H_j^Te(d))² / (H_i^Tr(d) + H_j^Te(d)),
where D denotes the length of H_i^Tr and H_j^Te, and H_i^Tr(d) and H_j^Te(d) denote their d-th elements;
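The chi-square distance of step 14 for one pair of histograms can be written directly in numpy; the small epsilon guarding bins where both histograms are zero is a choice of ours, not part of the patent's formula:

```python
import numpy as np

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance sum_d (h1[d]-h2[d])^2 / (h1[d]+h2[d]), as in
    step 14; eps avoids division by zero in empty bins (our choice)."""
    num = (h1 - h2) ** 2
    den = h1 + h2
    return float(np.sum(num / (den + eps)))
```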
Step 15: compute the class labels Id = [Id_i], i = 1, …, K, of the samples in the test set Y: Id_i is determined by minIndx(M_i), where M_i denotes the i-th column vector of the metric matrix M and minIndx(·) returns the index of its smallest element; the class label of the training sample with that index is assigned to the i-th test image.
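The nearest-neighbor assignment of step 15 reduces to an argmin per column of the metric matrix; a minimal sketch with our own naming:

```python
import numpy as np

def classify(M, train_labels):
    """For each test sample (column j of the metric matrix M), return
    the label of the training sample at the smallest distance;
    minIndx corresponds to argmin along the training axis (step 15)."""
    idx = np.argmin(M, axis=0)  # index of the closest training sample
    return [train_labels[i] for i in idx]
```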
Table 1 compares, for the training set and test sets of fig. 3, the recognition rate of CIPCANet with those of existing methods (VGG-Face, LCNN, PCANet). CIPCANet achieves the best recognition performance, and its advantage is all the more pronounced when the resolution of the image to be recognized is very low.
Table 1.
Claims (1)
1. A method for identifying a low resolution image based on a channel dependent PCANet, the method comprising the steps of:
Step 1: select J images A = {A_1, …, A_J} as the training set, together with their corresponding class labels, and let Y = {Y_1, …, Y_K} be the set of images to be recognized, i.e., the test set, where A_i ∈ R^(m×n×C_0) and Y_j ∈ R^(μ×ν×C_0) denote real-valued images with C_0 ∈ {1, 3} channels, of size m×n and μ×ν respectively, with μ ≤ m and ν ≤ n;
Step 2: initialize the parameters and input data: set the stage flag to the training stage (the flag indicates whether the network is in the training stage or the testing stage); let l = 0, where l indicates the layer of the network at which the current input image or feature map sits; let X^(0) = A, so that N = J and X_i^(0) = A_i;
Step 3: from X^(l) construct the patch matrix: from the c-th channel of each sample X_i^(l), extract the b ∈ {1, 2, …, mn} feature blocks of size k×k, stack the blocks taken from the same position of all channels, subtract the mean, and stretch the result into a column vector with vec(·), where vec(·) denotes the operation of stretching a matrix into a column vector;
Step 4: if the network is in the testing stage, jump to step 7; otherwise execute steps 5 and 6;
Step 5: compute the principal directions V^(l) of the patch matrix, where the i-th principal direction is the i-th eigenvector of the covariance matrix of the patch matrix, with corresponding eigenvalue λ_i, and λ_1 ≥ λ_2 ≥ …;
Step 6: from V^(l), obtain the bank of C_(l+1) (C_(l+1) ≤ k²·C_l) channel-dependent filters W^(l+1);
Step 7: project the patch matrix onto W^(l+1) and rewrite the elements of the projection result by rows;
Step 8: from the projection result compute the feature map X^(l+1): each row of the projection result is rearranged by columns into a matrix of size m×n, and the resulting m×n matrices are connected along the channel direction;
Step 9: let l = l + 1 and execute steps 3 to 8 above until l = L, where L denotes a preset maximum number of convolution layers;
Step 10: from X^(L) compute the pattern maps P = {P_(i,β)}, i = 1, …, N, β = 1, …, B, where P_(i,β) denotes the β ∈ {1, …, B}-th pattern map of the i-th sample, obtained by encoding T consecutive channels of X^(L): each of the T channels is binarized with the unit step function USF(·), which compares its input with 0, and the binarized channels are combined with the binary weights 2^0, 2^1, …, 2^(T−1) into a single code map; T denotes the number of channels involved in the encoding of a single pattern map;
Step 11: extract the histogram H from the pattern maps P: H = [H_i], i = 1, …, N, where H_i = [H_(i,1), …, H_(i,B)]^T and H_(i,β) = Qhist(P_(i,β)); Qhist(P_(i,β)) divides the pattern map P_(i,β) into Q blocks and extracts a histogram from each block, each histogram using 2^T bins, i.e., for every block the frequencies with which the code values of the pattern map fall into the 2^T bins are counted;
Step 12: if the network is in the testing stage, set H_Te = H and jump to step 14; otherwise set H_Tr = H and execute step 13;
Step 13: switch the network to the testing stage, let l = 0, and let X^(0) be the interpolated test set, so that N = K: each test image Y_i is interpolated on every channel by bicubic interpolation so that the interpolated size is m×n; then execute steps 3 to 11 above;
Step 14: compute the metric matrix M = [M_(i,j)], i = 1, …, J, j = 1, …, K, where M_(i,j) is the chi-square distance between the i-th training histogram and the j-th test histogram:
M_(i,j) = Σ_(d=1…D) (H_i^Tr(d) − H_j^Te(d))² / (H_i^Tr(d) + H_j^Te(d)),
where D denotes the length of H_i^Tr and H_j^Te, and H_i^Tr(d) and H_j^Te(d) denote their d-th elements;
Step 15: compute the class labels Id = [Id_i], i = 1, …, K, of the samples in the test set Y: Id_i is determined by minIndx(M_i), where M_i denotes the i-th column vector of the metric matrix M and minIndx(·) returns the index of its smallest element; the class label of the training sample with that index is assigned to the i-th test image.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010147013.7A | 2020-03-05 | 2020-03-05 | Low-resolution image recognition method based on channel correlation PCANet
Publications (2)

Publication Number | Publication Date
---|---
CN111488906A | 2020-08-04
CN111488906B | 2023-07-25
Family

ID=71794381

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010147013.7A | Low-resolution image recognition method based on channel correlation PCANet | 2020-03-05 | 2020-03-05

Country Status (1)

Country | Link
---|---
CN | CN111488906B (en)
Citations (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN105718889A * | 2016-01-21 | 2016-06-29 | Jiangnan University | Human face identity recognition method based on GB(2D)2PCANet depth convolution model
CN107133579A * | 2017-04-20 | 2017-09-05 | Jiangnan University | Face identification method based on CSGF(2D)2PCANet convolutional networks
WO2018221863A1 * | 2017-05-31 | 2018-12-06 | Samsung Electronics Co., Ltd. | Method and device for processing multi-channel feature map images
US10527699B1 * | 2018-08-01 | 2020-01-07 | The Board of Trustees of the Leland Stanford Junior University | Unsupervised deep learning for multi-channel MRI model estimation
Non-Patent Citations (1)

Title
---
Cascaded pedestrian detection based on improved channel features of ACF and PCANet; Huang Peng; Yu Fengqin; Computer Engineering (No. 11); full text *
Also Published As

Publication number | Publication date
---|---
CN111488906A | 2020-08-04
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant