CN116051948A - Fine-grained image recognition method based on attention interaction and counterfactual attention - Google Patents
Fine-grained image recognition method based on attention interaction and counterfactual attention
- Publication number: CN116051948A
- Application number: CN202310212744.9A
- Authority
- CN
- China
- Prior art keywords
- attention
- feature
- map
- representing
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V10/806 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82 — Arrangements for image or video recognition or understanding using neural networks
- Y02T10/40 — Engine management systems
Abstract
The invention belongs to the technical field of image processing and discloses a fine-grained image recognition method based on attention interaction and counterfactual attention. After image features are extracted, the spatial distribution of each object part is learned through a spatial attention mechanism; complementary features are captured by a self-channel feature-interaction fusion module and fused with the key features to improve recognition performance; a counterfactual region is located by an enhanced counterfactual attention mechanism module, the difference between the predictions of the key discriminative region and of the counterfactual region is computed, and this difference serves as a strong supervision signal for attention, improving the network's ability to learn effective attention. The proposed method can effectively improve the recognition accuracy of fine-grained images.
Description
Technical Field
The invention belongs to the technical field of image processing, relates to deep learning and fine-grained image recognition technology, and particularly relates to a fine-grained image recognition method based on attention interaction and counterfactual attention.
Background
Fine-grained image recognition, also referred to as sub-category image recognition, differs from traditional image recognition in that it aims to distinguish between different sub-categories belonging to one category. The sub-categories are often highly similar, and because of interference factors such as pose, illumination, occlusion and background, fine-grained images share similar appearance and shape, exhibiting small inter-class differences and large intra-class differences. Given the high accuracy demanded of image recognition in practice, fine-grained image recognition has become an important research direction in computer vision.
Early fine-grained image recognition methods addressed this problem with human-annotated bounding boxes/region annotations (e.g., bird head, body) for region-based feature representation. However, the labeling process requires specialized knowledge and a great deal of annotation time. A strongly supervised approach that spends so much time and so many resources on annotation is therefore not optimal for practical fine-grained recognition tasks. To address this, research has focused on weakly supervised methods that use only class labels, learning discriminative features by locating different parts. Current research on fine-grained image recognition concentrates on enlarging and cropping locally distinguishable regions. Specifically, an attention branch network is added to the feature extraction network to learn attention weights: after the feature extraction network extracts features from the input image, the feature map is fed to the attention branch to obtain an attention feature map, which is fused with the original feature map to strengthen key features; the key regions are then enlarged and cropped, enhancing the fine-grained features most useful for the recognition task.
This common approach of enlarging and cropping key regions with an attention mechanism, while achieving some results, still has several key problems. Existing fine-grained recognition methods mainly attach weights to the features of different channels through an attention mechanism, strengthening the most discriminative channels to locate key regions while ignoring the complementarity among channels. Moreover, the attention module is supervised only by the loss function; it lacks a strong supervision signal to guide the learning process and ignores the causal relationship between the prediction result and the attention.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a fine-grained image recognition method based on attention interaction and counterfactual attention, which optimizes the attention mechanism by maximizing the difference between counterfactual attention and factual attention, and makes effective joint use of discriminative features and complementary information to improve recognition accuracy. Specifically: (1) To address the problem that existing methods ignore finer complementary information and fail to let discriminative and complementary features jointly participate in recognition, a self-channel feature-interaction fusion module is proposed; the module models the interaction between different channels of the image, captures for each channel its complementary features, and fuses the complementary features with the key features to obtain fused features. A ranking loss function is then introduced so that key features and fused features effectively participate in recognition together, improving recognition accuracy. (2) To address the problem that the attention mechanism lacks a strong supervision signal to guide learning and ignores the causal relationship between prediction and attention, the invention designs an enhanced counterfactual attention mechanism module, which quantifies the quality of attention by comparing the effects of the fact (the learned attention) and the counterfactual (irrelevant attention) on the final prediction; maximizing this difference drives the network to learn more effective attention, reduces the one-sided influence of the training set, and improves recognition accuracy.
In order to solve the technical problems, the invention adopts the following technical scheme:
A fine-grained image recognition method based on attention interaction and counterfactual attention, comprising the following steps:
step 1: feature extraction:
Input the image I into a feature extraction network to obtain a feature map $F \in \mathbb{R}^{C \times H \times W}$, where H, W and C are respectively the height, width and number of channels of the feature map.
Step 2: the spatial distribution of each part of the object is learned through a spatial attention mechanism:
The feature map F obtained in step 1 is passed through a spatial attention mechanism to learn the spatial distribution of each object part, expressed as $A \in \mathbb{R}^{M \times H \times W}$, where M denotes the number of attention maps. The attention map A can be calculated as:

$A = f_{att}(F) = \{A_1, A_2, \dots, A_M\}$

where $A_m$ denotes an attention map covering one local region, and $f_{att}$ represents the spatial attention mechanism, consisting of a convolution layer and a ReLU activation function.
Step 3: capturing complementary features through a self-channel feature interaction fusion module and fusing the complementary features with key features:
inputting the attention map A obtained in the step 2 into a self-channel feature interaction fusion module, extracting complementary features by exploring channel correlation in the image, and fusing the complementary features with key features; the specific method comprises the following steps:
First reshape the attention map A into a feature matrix $X \in \mathbb{R}^{M \times HW}$, whose i-th row $X_i$ is the flattened attention map $A_i$. Then compute the bilinear matrix $B = X X^{T} \in \mathbb{R}^{M \times M}$ from $X$ and its transpose $X^{T}$; prefixing the bilinear matrix with a negative sign and applying the softmax function yields the weight matrix $W$:

$W_{ij} = \dfrac{\exp(-B_{ij})}{\sum_{k=1}^{M}\exp(-B_{ik})}$

where $X^{T}$ denotes the transpose of $X$ and $W_{ij}$ represents the relationship between channel i and channel j.

Multiplying the weight matrix W by the feature matrix X yields a feature matrix containing the complementary features: $X_{comp} = W X$.

The feature matrix $X_{comp}$ is reshaped into an attention map $A_{comp}$ containing the complementary features and fused with the attention map A to obtain the fused attention map $A_{fuse}$.
Step 4: construct a counterfactual attention map from the attention map A obtained in step 2:

Mask the key regions in the attention map A to obtain a mask map $A_{mask}$, in which the positions of the key regions have been suppressed; $A_{mask}$ is then used to construct the counterfactual attention map $\bar{A}$.
Step 5: converting the feature map into feature vectors:
Convert the attention map, the fused attention map and the counterfactual attention map obtained in steps 2, 3 and 4 into feature matrices, respectively; after the corresponding feature matrices are obtained, each is converted into a feature vector through a fully connected layer.
Step 6: calculating loss:
Calculate the loss from the feature vectors obtained in step 5 and optimize the model.

Repeat training steps 2-6.
Further, in step 2, the feature map F obtained in step 1 is input into an attention mechanism module to obtain the attention map A; the attention mechanism module comprises a channel attention module and a spatial attention module. The specific steps are as follows:

First, the feature map F is input into the channel attention module; global average pooling gives the channel descriptor

$z_c = \dfrac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} F_c(i,j)$

where $F_c$ is the feature map of the c-th channel, $z_c$ is the feature descriptor of the c-th channel, and z is the vector of descriptors over all channels.

The feature vector z is weighted to obtain the weight vector s:

$s = \sigma\big(W_2\,\delta(W_1 z)\big)$

where $\delta$ is the ReLU activation function, $\sigma$ is the sigmoid function, $W_1 \in \mathbb{R}^{(C/r)\times C}$ and $W_2 \in \mathbb{R}^{C\times (C/r)}$ are learnable parameters, and r is the channel dimension-reduction hyperparameter.

After the weight vector s is obtained, the feature map F and the weight vector s are fused to obtain the channel attention map:

$F' = s \odot F$

where $\odot$ denotes channel-wise multiplication of the weight vector s with the feature map F.

The channel attention map $F'$ is input into the spatial attention module, which captures attention in the spatial dimension and yields the attention map A:

$A = f_{att}(F')$

where $f_{att}$ comprises a 1 × 1 convolution kernel, a normalization layer and a ReLU activation function; through $f_{att}$ an attention map A covering both the channel and spatial dimensions is obtained.
Further, in step 4, the specific steps for constructing the counterfactual attention map are as follows:

$A_{mask}(i,j) = \begin{cases} \alpha \cdot A(i,j), & A(i,j) > \theta \\ A(i,j), & A(i,j) \le \theta \end{cases}$

where $A(i,j)$ denotes the value of the attention map A at spatial position $(i,j)$ and $\theta$ is a set threshold: if $A(i,j)$ is greater than the threshold $\theta$, the value at that position is multiplied by the suppression factor $\alpha$ (a hyperparameter) so that it is masked; if $A(i,j)$ is less than or equal to $\theta$, the value at that position is unchanged.

This yields the mask map $A_{mask}$, in which the positions of the key regions have been suppressed; $A_{mask}$ is then used to construct the counterfactual attention map $\bar{A}$:

$\bar{A} = \mathrm{random}(A) \odot A_{mask}$

where $\mathrm{random}(A)$ generates a random feature map (random_map) of the same shape as the attention map A; in random_map, key and non-key regions are random. Because the key regions of $A_{mask}$ are suppressed, random_map can only take effect on the non-key regions, so the "key" regions of the counterfactual attention map $\bar{A}$ are in fact irrelevant regions.
Further, in step 5, the specific steps for converting the feature maps into feature vectors are as follows:

The attention map, the fused attention map and the counterfactual attention map obtained in steps 2, 3 and 4 are each converted into a feature matrix:

$\mathrm{feature\_matrix} = \mathrm{norm}(\mathrm{einsum}(A, F))$, and likewise for $A_{fuse}$ and $\bar{A}$,

where feature_matrix is the feature matrix of the attention map A, feature_complement_matrix is the feature matrix of the fused attention map $A_{fuse}$, feature_counterfactual_matrix is the feature matrix of the counterfactual attention map $\bar{A}$, $\mathrm{norm}(\cdot)$ is the normalization operation, and $\mathrm{einsum}(\cdot)$ multiplies each attention map element-wise with the feature map F and pools the result into a feature matrix.

After the corresponding feature matrices are obtained, each is converted into a feature vector through a fully connected layer: $v_A$ denotes the feature vector of the attention map, $v_{fuse}$ the feature vector of the fused attention map, and $v_{diff}$ the feature vector of the difference between the factual and counterfactual attention maps.
In step 6, the loss function is divided into two parts to optimize the model. First, a ranking loss is introduced so that the key features and the fused features jointly and effectively participate in recognition; its calculation formula is:

$L_{rank} = \max\!\big(0,\; p_A(y) - p_{fuse}(y) + \epsilon\big)$

where $\epsilon$ is a hyperparameter and $L_{rank}$ denotes the ranking loss; through the ranking loss the model is driven to give the fused prediction priority, i.e. $p_{fuse}(y) \ge p_A(y) + \epsilon$.

Second, a cross-entropy loss function is introduced to optimize the model; its calculation formula is:

$L_{ce} = L(y, p_A) + L(y, p_{fuse}) + L(y, p_A - p_{\bar{A}})$

where $L(\cdot,\cdot)$ is the cross-entropy loss function, the three terms are respectively the loss of the attention map, the loss of the fused attention map, and the loss of the difference between the factual and counterfactual attention predictions, and $y^{T}$ denotes the transpose of the true label vector.

The above loss functions are combined into the overall loss function L:

$L = L_{ce} + L_{rank}$
compared with the prior art, the invention has the advantages that:
(1) The invention uses an attention mechanism to attend to the key regions and designs a self-channel feature-interaction fusion module to attend to the complementary regions; the module models the correlation among the channels of the feature map to extract complementary features and fuses them with the key features, producing a fused attention map that contains both key and complementary features, so as to improve recognition performance;

(2) The invention designs an enhanced counterfactual attention mechanism module to locate the counterfactual regions, computes the difference between the prediction results of the key discriminative regions and of the counterfactual regions, and uses that difference as a strong supervision signal for attention; this signal guides the network model to learn more effective attention, improving both the ability to learn effective attention and the recognition accuracy, which the prior art does not consider.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a system architecture diagram of the present invention;
FIG. 3 is a schematic diagram of a self-channel feature interaction fusion module according to the present invention;
FIG. 4 is a schematic diagram of the enhanced counterfactual attention mechanism of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific examples.
Referring to fig. 1 and 2, the present embodiment provides a fine-grained image recognition method based on attention interaction and counterfactual attention, which includes the following steps:
step 1: feature extraction:
Input the image I into a feature extraction network to obtain a feature map $F \in \mathbb{R}^{C \times H \times W}$, where H, W and C are respectively the height, width and number of channels of the feature map.
Step 2: the spatial distribution of each part of the object is learned through a spatial attention mechanism:
The feature map F obtained in step 1 is passed through a spatial attention mechanism to learn the spatial distribution of each object part, expressed as $A \in \mathbb{R}^{M \times H \times W}$, where M denotes the number of attention maps. The attention map A can be calculated as:

$A = f_{att}(F) = \{A_1, A_2, \dots, A_M\}$

where $A_m$ denotes an attention map covering one local region, and $f_{att}$ represents the spatial attention mechanism, consisting of a convolution layer and a ReLU activation function.
In a preferred embodiment, in step 2, the feature map F obtained in step 1 is input into an attention mechanism module to obtain the attention map; the attention mechanism module comprises a channel attention mechanism module and a spatial attention mechanism module. The specific steps are as follows:

First, the feature map F is input into the channel attention mechanism module; global average pooling gives the channel descriptor

$z_c = \dfrac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} F_c(i,j)$

where $F_c$ is the feature map of the c-th channel, $z_c$ is the feature descriptor of the c-th channel, and z is the vector of descriptors over all channels.

The feature vector z is weighted to obtain the weight vector s:

$s = \sigma\big(W_2\,\delta(W_1 z)\big)$

where $\delta$ is the ReLU activation function, $\sigma$ is the sigmoid function, $W_1 \in \mathbb{R}^{(C/r)\times C}$ and $W_2 \in \mathbb{R}^{C\times (C/r)}$ are learnable parameters, and r is the channel dimension-reduction hyperparameter.

After the weight vector s is obtained, the feature map F and the weight vector s are fused to obtain the channel attention map:

$F' = s \odot F$

where $\odot$ denotes channel-wise multiplication of the weight vector s with the feature map F.

The channel attention map $F'$ is input into the spatial attention module, which captures attention in the spatial dimension and yields the attention map A:

$A = f_{att}(F')$

where $f_{att}$ comprises a 1 × 1 convolution kernel, a normalization layer and a ReLU activation function; through $f_{att}$ an attention map A covering both the channel and spatial dimensions is obtained.
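The channel-then-spatial attention pipeline described above can be sketched numerically. The following is a minimal NumPy illustration, not the patented implementation: the squeeze-and-excitation style weighting (including the sigmoid), the weight shapes, and realizing the 1 × 1 convolution as a channel-wise matrix product are assumptions made for the sake of a runnable example; in practice the parameters would be learned in a deep-learning framework.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(F, W1, W2, conv1x1):
    """Channel attention (squeeze-excite style) followed by spatial attention.

    F       : feature map, shape (C, H, W)
    W1, W2  : weighting parameters, shapes (C//r, C) and (C, C//r)
    conv1x1 : 1x1 convolution mapping C channels to M attention maps, (M, C)
    All parameter names are illustrative, not taken from the patent.
    """
    C = F.shape[0]
    z = F.reshape(C, -1).mean(axis=1)       # global average pool -> z, (C,)
    s = sigmoid(W2 @ relu(W1 @ z))          # weight vector s, (C,)
    F_ca = F * s[:, None, None]             # channel attention map
    # a 1x1 conv over channels is a matmul; ReLU gives the attention maps
    A = relu(np.einsum('mc,chw->mhw', conv1x1, F_ca))
    return A

# Tiny demo with random (untrained) parameters; shapes are illustrative
rng = np.random.default_rng(0)
F = rng.random((8, 6, 6))                   # C=8, H=W=6
A = channel_spatial_attention(
    F,
    W1=rng.standard_normal((2, 8)),         # reduction r=4 -> C//r = 2
    W2=rng.standard_normal((8, 2)),
    conv1x1=rng.standard_normal((3, 8)),    # M=3 attention maps
)
print(A.shape)                              # (3, 6, 6)
```

The final ReLU keeps the attention maps non-negative, matching the role A plays in the masking and pooling steps below.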
Step 3: capturing complementary features through a self-channel feature interaction fusion module and fusing the complementary features with key features:
the attention map A obtained in the step 2 is input to a self-channel feature interaction fusion module, and the channel correlation in the image is explored to extract fine complementary features, and the complementary features are fused with key features.
In combination with the self-channel feature interaction fusion module shown in fig. 3, the specific method is as follows:
First reshape the attention map A into a feature matrix $X \in \mathbb{R}^{M \times HW}$, whose i-th row $X_i$ is the flattened attention map $A_i$. Then compute the bilinear matrix $B = X X^{T} \in \mathbb{R}^{M \times M}$ from $X$ and its transpose $X^{T}$; prefixing the bilinear matrix with a negative sign and applying the softmax function yields the weight matrix $W$:

$W_{ij} = \dfrac{\exp(-B_{ij})}{\sum_{k=1}^{M}\exp(-B_{ik})}$

where $X^{T}$ denotes the transpose of $X$ and $W_{ij}$ represents the relationship between channel i and channel j. According to the definition of the weight matrix W, channels with larger weights tend to be semantically complementary to $X_i$. For example, if $X_i$ focuses on a bird's head, channels that highlight complementary parts, such as the wings, receive larger weights, while channels that also highlight the head receive smaller weights.

Multiplying the weight matrix W by the feature matrix X yields a feature matrix containing the complementary features: $X_{comp} = W X$.

The feature matrix $X_{comp}$ is reshaped into an attention map $A_{comp}$ containing the complementary features and fused with the attention map A to obtain the fused attention map $A_{fuse}$.
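A minimal NumPy sketch of the self-channel interaction just described: the reshaping, the negated bilinear matrix, the softmax weighting and the final fusion. Fusing by element-wise addition is an assumption; the patent states only that the complementary map is fused with A.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_channel_interaction_fusion(A):
    """A: attention maps, shape (M, H, W). Returns the fused attention map."""
    M, H, W = A.shape
    X = A.reshape(M, -1)          # feature matrix X, one row per channel
    B = X @ X.T                   # bilinear matrix, (M, M)
    Wm = softmax(-B, axis=1)      # negative sign: dissimilar (complementary)
                                  # channels receive the larger weights
    X_comp = Wm @ X               # complementary features
    A_comp = X_comp.reshape(M, H, W)
    return A + A_comp             # fusion (assumed: element-wise addition)

rng = np.random.default_rng(1)
A = rng.random((4, 6, 6))
A_fuse = self_channel_interaction_fusion(A)
print(A_fuse.shape)               # (4, 6, 6)
```

Since each row of the softmax weight matrix sums to one and A is non-negative, the complementary map is non-negative, so fusion can only reinforce the original attention here.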
Step 4: construct a counterfactual attention map from the attention map A obtained in step 2:

Mask the key regions in the attention map A to obtain a mask map $A_{mask}$, in which the positions of the key regions have been suppressed; $A_{mask}$ is then used to construct the counterfactual attention map $\bar{A}$.
With reference to the enhanced counterfactual attention mechanism module shown in fig. 4, the specific steps are as follows:

$A_{mask}(i,j) = \begin{cases} \alpha \cdot A(i,j), & A(i,j) > \theta \\ A(i,j), & A(i,j) \le \theta \end{cases}$

where $A(i,j)$ denotes the value of the attention map A at spatial position $(i,j)$ and $\theta$ is a set threshold: if $A(i,j)$ is greater than the threshold $\theta$, the value at that position is multiplied by the suppression factor $\alpha$ (a hyperparameter) so that it is masked; if $A(i,j)$ is less than or equal to $\theta$, the value at that position is unchanged.

This yields the mask map $A_{mask}$, in which the positions of the key regions have been suppressed; $A_{mask}$ is then used to construct the counterfactual attention map $\bar{A}$:

$\bar{A} = \mathrm{random}(A) \odot A_{mask}$

where $\mathrm{random}(A)$ generates a random feature map (random_map) of the same shape as the attention map A; in random_map, key and non-key regions are random. Because the key regions of $A_{mask}$ are suppressed, random_map can only take effect on the non-key regions, so the "key" regions of the counterfactual attention map $\bar{A}$ are in fact irrelevant regions.
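The masking-plus-randomization construction above can be sketched as follows. The threshold and suppression-factor values are illustrative, and the random map here is uniform noise; these choices are assumptions, not values given in the patent.

```python
import numpy as np

def counterfactual_attention(A, theta=0.5, alpha=0.1, seed=0):
    """A: attention maps, shape (M, H, W).

    Values above the threshold theta are suppressed by the factor alpha
    (masking the key regions); the mask is then multiplied by a random
    map so that only irrelevant regions remain 'attended'."""
    A_mask = np.where(A > theta, alpha * A, A)        # mask map
    random_map = np.random.default_rng(seed).random(A.shape)
    return random_map * A_mask                        # counterfactual map

rng = np.random.default_rng(2)
A = rng.random((3, 6, 6))
A_cf = counterfactual_attention(A)
print(A_cf.shape)                                     # (3, 6, 6)
```

Because alpha < 1 and the random map lies in [0, 1), the counterfactual map is everywhere no larger than the original attention, and the formerly key regions are strongly attenuated.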
Step 5: converting the feature map into feature vectors:
The attention map, the fused attention map and the counterfactual attention map obtained in steps 2, 3 and 4 are each converted into a feature matrix:

$\mathrm{feature\_matrix} = \mathrm{norm}(\mathrm{einsum}(A, F))$, and likewise for $A_{fuse}$ and $\bar{A}$,

where feature_matrix is the feature matrix of the attention map A, feature_complement_matrix is the feature matrix of the fused attention map $A_{fuse}$, feature_counterfactual_matrix is the feature matrix of the counterfactual attention map $\bar{A}$, $\mathrm{norm}(\cdot)$ is the normalization operation, and $\mathrm{einsum}(\cdot)$ multiplies each attention map element-wise with the feature map F and pools the result into a feature matrix.

After the corresponding feature matrices are obtained, each is converted into a feature vector through a fully connected layer: $v_A$ denotes the feature vector of the attention map, $v_{fuse}$ the feature vector of the fused attention map, and $v_{diff}$ the feature vector of the difference between the factual and counterfactual attention maps.
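The attention-pooling and fully connected projection can be sketched as below. The einsum contraction and the per-row L2 normalization are plausible readings of this step rather than the patent's exact operations, and the FC weights are random stand-ins for learned parameters.

```python
import numpy as np

def attention_pooling(A, F, fc):
    """A: attention maps (M, H, W); F: feature map (C, H, W);
    fc: fully connected weights (D, M*C). Returns a feature vector (D,)."""
    fm = np.einsum('mhw,chw->mc', A, F)     # element-wise product + pooling
    fm = fm / (np.linalg.norm(fm, axis=1, keepdims=True) + 1e-8)  # normalize
    return fc @ fm.reshape(-1)              # FC layer -> feature vector

rng = np.random.default_rng(3)
A = rng.random((3, 6, 6))                   # M=3 attention maps
F = rng.random((8, 6, 6))                   # C=8 feature channels
fc = rng.standard_normal((16, 3 * 8))       # hypothetical output dim D=16
v = attention_pooling(A, F, fc)
print(v.shape)                              # (16,)
```

The same routine would be applied to A, to the fused map and to the counterfactual map, yielding the three vectors used by the losses in step 6.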
Step 6: calculating loss:
Calculate the loss from the feature vectors obtained in step 5 and optimize the model. The loss function is divided into two parts. First, a ranking loss is introduced so that the key features and the fused features jointly and effectively participate in recognition; its calculation formula is:

$L_{rank} = \max\!\big(0,\; p_A(y) - p_{fuse}(y) + \epsilon\big)$

where $\epsilon$ is a hyperparameter, $p_A(y)$ and $p_{fuse}(y)$ are the probabilities predicted for the true class y from $v_A$ and $v_{fuse}$, and $L_{rank}$ denotes the ranking loss. Through the ranking loss the model is driven to give the fused prediction priority, i.e. $p_{fuse}(y) \ge p_A(y) + \epsilon$. The purpose of this design is to force the fused attention map to produce more accurate predictions by letting the network reference the predictions produced by the plain attention map; with this regularization, the network learns to recognize fine-grained images by adaptively weighing feature priorities.

Second, a cross-entropy loss function is introduced to optimize the model; its calculation formula is:

$L_{ce} = L(y, p_A) + L(y, p_{fuse}) + L(y, p_A - p_{\bar{A}})$

where $L(\cdot,\cdot)$ is the cross-entropy loss function, the three terms are respectively the loss of the attention map, the loss of the fused attention map, and the loss of the difference between the factual and counterfactual attention predictions, and $y^{T}$ denotes the transpose of the true label vector.

The above loss functions are combined into the overall loss function L:

$L = L_{ce} + L_{rank}$
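The two-part objective can be sketched on class logits as follows. Treating the counterfactual term as cross-entropy on the logit difference, the margin value and the equal weighting of the two parts are assumptions made for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def overall_loss(z_att, z_fuse, z_cf, y_idx, margin=0.05):
    """z_att, z_fuse, z_cf: class logits from the attention, fused-attention
    and counterfactual branches; y_idx: index of the true class."""
    def ce(logits):                         # cross-entropy vs. one-hot label
        return -float(np.log(softmax(logits)[y_idx] + 1e-12))
    # cross-entropy part: attention, fusion, and the factual-minus-
    # counterfactual difference (the 'effect' of the learned attention)
    l_ce = ce(z_att) + ce(z_fuse) + ce(z_att - z_cf)
    # ranking part: push the fused prediction above the plain one
    p_att = softmax(z_att)[y_idx]
    p_fuse = softmax(z_fuse)[y_idx]
    l_rank = max(0.0, p_att - p_fuse + margin)
    return l_ce + l_rank                    # assumed equal weighting

rng = np.random.default_rng(4)
loss = overall_loss(rng.standard_normal(5), rng.standard_normal(5),
                    rng.standard_normal(5), y_idx=2)
print(loss > 0)                             # True
```

Maximizing the factual-minus-counterfactual gap through the third cross-entropy term is what supplies the strong supervision signal for attention described above.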
Repeat training steps 2-6.
After model training is completed, the image to be recognized is input into the model and high-accuracy recognition can be achieved.
It should be understood that the above description is not intended to limit the invention to the particular embodiments disclosed, and that various changes, modifications, additions and substitutions can be made by those skilled in the art without departing from the spirit and scope of the invention.
Claims (5)
1. A fine-grained image recognition method based on attention interaction and counterfactual attention, characterized by comprising the following steps:
step 1: feature extraction:
inputting the image I into a feature extraction network to obtain a feature map $F \in \mathbb{R}^{C \times H \times W}$, wherein H, W and C are respectively the height, width and number of channels of the feature map;
step 2: the spatial distribution of each part of the object is learned through a spatial attention mechanism:
the feature map F obtained in step 1 is passed through a spatial attention mechanism to learn the spatial distribution of each object part, expressed as $A \in \mathbb{R}^{M \times H \times W}$, wherein M denotes the number of attention maps; the attention map A is calculated as:

$A = f_{att}(F) = \{A_1, A_2, \dots, A_M\}$

wherein $A_m$ denotes an attention map covering one local region, and $f_{att}$ represents the spatial attention mechanism, consisting of a convolution layer and a ReLU activation function;
step 3: capturing complementary features through a self-channel feature interaction fusion module and fusing them with the key features:

the attention map A obtained in step 2 is input into the self-channel feature interaction fusion module, which extracts complementary features by exploring channel correlations within the image and fuses them with the key features; the specific method is as follows:

the attention map A is first reshaped into a feature matrix X ∈ R^(M×(H·W)); a bilinear operation is then performed on X and its transpose X^T to obtain the bilinear matrix X X^T ∈ R^(M×M); a negative sign is added in front of the bilinear matrix and the softmax function is applied to obtain the weight matrix W:

W = softmax(−X X^T)

where X^T denotes the transpose of X, and the entry W_ij represents the spatial relationship between channel i and channel j;

the weight matrix W is multiplied by the feature matrix X to obtain a feature matrix Y containing the complementary features:

Y = W X

the feature matrix Y is converted back into an attention map A_c containing the complementary features, which is fused with the attention map A to obtain A_f, the fused attention map that includes both the key features and the complementary features;
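A minimal pure-Python sketch of the self-channel interaction described above (the toy matrix sizes are illustrative). Note how the negative sign makes strongly self-correlated channels receive low weight, so the output emphasizes complementary channels:

```python
import math

def softmax(row):
    """Numerically stable softmax over one row."""
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def self_channel_interaction(X):
    """X: M x (H*W) matrix of reshaped attention maps. Computes the bilinear
    matrix X X^T, negates it, applies row-wise softmax to obtain the weight
    matrix W, and returns Y = W X containing the complementary features."""
    M, D = len(X), len(X[0])
    # bilinear matrix: (X X^T)_ij = inner product of channel rows i and j
    B = [[sum(a * b for a, b in zip(X[i], X[j])) for j in range(M)] for i in range(M)]
    # negative sign before softmax: correlated channels are down-weighted
    W = [softmax([-v for v in row]) for row in B]
    Y = [[sum(W[i][k] * X[k][d] for k in range(M)) for d in range(D)] for i in range(M)]
    return W, Y
```

With two orthogonal channels, each channel's weight on *itself* is smaller than on the other channel, which is exactly the complementary-feature effect the module is after.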
step 4: constructing a counterfactual attention map from the attention map A obtained in step 2:

the key regions in the attention map A are masked to obtain a mask map A_mask, in which the positions of the key regions have been blocked; A_mask is then used to construct the counterfactual attention map Ā;
step 5: converting the feature maps into feature vectors:

the attention map, the fused attention map and the counterfactual attention map obtained in steps 2, 3 and 4 are each converted into a feature matrix; after the corresponding feature matrices are obtained, they are converted into feature vectors through a fully connected layer;
step 6: calculating the loss:

the loss is calculated from the feature vectors obtained in step 5, and the model is optimized accordingly;

steps 2-6 are repeated during training.
2. The fine-grained image recognition method based on attention interaction and counterfactual attention according to claim 1, characterized in that in step 2, the feature map F obtained in step 1 is input into an attention mechanism module to obtain the attention map, the attention mechanism module comprising a channel attention mechanism module and a spatial attention mechanism module; the specific steps are as follows:
first, the feature map F is input into the channel attention mechanism module; a squeeze operation (global average pooling) is applied to obtain the channel descriptor:

z_c = (1/(H×W)) Σ_{i=1..H} Σ_{j=1..W} F_c(i, j)

where F_c denotes the feature map of the c-th channel, z_c denotes the descriptor of the c-th channel, and z = [z_1, ..., z_C] denotes the descriptors of all channels;

the feature vector z is then weighted to obtain the weight vector s:

s = σ(W_2 δ(W_1 z))

where δ denotes the ReLU activation function, σ denotes the sigmoid function, and W_1 ∈ R^((C/r)×C) and W_2 ∈ R^(C×(C/r)) are learnable parameters, with r denoting the channel-dimension-reduction hyperparameter;

after the weight vector s is obtained, the feature map F and the weight vector s are fused to obtain the channel attention map:

F' = s ⊗ F

where ⊗ denotes channel-wise multiplication of the weight vector s with the feature map F, yielding the channel attention map F';

the channel attention map F' is input into the spatial attention module, which captures attention along the spatial dimension to obtain the attention map A.
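A minimal pure-Python sketch of the squeeze-and-excitation style channel attention described in this claim (the tiny weight matrices W1, W2 and the use of a sigmoid gate are illustrative assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(F, W1, W2):
    """Channel attention sketch. F: C x H x W feature map (nested lists);
    W1: (C//r) x C and W2: C x (C//r) weight matrices (hypothetical values).
    Squeezes each channel to a scalar by global average pooling, computes
    gate s = sigmoid(W2 . relu(W1 . z)), and reweights F channel-wise."""
    C, H, W = len(F), len(F[0]), len(F[0][0])
    # squeeze: global average pooling per channel -> z_c
    z = [sum(F[c][i][j] for i in range(H) for j in range(W)) / (H * W)
         for c in range(C)]
    # excitation: bottleneck (ReLU) then expand (sigmoid)
    h = [max(0.0, sum(W1[k][c] * z[c] for c in range(C))) for k in range(len(W1))]
    s = [sigmoid(sum(W2[c][k] * h[k] for k in range(len(h)))) for c in range(C)]
    # channel-wise multiplication yields the channel attention map
    Fp = [[[s[c] * F[c][i][j] for j in range(W)] for i in range(H)] for c in range(C)]
    return Fp, s
```

The gate s stays in (0, 1), so informative channels are kept near full strength while uninformative ones are attenuated before the spatial attention module runs.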
3. The fine-grained image recognition method based on attention interaction and counterfactual attention according to claim 1, characterized in that in step 4, the specific steps of constructing the counterfactual attention map are as follows:

A_mask(i, j) = γ · A(i, j), if A(i, j) > θ; A_mask(i, j) = A(i, j), otherwise

where A(i, j) denotes the value of the attention map A at spatial position (i, j), and θ is a set threshold; if the value of A(i, j) is greater than the threshold θ, the value at the corresponding position is masked by multiplying it by the suppression factor γ, the suppression factor γ being a hyperparameter; if the value of A(i, j) is less than or equal to the threshold θ, the value at the corresponding position is unchanged;

by the above method, the mask map A_mask is obtained, in which the positions of the key regions have been blocked; A_mask is then used to construct the counterfactual attention map Ā:

random_map = random(A)

where random(A) denotes generating a corresponding random feature map from the attention map A, and random_map denotes that random feature map, in which the key regions and non-key regions are random;

after the random feature map random_map is obtained, random_map is multiplied by A_mask to obtain the counterfactual attention map:

Ā = random_map ⊗ A_mask
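A minimal pure-Python sketch of this masking procedure (the uniform random map and the fixed seed are illustrative assumptions about what random(A) produces):

```python
import random

def counterfactual_attention(A, theta, gamma, seed=0):
    """Builds the mask map and counterfactual attention map. A: H x W
    attention map (nested lists). Values above threshold theta are
    suppressed by factor gamma; all others pass through unchanged. The
    mask is then multiplied element-wise by a random map of A's shape."""
    rng = random.Random(seed)
    H, W = len(A), len(A[0])
    # threshold masking: suppress key (high-attention) positions
    A_mask = [[A[i][j] * gamma if A[i][j] > theta else A[i][j]
               for j in range(W)] for i in range(H)]
    # random(A): a random feature map with the same spatial shape
    random_map = [[rng.random() for _ in range(W)] for _ in range(H)]
    # counterfactual attention map: random_map (x) A_mask
    A_cf = [[A_mask[i][j] * random_map[i][j] for j in range(W)] for i in range(H)]
    return A_mask, A_cf
```

The resulting map deliberately attends "wrongly", so comparing its prediction with the factual one isolates the contribution of the learned attention.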
4. The fine-grained image recognition method based on attention interaction and counterfactual attention according to claim 3, characterized in that in step 5, the specific steps of converting the feature maps into feature vectors are as follows:

the attention map, the fused attention map and the counterfactual attention map obtained in steps 2, 3 and 4 are each converted into a feature matrix:

feature_matrix = normal(einsum(A, F))
feature_complement_matrix = normal(einsum(A_f, F))
feature_counterfactual_matrix = normal(einsum(Ā, F))

where feature_matrix denotes the feature matrix of the attention map A, feature_complement_matrix denotes the feature matrix of the fused attention map A_f, feature_counterfactual_matrix denotes the feature matrix of the counterfactual attention map Ā, normal() denotes the normalization operation, and einsum() denotes multiplying the attention map A, the fused attention map A_f or the counterfactual attention map Ā with the feature map F and converting the result into a feature matrix;

after the corresponding feature matrices are obtained, they are converted into feature vectors through a fully connected layer.
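A minimal pure-Python sketch of the einsum-style pooling above (the choice of L2 row normalization for normal() is an assumption):

```python
def bilinear_attention_pool(A, F):
    """einsum-style pooling: for attention map A_m and channel F_c,
    feature_matrix[m][c] = sum over spatial positions of A_m(i,j) * F_c(i,j).
    A: M x H x W attention maps; F: C x H x W feature map. Each row of the
    resulting M x C matrix is then L2-normalized (the normal() operation)."""
    M, C = len(A), len(F)
    H, W = len(F[0]), len(F[0][0])
    fm = [[sum(A[m][i][j] * F[c][i][j] for i in range(H) for j in range(W))
           for c in range(C)] for m in range(M)]
    out = []
    for row in fm:
        n = sum(v * v for v in row) ** 0.5 or 1.0  # avoid division by zero
        out.append([v / n for v in row])
    return out
```

Each row of the output is the pooled descriptor of one attended part; stacking the rows gives the feature matrix that the fully connected layer then maps to a feature vector.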
5. The fine-grained image recognition method based on attention interaction and counterfactual attention according to claim 4, characterized in that in step 6, the loss function is divided into two parts to optimize the model; first, a ranking loss function is introduced so that the key features and the fused features effectively participate in recognition together, with the calculation formula:

L_rank = max(0, p_A − p_f + m)

where m is a margin hyperparameter, p_A and p_f denote the prediction scores obtained from the key features and the fused features respectively, and L_rank denotes the ranking loss; through the ranking loss L_rank, the model promotes the priority of the fused features, i.e. p_f > p_A + m;
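This hinge-style ranking term can be sketched in one line of Python (the score values and margin are illustrative):

```python
def rank_loss(p_key, p_fused, margin):
    """Hinge ranking loss: zero when the fused-feature score beats the
    key-feature score by at least the margin (the hyperparameter m),
    otherwise a linear penalty pushing p_fused above p_key + margin."""
    return max(0.0, p_key - p_fused + margin)
```

The loss is zero exactly when the fused prediction already outranks the key-feature prediction by the margin, so gradient flows only while the desired ordering is violated.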
second, a cross-entropy loss function is introduced to optimize the model, with the calculation formula:

L_att = L_ce(y, p_A), L_fusion = L_ce(y, p_f), L_cf = L_ce(y, p_A − p_cf)

where L_ce denotes the cross-entropy loss function, with L_ce(y, p) = −y^T log(p); L_att denotes the loss of the attention map prediction; L_fusion denotes the loss of the fused attention map prediction; L_cf denotes the loss on the difference between the attention prediction and the counterfactual attention prediction p_cf; and y^T denotes the transpose of the ground-truth label vector;

the above loss functions are combined into an overall loss function L.
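A minimal pure-Python sketch of the combined objective (the equal weighting of the four terms and the softmax-on-logit-difference form of the counterfactual term are assumptions, since the claim does not give the combination weights):

```python
import math

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def cross_entropy(y, logits):
    """-y^T log(p) with y a one-hot label vector and p = softmax(logits)."""
    p = softmax(logits)
    return -sum(yi * math.log(pi) for yi, pi in zip(y, p))

def total_loss(y, logits_att, logits_fused, logits_cf, p_key, p_fused, margin=0.05):
    """Sketch of the overall loss L: cross-entropy on the attention and
    fused predictions, a counterfactual term supervising the *difference*
    between factual and counterfactual logits, plus the ranking loss."""
    l_att = cross_entropy(y, logits_att)
    l_fusion = cross_entropy(y, logits_fused)
    # effect of attention = factual logits minus counterfactual logits
    l_cf = cross_entropy(y, [a - c for a, c in zip(logits_att, logits_cf)])
    l_rank = max(0.0, p_key - p_fused + margin)
    return l_att + l_fusion + l_cf + l_rank
```

Supervising the logit difference pushes the model to earn its prediction *through* the attention maps, since a random counterfactual attention should not be able to produce the right answer.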
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310212744.9A CN116051948B (en) | 2023-03-08 | 2023-03-08 | Fine granularity image recognition method based on attention interaction and anti-facts attention |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116051948A true CN116051948A (en) | 2023-05-02 |
CN116051948B CN116051948B (en) | 2023-06-23 |
Family
ID=86123960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310212744.9A Active CN116051948B (en) | 2023-03-08 | 2023-03-08 | Fine granularity image recognition method based on attention interaction and anti-facts attention |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116051948B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116228749A (en) * | 2023-05-04 | 2023-06-06 | 昆山润石智能科技有限公司 | Wafer defect detection method and system based on inverse fact interpretation |
CN116665019A (en) * | 2023-07-31 | 2023-08-29 | 山东交通学院 | Multi-axis interaction multi-dimensional attention network for vehicle re-identification |
CN117078920A (en) * | 2023-10-16 | 2023-11-17 | 昆明理工大学 | Infrared-visible light target detection method based on deformable attention mechanism |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111325237A (en) * | 2020-01-21 | 2020-06-23 | 中国科学院深圳先进技术研究院 | Image identification method based on attention interaction mechanism |
US20210133479A1 (en) * | 2019-11-05 | 2021-05-06 | Beijing University Of Posts And Telecommunications | Fine-grained image recognition method, electronic device and storage medium |
CN113592023A (en) * | 2021-08-11 | 2021-11-02 | 杭州电子科技大学 | High-efficiency fine-grained image classification model based on depth model framework |
CN113642571A (en) * | 2021-07-12 | 2021-11-12 | 中国海洋大学 | Fine-grained image identification method based on saliency attention mechanism |
CN114882534A (en) * | 2022-05-31 | 2022-08-09 | 合肥工业大学 | Pedestrian re-identification method, system and medium based on counterfactual attention learning |
Non-Patent Citations (2)
Title |
---|
YONGMING RAO ET AL.: "Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification", 《2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)》 * |
MA YAO: "A Survey of the Application of CNN and Transformer in Fine-Grained Image Recognition", 《COMPUTER ENGINEERING AND APPLICATIONS》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109949317B (en) | Semi-supervised image example segmentation method based on gradual confrontation learning | |
CN116051948B (en) | Fine granularity image recognition method based on attention interaction and anti-facts attention | |
Zhang et al. | PC-RGNN: Point cloud completion and graph neural network for 3D object detection | |
Ren et al. | Salient object detection by fusing local and global contexts | |
Li et al. | Adaptive deep convolutional neural networks for scene-specific object detection | |
Li et al. | Detection-friendly dehazing: Object detection in real-world hazy scenes | |
CN113609896B (en) | Object-level remote sensing change detection method and system based on dual-related attention | |
Gao et al. | Multi‐dimensional data modelling of video image action recognition and motion capture in deep learning framework | |
Tan et al. | Fine-grained classification via hierarchical bilinear pooling with aggregated slack mask | |
CN114758288A (en) | Power distribution network engineering safety control detection method and device | |
CN111368637B (en) | Transfer robot target identification method based on multi-mask convolutional neural network | |
Wang et al. | Multiscale deep alternative neural network for large-scale video classification | |
CN116342601B (en) | Image tampering detection method based on edge guidance and multi-level search | |
CN116311353A (en) | Intensive pedestrian multi-target tracking method based on feature fusion, computer equipment and storage medium | |
Yuan et al. | Few-shot scene classification with multi-attention deepemd network in remote sensing | |
Alsanad et al. | Real-time fuel truck detection algorithm based on deep convolutional neural network | |
Yu et al. | The multi-level classification and regression network for visual tracking via residual channel attention | |
CN116645694A (en) | Text-target retrieval method based on dynamic self-evolution information extraction and alignment | |
CN117593794A (en) | Improved YOLOv7-tiny model and human face detection method and system based on model | |
Yang et al. | An effective and lightweight hybrid network for object detection in remote sensing images | |
CN111444913A (en) | License plate real-time detection method based on edge-guided sparse attention mechanism | |
CN112487927B (en) | Method and system for realizing indoor scene recognition based on object associated attention | |
Liu et al. | Adversarial erasing attention for person re-identification in camera networks under complex environments | |
Zhang et al. | A review of small target detection based on deep learning | |
CN112668643A (en) | Semi-supervised significance detection method based on lattice tower rule |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||