CN117893561A - Infrared tiny target detection algorithm based on local contrast computing method - Google Patents


Info

Publication number
CN117893561A
Authority
CN
China
Prior art keywords
module
layer
convolution
features
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410288616.7A
Other languages
Chinese (zh)
Other versions
CN117893561B (en)
Inventor
刘晋源
陈子航
仲维
姜智颖
刘日升
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN202410288616.7A
Publication of CN117893561A
Application granted
Publication of CN117893561B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the field of image processing and computer vision, and relates to an infrared small target detection algorithm based on a local contrast computing method. The invention improves on existing local contrast methods by proposing a novel local contrast calculation and designing an efficient infrared small target detection method that can be deployed on an edge platform. By studying local contrast characteristics, it provides a deformable attention module that separates salient and representative features and reduces computational complexity. Furthermore, to aggregate global and local dependencies, the invention proposes a cross-aggregation scheme that combines a global processing module and convolution modules under supervision of target edges. The invention realizes infrared small target detection efficiently and rapidly, combining the advantages of the traditional local contrast method with deep-learning feature processing.

Description

Infrared tiny target detection algorithm based on local contrast computing method
Technical Field
The invention belongs to the field of image processing and computer vision, and relates to an infrared tiny target detection algorithm based on a local contrast computing method.
Background
With the continued advancement of computer vision technology, a vast amount of visual information is acquired, transmitted, and analyzed. A current focus of research is how to make computers process this visual data efficiently. Infrared imaging systems have been widely used in the military field since the last century. Compared with radar, an infrared imaging system uses passive detection, so it can detect targets while remaining concealed itself. Compared with visible-light systems, infrared detection offers strong penetration, long imaging range, and excellent anti-interference performance, and plays an important role in early warning and reconnaissance. Over time, infrared systems have also spread to non-military fields such as medical imaging, traffic management, and marine search and rescue. In infrared imaging systems, the accuracy of target detection and tracking has always been a key evaluation factor. Infrared small target detection, one of the key technologies in this field, can detect extremely small targets at an early stage, helping to warn of potential dangers and take countermeasures in time. Accurate detection of small targets in infrared images is therefore a pressing problem in current computer vision and image processing.
Compared with the targets handled by general object detection models, infrared small targets have a series of distinctive characteristics. Infrared images contain heavy background noise and clutter, leading to low contrast and low signal-to-noise ratio; targets are easily submerged in complex backgrounds, which general detection models struggle to handle. Because the imaging system is far from the target, an infrared target usually occupies no more than a dozen pixels, sometimes only one or two, so general models cannot extract shape, texture, or spatial-structure information to assist detection. Target types, shapes, and sizes vary greatly across scenes and conditions, so a general detection model easily produces a high false-alarm rate and degraded performance. Finally, because infrared imaging applications usually require real-time operation, the inference speed of an infrared small target detection model must be high. However, current models mainly pursue high accuracy while neglecting real-time performance.
According to the above analysis, infrared small target detection faces many challenges. On the one hand, the design ideas of general object detection models based on convolutional neural networks and deep learning are difficult to apply directly to infrared small target detection; on the other hand, existing infrared small target models still fall short in both accuracy and inference speed. It is therefore important to develop a real-time, efficient infrared small target detection method that can be deployed on an edge platform.
Disclosure of Invention
The invention provides an infrared small target detection algorithm based on a local contrast computing method. The model is deployed on an edge computing platform, which acquires single-frame or batched infrared images from an infrared imaging device or via remote upload. The images are detected by the model to obtain infrared small target detection maps, which are post-processed and finally output to a display device or returned to a remote platform.
The technical scheme of the invention is as follows:
An infrared small target detection algorithm based on a local contrast computing method comprises the following steps:
1) The edge computing platform acquires single-frame or batched infrared images from an infrared imaging device or via remote upload, and preprocesses the infrared images.
2) The model first computes an edge feature map of the infrared image, then feeds the infrared image and the edge feature map into the main branch and the edge branch of the model, respectively. In the main branch, the infrared image features pass through the encoder part and then the decoder part to obtain the image main-branch feature map.
The encoder part comprises a basic downsampling module, several convolution modules based on the local contrast computing method, and a downsampled global information processing module. The infrared image first passes through the basic downsampling module to produce infrared image representation features. These are then processed by the convolution modules: the first-layer convolution module downsamples and enhances the representation features using the local contrast computing method to obtain low-level features, and each subsequent convolution module processes the low-level features of the layer above to obtain the low-level features of its own layer. Meanwhile, the deepest features are downsampled and processed by the global information processing module, and then combined with the image features from the convolution modules to obtain the global image features.
The decoder part comprises several decoding modules and a feature fusion module. The first-layer decoding module upsamples the global features with a deconvolution module to obtain high-level features; each subsequent decoding module processes the high-level features of the layer above to obtain the high-level features of its own layer. The feature fusion module then fuses the low-level features of the same-layer convolution module in the encoder with the current high-level features to obtain refined high-level features. The image main-branch feature map is finally obtained through this series of decoding modules.
3) In the edge branch, the image edge features pass through an embedded representation module and several edge processing modules to obtain the image edge-branch feature map. Specifically, the edge feature map is first turned into edge representation features by the embedded representation module. Each edge processing module then processes, through a convolution layer, the edge representation features (or the refined edge features of the layer above), combines them with the same-layer high-level features from the main branch, and obtains refined edge features via a gating module. The edge-branch feature map is finally output from the refined edge features.
4) Finally, the model combines the feature maps output by the main branch and the edge branch, and the detection module generates the final infrared small target detection map. The detection map is post-processed, and the single or batched results are output to a display device or returned to a remote platform.
The invention has the following beneficial effects. The proposed infrared small target detection method based on a local contrast calculation method determines the background region from the spatial feature information of the current image, separating background and target more accurately. The model thus overcomes the difficulties of dim infrared small targets and low signal-to-noise ratio: the local contrast is enhanced and model performance improved, while the computational cost remains small, striking a balance between computation and performance. Deploying the model on an edge platform realizes infrared small target detection in real edge scenarios.
Drawings
FIG. 1 is a basic flow chart of an infrared fine target detection algorithm deployed on an edge platform.
Fig. 2 is a detailed flow chart of the present invention.
Fig. 3 is a local contrast plot.
Fig. 4 is a partial contrast calculation explanatory diagram.
FIG. 5 shows the detection results on the IRSTD-1k dataset.
Detailed Description
The following describes the embodiments of the present invention further with reference to the drawings and technical schemes.
The invention provides an infrared small target detection method based on a local contrast computing method, wherein the basic flow is shown in figure 1, and the method specifically comprises the following steps:
1) Overall flow of the model: the model first solves for the image edge feature E of the infrared image I using the Sobel operator. The infrared image I and the edge feature E are then placed into the main branch and the edge branch respectively; the two branches output feature maps, from which the final infrared small target detection map is generated. The detailed flow is shown in FIG. 2.
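The Sobel edge-feature step above can be sketched in a few lines. This is a minimal NumPy illustration of a Sobel gradient-magnitude edge map, not the patent's implementation; `sobel_edge_map` is a hypothetical helper name.

```python
import numpy as np

def sobel_edge_map(img):
    """Sobel gradient-magnitude edge map for a 2-D image, as in the
    step that derives the edge feature E from the infrared image I."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Explicit 3x3 correlation, so no external dependencies are needed.
    for di in range(3):
        for dj in range(3):
            patch = padded[di:di + h, dj:dj + w]
            gx += kx[di, dj] * patch
            gy += ky[di, dj] * patch
    return np.hypot(gx, gy)

# A vertical step edge: the edge map peaks along the step.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edge_map(img)
```

The edge-padded correlation keeps the output the same size as the input, which matches feeding the edge map into a branch alongside the original image.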
2) Main flow of the encoder: the image feature I first passes through the basic downsampling module, several convolution modules based on the local contrast computing method, and the downsampled global information processing module to obtain the global image features. Specifically:
2-1) For the basic downsampling module, the image is downsampled by three convolution layers and one max-pooling layer. The output feature F0 is defined as: F0 = MaxPool(Conv3(Conv2(Conv1(I)))), where Conv1, Conv2, Conv3 denote the three convolution layers and MaxPool the max-pooling layer. The output feature F0 then undergoes target-feature enhancement and nonlinear transformation in the convolution modules based on the local contrast computing method, yielding features F1, F2 and F3 with less noise and clutter.
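As a small sketch of the pooling step in the stem (the kernel sizes and strides of the three convolution layers are not specified in the text, so only the max-pooling is shown, assumed 2×2 for illustration; `max_pool2x2` is an illustrative name):

```python
import numpy as np

def max_pool2x2(x):
    """2x2 max pooling with stride 2 on an (H, W) array; H and W even.

    Each output cell keeps the maximum of a non-overlapping 2x2
    window, halving both spatial dimensions."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
y = max_pool2x2(x)  # shape (2, 2)
```

The reshape trick groups each 2×2 window onto its own axes so a single `max` reduction implements the pooling without loops.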
2-2) Convolution processing based on the local contrast computing method proceeds as follows. For an image feature F ∈ R^(C×H×W), where C is the number of channels and H and W are the height and width of the feature, each convolution module applies two passes of a two-layer block consisting of a convolution layer, a batch-normalization layer, a ReLU layer and residual processing; the later convolution modules additionally contain a channel attention layer and a spatial attention layer based on the local contrast computing method. The channel attention is defined as: CA(F) = σ(δ(MaxPool(F)) + δ(AvgPool(F))) ⊗ F, where Conv, BN and ReLU denote the convolution, batch-normalization and ReLU layers, SA and CA are the spatial attention layer based on the local contrast computing method and the channel attention layer, δ is a nonlinear mapping, σ the Sigmoid function, ⊗ element-wise multiplication, and MaxPool and AvgPool the max-pooling and average-pooling layers; the intermediate feature produced by the first two-layer pass feeds the second pass. Regarding SA, the specific method is as follows: a deformable convolution module with predetermined convolution-kernel parameters and a 3×3 kernel performs the local contrast calculation to generate a local contrast attention map M; M is then processed by two convolution layers with nonlinear processing and finally a Sigmoid, and the local contrast is added to the feature to enhance the target, namely: D = DConv(F), LC = min_k D_k, M = σ(Conv(δ(Conv(LC)))), SA(F) = F + F ⊗ M, where DConv is the deformable convolution module with the predetermined convolution-kernel parameters, D its intermediate result, D_k the feature in the k-th channel of D, and min_k takes the minimum over the channels at each spatial position. This processing finally yields the feature with enhanced local contrast.
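The channel-attention part of the block, read in the usual CBAM-style form σ(δ(MaxPool)+δ(AvgPool)) ⊗ F, can be sketched as follows; the shared matrix `w` stands in for the small nonlinear mapping δ, whose exact layer sizes the text does not give:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w):
    """Channel attention sketch: pool each channel with global max and
    average pooling, mix with a shared linear map w, squash with a
    sigmoid, and rescale the channels.
    feat: (C, H, W); w: (C, C) stand-in for the learned mapping."""
    c = feat.shape[0]
    mx = feat.reshape(c, -1).max(axis=1)   # (C,) per-channel max pool
    av = feat.reshape(c, -1).mean(axis=1)  # (C,) per-channel avg pool
    attn = sigmoid(w @ mx + w @ av)        # (C,) weights in (0, 1)
    return feat * attn[:, None, None]

feat = np.ones((2, 4, 4))
feat[1] *= 3.0
out = channel_attention(feat, np.eye(2))
```

With identity weights, brighter channels receive larger gates, which is the intended behaviour of the attention.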
The main flow of calculating the local contrast with the preset convolution-kernel parameters is shown in FIG. 4, and the calculation proceeds as follows. First, the module obtains a score feature S from the original feature F: the input feature is reduced channel-wise by its average and its maximum, the two results are combined by a weighted sum whose coefficients are learnable parameters, and the result is processed by a specific deformable convolution module: S = DConv_s(w1 · Avg(F) + w2 · Max(F)), where DConv_s is the specific deformable convolution module, set up as follows: 4 output channels, kernel size 3×3, and in each of the 4 output channels one direction is selected whose diagonal weights are 1 while the remaining weights are 0. This ensures that the local contrast is calculated between the centre position and the 4 diagonal directions of the 3×3 two-dimensional neighbourhood, as shown in FIG. 3. Moreover, the convolution operations for the score feature and the contrast are carried out at the same positions, so the model can learn the offsets and modulation amounts directly from S.
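A plain NumPy sketch of the directional local-contrast idea (fixed kernels only; the deformable offsets and modulation learned from the score feature are omitted, and `directional_local_contrast` is an illustrative name). Taking the minimum response over the four directions means a pixel only scores high when it is brighter than its surroundings in every direction, which is what separates point-like targets from line-like clutter:

```python
import numpy as np

def directional_local_contrast(x):
    """For each pixel, the difference between the centre value and the
    mean of its two neighbours along each of the 4 directions through
    the centre of the 3x3 neighbourhood; keep the minimum over the 4
    directions, as the fixed-kernel convolution in the text does."""
    p = np.pad(x.astype(float), 1, mode="edge")
    h, w = x.shape
    c = p[1:1 + h, 1:1 + w]
    # Neighbour pairs along the 4 directions through the centre.
    pairs = [
        (p[0:h, 0:w],     p[2:2 + h, 2:2 + w]),  # main diagonal
        (p[0:h, 2:2 + w], p[2:2 + h, 0:w]),      # anti-diagonal
        (p[0:h, 1:1 + w], p[2:2 + h, 1:1 + w]),  # vertical
        (p[1:1 + h, 0:w], p[1:1 + h, 2:2 + w]),  # horizontal
    ]
    responses = np.stack([c - 0.5 * (a + b) for a, b in pairs])
    return responses.min(axis=0)  # min over directions, per pixel

# A single bright pixel stands out in every direction;
# its neighbours do not.
img = np.zeros((5, 5))
img[2, 2] = 1.0
lc = directional_local_contrast(img)
```

The isolated bright pixel gets the maximal positive response, while pixels adjacent to it go negative in at least one direction and are suppressed by the minimum.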
2-3) Regarding the global information processing module: the feature F3 is first downsampled to obtain the feature Fd, and the global feature Ft is then obtained through the global information processing module, which consists of a multi-head self-attention module and a multi-layer perceptron module. The main flow is as follows. Suppose the current model is built from L layers; for the current layer l, with 1 ≤ l ≤ L, the vector sequence z_l of layer l is: z'_l = MSA(LN(z_{l-1})) + z_{l-1}, z_l = MLP(LN(z'_l)) + z'_l, where z'_l is an intermediate result, z_{l-1} the vector sequence of layer l-1, LN the layer-normalization module, MSA the multi-head attention module, and MLP the multi-layer perceptron module. For the multi-head attention module, the detailed calculation is: Q_i = z W_i^Q, K_i = z W_i^K, V_i = z W_i^V, head_i = Softmax(Q_i K_i^T / sqrt(d)) V_i, MSA(z) = Concat(head_1, …, head_h) W^O, where z is the input feature, MSA(z) the output of the multi-head attention module, h the number of attention heads, W_i^Q, W_i^K and W_i^V the feature transformation matrices, head_i the output feature of the i-th attention head, d the dimension length after feature transformation, Concat(head_1, …, head_h) the connected outputs of the attention heads, and W^O the output transformation matrix. For the multi-layer perceptron module, the calculation is: MLP(z) = GELU(z W_1) W_2, where W_1 is the transformation matrix of the hidden layer, W_2 the transformation matrix of the output layer, and GELU the Gaussian-error linear unit.
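The multi-head attention computation above is the standard one and can be checked with a small NumPy sketch (identity weight matrices are used purely for the demonstration; `multi_head_self_attention` is an illustrative name):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(z, wq, wk, wv, wo, heads):
    """Per-head scaled dot-product attention; heads concatenated and
    mixed by the output matrix wo.
    z: (n, d); wq/wk/wv/wo: (d, d); d must divide evenly by heads."""
    n, d = z.shape
    dh = d // heads
    q = (z @ wq).reshape(n, heads, dh)
    k = (z @ wk).reshape(n, heads, dh)
    v = (z @ wv).reshape(n, heads, dh)
    outs = []
    for h in range(heads):
        scores = q[:, h] @ k[:, h].T / np.sqrt(dh)  # (n, n)
        outs.append(softmax(scores) @ v[:, h])      # (n, dh)
    return np.concatenate(outs, axis=1) @ wo        # (n, d)

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 8))
w = [np.eye(8)] * 4
out = multi_head_self_attention(z, *w, heads=2)
```

With identity projections each output row is a convex combination of the input rows, so every output column stays within the range of the corresponding input column — a quick sanity check on the attention weights.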
2-4) Finally, the encoder connects the final feature F3 of the convolution modules with the feature Ft of the global information processing module and passes the result through a convolution linear transformation module to obtain the final global feature Fg: Fg = ConvLinear(Concat(F3, Ft)), where Concat is the feature connection operation and ConvLinear is the convolution linear transformation module, specifically two convolution layers combined with a batch-normalization layer.
3) For the decoder part, the model first processes the global feature Fg with a deconvolution layer, doubling the feature size to obtain the high-level feature H1. The feature fusion module fuses a high-level feature H with the low-level feature L of the same size, and a convolution module finally produces the refined feature: H1 = Deconv(Fg), H_ref = Conv(FFM(H, L)), where Deconv is the deconvolution layer, FFM the feature fusion module, and H and L the high-level and low-level features respectively. Regarding the feature fusion module, its flow is as follows: a bottleneck module composed of two convolution modules with 1×1 kernels filters high-frequency noise; horizontal and vertical features are then obtained by computing horizontal and vertical attention, mainly with deformable convolution modules whose kernels are 1×3 and 3×1 respectively. Likewise, the low-level and high-level features of each layer are processed by the feature fusion module, and the resulting feature F_main is the main-branch feature map.
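The horizontal/vertical attention in the fusion module can be illustrated with fixed 1×3 and 3×1 averaging kernels in place of the learned deformable convolutions; the 1×1 bottleneck and the exact gating form are simplifications, and `axial_attention_fuse` is a hypothetical name:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def axial_attention_fuse(high, low):
    """Fusion sketch: build horizontal (1x3) and vertical (3x1)
    attention maps on the low-level feature and use them to gate the
    upsampled high-level feature before summing.  Shapes are (H, W)."""
    p = np.pad(low.astype(float), 1, mode="edge")
    h, w = low.shape
    # 1x3 horizontal and 3x1 vertical averages around each pixel.
    horiz = (p[1:1 + h, 0:w] + p[1:1 + h, 1:1 + w]
             + p[1:1 + h, 2:2 + w]) / 3.0
    vert = (p[0:h, 1:1 + w] + p[1:1 + h, 1:1 + w]
            + p[2:2 + h, 1:1 + w]) / 3.0
    attn = sigmoid(horiz + vert)   # per-pixel gate in (0, 1)
    return high * attn + low

high = np.ones((4, 4))
low = np.zeros((4, 4))
fused = axial_attention_fuse(high, low)
```

On an all-zero low-level map the gate settles at sigmoid(0) = 0.5, so the high-level feature passes through at half strength — a simple way to see the gating at work.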
4) The image edge feature E is first processed by the embedded representation module, and then passes through an edge processing module together with the high-level feature from the main branch to extract the edge feature E1. Likewise, the edge feature E1 and the next high-level feature are processed by an edge processing module to extract the edge feature E2, and the edge feature E2 together with the main-branch feature map F_main passes through an edge processing module to extract the edge-branch feature map E_out. The embedded representation module is a single convolution layer.
The edge processing module mainly consists of two convolution modules carrying a spatial attention layer and a channel attention layer, and computes its final result in a manner based on the Taylor finite difference. In the calculation, Φ denotes the convolution module with the spatial attention layer, Conv_s the convolution operation inside the spatial attention, and GConv the gating convolution operation; a is the input feature, the intermediate feature is the output of the first convolution module, and b is the output feature.
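The gating idea in the edge processing module — the main-branch feature deciding which edge responses survive — can be sketched as below; the sigmoid-on-main-feature gate stands in for the learned gating convolution, whose exact form the text does not specify, and `gated_merge` is a hypothetical name:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_merge(edge_feat, main_feat):
    """Gating sketch for the edge branch: the main-branch feature
    produces a per-pixel gate in (0, 1) that selects which edge
    responses to keep."""
    gate = sigmoid(main_feat)
    return edge_feat * gate

edge = np.array([[1.0, 1.0], [1.0, 1.0]])
main = np.array([[10.0, -10.0], [0.0, 0.0]])
merged = gated_merge(edge, main)
```

Strong main-branch responses let the edge feature through almost unchanged, strong negative ones suppress it, and neutral ones pass half of it — the refinement behaviour the gating module is described as providing.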
5) Regarding the final detection module, the model first computes the output feature F_out, refined by the edge features, from the main-branch feature map F_main and the edge-branch feature map E_out. The output feature F_out is then passed through a segmentation-head module to output the target prediction map. At the same time, the edge-branch feature map E_out is also used to detect the target contours, which enhances model performance. The detection results of the model on the IRSTD-1k dataset are shown in FIG. 5; it can be seen that the infrared small targets are detected accurately.

Claims (4)

1. An infrared tiny target detection algorithm based on a local contrast calculation method, characterized by comprising the following steps:
1) The edge computing platform acquires single-frame or batch infrared images by using infrared imaging equipment or remote uploading images, and pre-processes the infrared images;
2) Firstly, a model calculates an edge feature map for an infrared image, and then the infrared image and the edge feature map are respectively put into a main branch and an edge branch of the model; in the main branch, the infrared image characteristic sequentially passes through an encoder part and a decoder part to obtain an image main branch characteristic diagram;
for the encoder part, it comprises a downsampling module, several convolution modules based on the local contrast computing method, and a downsampled global information processing module; the infrared image first passes through the downsampling module to generate infrared image representation features, which are then processed by the convolution modules: the first-layer convolution module downsamples and enhances the representation features using the local contrast computing method to obtain low-level features, and each subsequent convolution module processes the low-level features of the layer above to obtain the low-level features of its own layer; meanwhile the representation features are downsampled and combined with the image features of the convolution modules to obtain the global image features;
for the decoder part, it comprises several decoding modules and a feature fusion module; the first-layer decoding module upsamples the global features with a deconvolution module to obtain high-level features, and each subsequent decoding module processes the high-level features of the layer above to obtain the high-level features of its own layer; the feature fusion module then fuses the low-level features of the same-layer convolution module in the encoder with the current high-level features to obtain refined high-level features; the image main-branch feature map is finally obtained through this series of decoding modules;
3) In the edge branch, the image edge features pass through an embedded representation module and several edge processing modules to obtain the image edge-branch feature map; specifically, the edge feature map is first turned into edge representation features by the embedded representation module; each edge processing module then processes, through a convolution layer, the edge representation features or the refined edge features of the layer above, combines them with the same-layer high-level features from the main branch, and obtains refined edge features via a gating module; the edge-branch feature map is finally output from the refined edge features;
4) The model finally combines the feature images output by the main branch and the edge branch, generates a final infrared tiny target detection image by a detection module, carries out post-processing on the detection image, and outputs the processed single or batch detection image to corresponding display equipment or returns to a remote platform;
in the step 2), the main process of the encoder is as follows: the image feature I first passes through the downsampling module, several convolution modules based on the local contrast computing method, and the downsampled global information processing module to obtain the global image features; specifically:
2-1) the downsampling module downsamples the image through three convolution layers and one max-pooling layer, the output feature F0 being defined as F0 = MaxPool(Conv3(Conv2(Conv1(I)))), where Conv1, Conv2, Conv3 denote the three convolution layers and MaxPool the max-pooling layer; the output feature F0 then undergoes target-feature enhancement and nonlinear transformation in the local-contrast-based convolution modules to obtain features F1, F2 and F3 with less noise and clutter;
2-2) convolution processing based on the local contrast computing method proceeds as follows: for an image feature F with C channels and spatial size H×W, each convolution module applies two passes of a two-layer block consisting of a convolution layer, a batch-normalization layer, a ReLU layer and residual processing, and the later convolution modules additionally contain a channel attention layer CA and a spatial attention layer SA based on the local contrast computing method, with CA(F) = σ(δ(MaxPool(F)) + δ(AvgPool(F))) ⊗ F, where σ is the Sigmoid function, δ a nonlinear mapping, ⊗ element-wise multiplication, and MaxPool and AvgPool the max-pooling and average-pooling layers; for SA, a deformable convolution module DConv with predetermined convolution-kernel parameters and a 3×3 kernel performs the local contrast calculation to generate a local contrast attention map M, which is processed by two convolution layers with nonlinear processing and finally a Sigmoid, the local contrast being added to enhance the target feature: D = DConv(F), LC = min_k D_k, M = σ(Conv(δ(Conv(LC)))), SA(F) = F + F ⊗ M, where D is the intermediate result, D_k the feature in the k-th channel of D, and min_k the minimum over the channels at each spatial position, yielding the local-contrast-enhanced feature; the local contrast with preset convolution-kernel parameters is calculated as follows: a score feature S is obtained from the original feature F by computing the channel-wise average and maximum of the input, combining them with a weighted sum whose coefficients are learnable parameters, and processing the result with the deformable convolution module, S = DConv_s(w1 · Avg(F) + w2 · Max(F)), where DConv_s has 4 output channels with 3×3 kernels, one direction being set in each output channel with diagonal weights 1 and all other weights 0, ensuring that the local contrast is computed between the centre position and the 4 diagonal directions of the 3×3 two-dimensional neighbourhood; the convolution operations for the score feature and the contrast are carried out at the same positions, so the offsets and modulation amounts are learned from S;
2-3) regarding the global information processing module: the feature F3 is first downsampled to obtain the feature Fd, and the global feature Ft is then obtained through the global information processing module, which consists of a multi-head self-attention module and a multi-layer perceptron module; suppose the module is built from L layers, then for the current layer l, with 1 ≤ l ≤ L, the vector sequence z_l of layer l is z'_l = MSA(LN(z_{l-1})) + z_{l-1}, z_l = MLP(LN(z'_l)) + z'_l, where z'_l is an intermediate result, z_{l-1} the vector sequence of layer l-1, LN the layer-normalization module, MSA the multi-head attention module and MLP the multi-layer perceptron module; the multi-head attention module is computed as head_i = Softmax(Q_i K_i^T / sqrt(d)) V_i with Q_i = z W_i^Q, K_i = z W_i^K, V_i = z W_i^V and MSA(z) = Concat(head_1, …, head_h) W^O, where z is the input feature, h the number of attention heads, W_i^Q, W_i^K and W_i^V the feature transformation matrices, head_i the output of the i-th attention head, d the dimension length after feature transformation and W^O the output transformation matrix; the multi-layer perceptron module is computed as MLP(z) = GELU(z W_1) W_2, where W_1 is the transformation matrix of the hidden layer, W_2 that of the output layer and GELU the Gaussian-error linear unit;
2-4) finally the encoder connects the final feature F3 of the convolution modules with the feature Ft of the global information processing module and passes the result through a convolution linear transformation module to obtain the final global feature Fg = ConvLinear(Concat(F3, Ft)), where Concat is the feature connection operation and ConvLinear the convolution linear transformation module, specifically two convolution layers combined with a batch-normalization layer.
2. The infrared tiny target detection algorithm of the local contrast computing method according to claim 1, characterized in that in the step 2), for the decoder part, the model first processes the global feature Fg with a deconvolution layer, doubling the feature size to obtain the high-level feature H1; the feature fusion module fuses a high-level feature H with the low-level feature L of the same size, and a convolution module finally produces the refined feature: H1 = Deconv(Fg), H_ref = Conv(FFM(H, L)), where Deconv is the deconvolution layer, FFM the feature fusion module, and H and L the high-level and low-level features; in the feature fusion module, a bottleneck module composed of two convolution modules with 1×1 kernels filters high-frequency noise, and horizontal and vertical features are obtained by computing horizontal and vertical attention, mainly with deformable convolution modules whose kernels are 1×3 and 3×1 respectively; likewise, the low-level and high-level features of each layer are processed by the feature fusion module, and the resulting feature F_main is the main-branch feature map.
3. The infrared tiny target detection algorithm based on the local contrast computing method according to claim 1, wherein in step 3), the image edge feature is first processed by the embedded representation processing module and then, together with the high-level feature from the main branch encoder, passes through an edge processing module to extract the edge feature E_1. Likewise, the edge feature E_1 and the next high-level feature pass through an edge processing module to extract the edge feature E_2, and the edge feature E_2 and the main branch feature map F_m pass through an edge processing module to extract the edge branch feature map E_m; the embedded representation processing module is a single convolution layer;
the edge processing module mainly comprises a convolution processing module with two spatial attention layers and a channel attention layer, and the final result is calculated through a Taylor-expansion-based finite difference scheme. In the calculation, the spatial attention layers and their internal convolution operations, together with a gated convolution operation, transform the input feature a through the first convolution processing module into an intermediate feature, from which the output feature b is obtained.
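The Taylor-based finite difference mentioned above refers to approximating the image gradient with difference quotients derived from the Taylor expansion. A minimal NumPy sketch of first-order central differences (illustrative; the patent's learned edge module is more elaborate):

```python
import numpy as np

def finite_difference_edges(img):
    # First-order central differences: f'(x) ≈ (f(x+1) - f(x-1)) / 2,
    # the discrete Taylor-expansion approximation of the image gradient.
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # horizontal derivative
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # vertical derivative
    return np.hypot(gx, gy)                          # gradient magnitude

# A synthetic image with a single bright "small target" pixel.
img = np.zeros((7, 7))
img[3, 3] = 1.0
mag = finite_difference_edges(img)
print(mag[3, 2], mag[3, 3])  # 0.5 0.0 — the edge fires beside the target, not on it
```

The response peaks around the target rather than at its center, which is exactly why the patent supervises target edges separately from the main segmentation branch.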
4. The method of claim 1, wherein in step 4), regarding the detection module, the model first computes, from the main branch feature map F_m and the edge branch feature map E_m, the output feature refined by the edge features; the output feature finally passes through a segmentation head module to output the target prediction map. At the same time, the model also performs detection of target contours on the edge branch feature map E_m to enhance model performance.
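One common way to refine a main-branch map with an edge-branch map is sigmoid gating. The sketch below is an assumption for illustration only; the patent's exact refinement formula is not reproduced here, and the gating rule and names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def edge_refine(main_feat, edge_feat):
    # Illustrative refinement: the edge map, squashed to (0, 1), re-weights
    # the main features, and the result is added back residually so that
    # regions with no edge response still pass through unchanged.
    gate = sigmoid(edge_feat)
    return main_feat + main_feat * gate

rng = np.random.default_rng(2)
main = rng.standard_normal((4, 4))   # main branch feature map
edge = rng.standard_normal((4, 4))   # edge branch feature map
refined = edge_refine(main, edge)
print(refined.shape)  # (4, 4)
```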
CN202410288616.7A 2024-03-14 2024-03-14 Infrared tiny target detection algorithm based on local contrast computing method Active CN117893561B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410288616.7A CN117893561B (en) 2024-03-14 2024-03-14 Infrared tiny target detection algorithm based on local contrast computing method

Publications (2)

Publication Number Publication Date
CN117893561A true CN117893561A (en) 2024-04-16
CN117893561B CN117893561B (en) 2024-06-07

Family

ID=90643032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410288616.7A Active CN117893561B (en) 2024-03-14 2024-03-14 Infrared tiny target detection algorithm based on local contrast computing method

Country Status (1)

Country Link
CN (1) CN117893561B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112800932A (en) * 2021-01-25 2021-05-14 上海海事大学 Method for detecting obvious ship target in marine background and electronic equipment
CN113343789A (en) * 2021-05-20 2021-09-03 武汉大学 High-resolution remote sensing image land cover classification method based on local detail enhancement and edge constraint
CN114862844A (en) * 2022-06-13 2022-08-05 合肥工业大学 Infrared small target detection method based on feature fusion
CN114998736A (en) * 2022-06-07 2022-09-02 中国人民解放军国防科技大学 Infrared weak and small target detection method and device, computer equipment and storage medium
CN115187856A (en) * 2022-06-10 2022-10-14 电子科技大学 SAR image ship detection method based on human eye vision attention mechanism
CN115527098A (en) * 2022-11-09 2022-12-27 电子科技大学 Infrared small target detection method based on global mean contrast space attention
CN115861380A (en) * 2023-02-16 2023-03-28 深圳市瓴鹰智能科技有限公司 End-to-end unmanned aerial vehicle visual target tracking method and device in foggy low-light scene
CN116129289A (en) * 2023-03-06 2023-05-16 江西理工大学 Attention edge interaction optical remote sensing image saliency target detection method
CN116468980A (en) * 2023-03-31 2023-07-21 中国人民解放军国防科技大学 Infrared small target detection method and device for deep fusion of edge details and deep features
CN116524312A (en) * 2023-04-28 2023-08-01 中国人民解放军国防科技大学 Infrared small target detection method based on attention fusion characteristic pyramid network
CN116645696A (en) * 2023-05-31 2023-08-25 长春理工大学重庆研究院 Contour information guiding feature detection method for multi-mode pedestrian detection
CN116863305A (en) * 2023-07-13 2023-10-10 天津大学 Infrared dim target detection method based on space-time feature fusion network
CN116994000A (en) * 2023-07-28 2023-11-03 五邑大学 Part edge feature extraction method and device, electronic equipment and storage medium
CN117115575A (en) * 2023-09-15 2023-11-24 中国科学院光电技术研究所 Improved RPCA infrared small target detection method based on scale space theory

Non-Patent Citations (12)

* Cited by examiner, † Cited by third party
Title
DI WANG et al.: "An interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection", INFORMATION FUSION, vol. 98, 31 October 2023 (2023-10-31) *
ZHU LIU et al.: "Enhancing Infrared Small Target Detection Robustness with Bi-Level Adversarial Framework", ARXIV, 3 September 2023 (2023-09-03) *
戴一冕: "Infrared small target detection based on low-rank sparse decomposition and attention mechanism", China Doctoral Dissertations Electronic Journals, no. 2, 15 February 2023 (2023-02-15) *
李道纪; 郭海涛; 卢俊; 赵传; 林雨准; 余东行: "Multi-attention-fusion U-shaped network method for land-cover classification of remote sensing images", Acta Geodaetica et Cartographica Sinica, no. 08, 15 August 2020 (2020-08-15) *
林川; 曹以隽: "Contour detection algorithms based on deep learning: a survey", Journal of Guangxi University of Science and Technology, no. 02, 15 April 2019 (2019-04-15) *
王帅; 程洪玮; 莫邵文; 曾剑; 江丹: "Improved local-contrast infrared dim small target detection algorithm based on edge extraction", Digital Technology and Application, no. 08, 15 August 2016 (2016-08-15) *
王新潮: "Research on camouflaged target detection and military camouflage detection algorithms based on deep learning", China Master's Theses Electronic Journals, no. 2, 15 February 2024 (2024-02-15) *
王超: "Research on salient object detection in optical remote sensing images based on two-stream convolutional neural networks", Wanfang Data, 8 October 2022 (2022-10-08) *
王鹤; 辛云宏: "Infrared small target detection algorithm based on dual-tree complex wavelet transform", Laser & Infrared, no. 09, 20 September 2020 (2020-09-20) *
赵杨: "Research on object detection algorithms for autonomous driving scenes based on YOLOv5", China Master's Theses Electronic Journals, no. 02, 15 February 2024 (2024-02-15) *
陈凯; 王永雄: "Saliency detection combining spatial attention and multi-level feature fusion", Journal of Image and Graphics, no. 06, 16 June 2020 (2020-06-16) *
陈琴; 朱磊; 后云龙; 邓慧萍; 吴谨: "Salient object detection based on deep center-surround pyramid structure", Pattern Recognition and Artificial Intelligence, no. 06, 15 June 2020 (2020-06-15) *

Also Published As

Publication number Publication date
CN117893561B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
CN111259850B (en) Pedestrian re-identification method integrating random batch mask and multi-scale representation learning
CN113065558B (en) Lightweight small target detection method combined with attention mechanism
CN111401201B (en) Aerial image multi-scale target detection method based on spatial pyramid attention drive
CN110378381B (en) Object detection method, device and computer storage medium
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
Zhang et al. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application
CN112488210A (en) Three-dimensional point cloud automatic classification method based on graph convolution neural network
Fang et al. Infrared small UAV target detection based on residual image prediction via global and local dilated residual networks
Jiang et al. A semisupervised Siamese network for efficient change detection in heterogeneous remote sensing images
Kim et al. GAN-based synthetic data augmentation for infrared small target detection
CN113591968A (en) Infrared weak and small target detection method based on asymmetric attention feature fusion
Chang et al. AFT: Adaptive fusion transformer for visible and infrared images
CN117237740B (en) SAR image classification method based on CNN and Transformer
Li et al. ConvTransNet: A CNN–transformer network for change detection with multiscale global–local representations
CN115861756A (en) Earth background small target identification method based on cascade combination network
CN116188944A Infrared dim target detection method based on Swin-Transformer and multi-scale feature fusion
CN117391938B (en) Infrared image super-resolution reconstruction method, system, equipment and terminal
Zuo et al. A remote sensing image semantic segmentation method by combining deformable convolution with conditional random fields
CN114842196A (en) Radar radio frequency image target detection method
CN112800932B (en) Method for detecting remarkable ship target in offshore background and electronic equipment
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN117409244A (en) SCKConv multi-scale feature fusion enhanced low-illumination small target detection method
Zhao et al. Deep learning-based laser and infrared composite imaging for armor target identification and segmentation in complex battlefield environments
Albalooshi et al. Deep belief active contours (DBAC) with its application to oil spill segmentation from remotely sensed sea surface imagery
CN116091793A (en) Light field significance detection method based on optical flow fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant