CN113822352A - Infrared dim target detection method based on multi-feature fusion - Google Patents
- Publication number
- CN113822352A (application CN202111078520.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- infrared
- similarity
- image
- gray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an infrared dim small target detection method based on multi-feature fusion. First, exploiting the locally high gray values of infrared dim small targets, the gray contrast between the target and its neighborhood background is used to enhance the real target and suppress part of the complex background. Second, exploiting the fact that the gray distribution of an infrared dim small target approximates a two-dimensional Gaussian, the target is detected by computing the covariance between the distance and the gray difference of edge pixels relative to the central pixel, yielding a first saliency map. Third, exploiting the low similarity between an infrared small target and its neighborhood, the target is detected by computing similarity factors, yielding a second saliency map. Finally, the two saliency maps are fused by element-wise multiplication into a final saliency map, and a simple threshold is computed to segment it into the final detection result. The method effectively suppresses complex clutter in infrared images while improving the accuracy of dim small target detection.
Description
Technical Field
The invention relates to the field of infrared image target detection, and in particular to an infrared dim small target detection method based on multi-feature fusion.
Background
Infrared target detection is one of the core technologies of systems such as long-range early warning and precision guidance, and the detection of infrared dim small targets within it is a classic hard problem. The main difficulties are: 1) the imaging distance is long, so the target not only lacks feature information in the image but is also easily confused with noise; 2) high-intensity clutter edges are easily mis-detected as targets, increasing the false alarm rate; 3) dim small targets tend to be submerged in a complex background, causing serious missed detections. Detecting infrared dim small targets against a complex background has therefore long been a very challenging task.
Infrared dim small target detection has attracted extensive research. Existing methods can be broadly divided into data-driven methods, represented by deep learning, and model-driven methods built on mathematical and physical modeling. Data-driven methods see little use in practical engineering because infrared dim small target image sets are scarce, training is expensive, and real-time performance is poor. Model-driven methods can be further divided into single-frame and multi-frame approaches. Multi-frame methods are of limited effectiveness for infrared dim small targets because an accurate motion model of the target cannot be obtained, so a high-performance single-frame method is the more important goal.
Viewed from the processing strategy, single-frame infrared small target detection methods fall into three categories. The first can be called background-recovery methods: the background is recovered via sparse and low-rank matrix decomposition and then subtracted from the original image to highlight the target. The second is background-suppression methods, which mainly use filtering to suppress a uniform background and clutter; however, these are very sensitive to background edges, which restricts their use in complex scenes such as sky target detection. The third is methods that jointly suppress the background and enhance the target, which mainly exploit the local-contrast sensitivity of human vision and enhance the target by extracting the local contrast between target and background. Most methods detect well in scenes with a stable background, but in complex scenes they cannot effectively suppress high-intensity clutter edges, so the false alarm rate is high. Infrared small target detection against a complex background therefore still has considerable room for development.
Disclosure of Invention
Aiming at the low detection rate and high false alarm rate of infrared dim small targets under a complex background, the invention provides an infrared dim small target detection method based on multi-feature fusion, which detects dim targets in infrared images under a complex background and effectively improves the accuracy and robustness of detection.
The invention adopts the following technical scheme. The infrared dim small target detection method based on multi-feature fusion comprises the following steps: first, based on the locally high gray values of infrared dim targets, the gray contrast between the target and its neighborhood background is used to enhance the real target and suppress part of the complex background; second, exploiting the approximately two-dimensional Gaussian gray distribution of infrared dim targets, the target is detected by computing the covariance between the distance and the gray difference of edge pixels relative to the central pixel, yielding a first saliency map; third, exploiting the low similarity between the target and its neighborhood, the target is detected by computing similarity factors, yielding a second saliency map; finally, the two saliency maps are fused by element-wise multiplication into a final saliency map, and a threshold is computed to segment it into the final detection result.
In the above method, enhancing the real target with the gray contrast between target and neighborhood background while suppressing part of the complex background specifically comprises: converting the infrared image to be detected, I_in, into a grayscale image I; constructing a sliding window centered on a pixel s of I, with central unit S_0 and surrounding units S_i forming the local background region; taking the several units with the largest gray means in the local background region and computing their mean gray value μ_max; comparing the central unit's mean gray value μ_0 with μ_max: if μ_0 > μ_max, the central unit's pixels are enhanced and each gray value is set to I(i,j) × (μ_0/μ_max)², where I(i,j) is the pixel gray value at position (i,j) in the central unit; otherwise the unit is suppressed and its gray values are set to 0; the sliding window is moved until all pixels of I have undergone this computation, producing the background-suppression image T.
Detecting the target by computing the covariance between the distance and the gray difference of edge pixels relative to the central pixel, and thereby obtaining the first saliency map, specifically comprises: constructing a sliding window centered on a pixel t of the background-suppression image T; if the pixel values inside the window are not all 0, computing the distance and gray difference from each pixel in the window to the central pixel, and from these the covariance coefficient of the whole window; moving the window until all pixels of T have undergone this computation, producing the covariance saliency map COV composed of the per-window covariance coefficients.
Detecting the target by computing similarity factors, exploiting the low similarity between the infrared dim target and its neighborhood, specifically comprises: constructing a sliding window centered on a pixel p of the background-suppression image T, with central unit SM_0 and S surrounding background units SM_i forming the local background region; computing the similarity factor SF_i between each background unit SM_i and the central unit SM_0; moving the window until all pixels of T have undergone this computation, producing S similarity matrices, the i-th composed of the similarity factors between the central unit and the i-th background unit; taking the minimum min(SF_i) of the elements at each position across the S matrices as the element at the corresponding position of the similarity matrix SF, where min(SF_i) and max(SF_i) denote the minimum and maximum of the co-located elements across the S matrices; the resulting matrix SF is the similarity saliency map SF.
Performing element-wise multiplication of the first and second saliency maps, fusing the multiple characteristics of the infrared dim target into a final saliency map, and segmenting it with a computed threshold to obtain the final detection result specifically comprises: multiplying the covariance saliency map and the similarity saliency map element-wise to obtain the final detection saliency map SM = COV ⊙ SF; computing the segmentation threshold Th and thresholding SM to obtain the final detection result map I_out.
In general, the technical scheme of the invention makes full use of the local prior information of infrared dim small targets and has the following technical characteristics and beneficial effects:
(1) The shape of an infrared small target is isotropic while background edges are anisotropic; this difference in characteristics is why the window is finely partitioned into cells.
(2) A gray difference exists between the target and its neighborhood background, so this difference is used to enhance the target and suppress the background.
(3) To separate the target, it is not enough to consider the gray contrast between target and background; prior knowledge such as the covariance of the gray and distance differences between the window's central and edge pixels, and the similarity factors, is also used to segment the target and suppress the background.
(4) The saliency map may still contain some clutter or noise, but the target has been significantly enhanced, so the residual false alarms are removed by segmentation with a constant-false-alarm-rate style threshold.
This multi-feature-fusion method for detecting dim small targets against a complex infrared background effectively suppresses complex clutter in the infrared image while improving detection accuracy.
Drawings
FIG. 1 is an overall framework of the present invention;
FIG. 2 is a basic flow diagram of the present invention;
FIG. 3 is the sliding-window layout used for computing the gray contrast in the present invention;
FIG. 4 is the sliding-window layout used for computing the similarity factor in the present invention;
FIG. 5(a) is an infrared original image according to an embodiment of the present invention;
FIG. 5(b) is the final detection saliency map obtained by the proposed algorithm for this embodiment;
FIG. 5(c) is a graph showing the results of the detection according to the embodiment of the present invention.
Detailed Description
The invention will be further elucidated with reference to the drawings and the detailed description:
Referring to FIG. 1 and FIG. 2, the method for detecting an infrared dim small target under a complex background in this embodiment comprises the following steps:
Step 1: input the infrared image to be detected, I_in, of size H × W;
Step 2: convert the image I_in into a grayscale image I;
Step 3.1: construct a 19 × 19 sliding window centered on a pixel s of the grayscale image I and divide it into 9 units: the central unit S_0 is 5 × 5; the background units S1, S3, S6, S8 are 7 × 7; S2 and S7 are 5 × 7; and S4 and S5 are 7 × 5, as shown in FIG. 3;
Step 3.2: compute the average brightness of every background unit, i.e. the mean of all pixel gray values in the unit:

μ_i = (1/N) Σ_j H_ij

where H_ij is the gray value of the j-th pixel of the i-th background unit, μ_i is the mean gray value of the i-th background unit, and N is the number of pixels in the unit;
Step 3.3: take the 4 units with the largest mean gray values among the 8 background units and compute their mean μ_max:

μ_max = (1/4) Σ MAX4(μ_i)

where MAX4(·) denotes the four largest mean gray values and μ_max is their mean;
Step 3.4: compare the central unit's mean gray value μ_0 with μ_max; if μ_0 > μ_max, enhance the central unit's pixels, otherwise suppress them, i.e.:

T(i,j) = I(i,j) × (μ_0/μ_max)²  if μ_0 > μ_max
T(i,j) = 0                      otherwise

where I(i,j) is the pixel gray value at position (i,j) in the central unit;
Step 3.5: move the sliding window from left to right and top to bottom with step length 5 until all pixels of the grayscale image I have been processed by steps 3.1–3.4, obtaining the background-suppression image T;
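Steps 3.1–3.5 can be sketched as follows, assuming a NumPy implementation. The function name `background_suppress` and the 3 × 3 grid split at row/column indices 7 and 12 (which yields the 5 × 5 central unit and the 7-pixel-wide border cells of step 3.1) are illustrative choices, not part of the patent text.

```python
import numpy as np

def background_suppress(img, step=5):
    """Gray-contrast background suppression (steps 3.1-3.5), sketched.

    img: 2-D float array (grayscale image I). Returns the suppression image T.
    """
    H, W = img.shape
    T = np.zeros_like(img, dtype=float)
    edges = (0, 7, 12, 19)  # 19x19 window split into a 3x3 grid of cells
    for r in range(0, H - 19 + 1, step):
        for c in range(0, W - 19 + 1, step):
            patch = img[r:r + 19, c:c + 19]
            # mean gray value of each of the 9 cells, row-major
            means = [patch[edges[i]:edges[i + 1], edges[j]:edges[j + 1]].mean()
                     for i in range(3) for j in range(3)]
            mu0 = means[4]                        # central 5x5 unit S_0
            bg = means[:4] + means[5:]            # 8 surrounding background units
            mu_max = np.mean(sorted(bg)[-4:])     # mean of the 4 largest cell means
            center = patch[7:12, 7:12]
            if mu0 > mu_max > 0:
                # enhance: I(i,j) * (mu0 / mu_max)^2
                T[r + 7:r + 12, c + 7:c + 12] = center * (mu0 / mu_max) ** 2
            # else: central unit stays 0 (suppressed)
    return T
```

Because the window moves with step 5 (the size of the central unit), every pixel's central unit is evaluated exactly once when H and W are compatible with the stride.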
Step 4.1: construct a 5 × 5 sliding window centered on a pixel t of the background-suppression image T; if the pixel values in the window are not all 0, compute the distance and the gray difference from each pixel in the window to the central pixel, and from them the covariance coefficient of the whole window, which is taken as the value of the window's central pixel. The mean distance and mean gray difference between the window's pixels and its central pixel are:

Dis_mean = (1/N) Σ_(i,j) Dis_ij,  GV_mean = (1/N) Σ_(i,j) (I_ij − I_3,3)

where i and j are the x- and y-direction coordinates of a pixel in the window, the central coordinate is taken as 3, I_ij is the gray value of the pixel at (i,j), the central gray value is taken as I_3,3, and N = 25 is the number of pixels in the window;
Step 4.2: the covariance coefficient of the sliding window is:

Cov(Dis, GV) = (1/(SL × SL)) Σ_(i,j) (Dis_ij − Dis_mean)(GV_ij − GV_mean)

where Cov(Dis, GV) is the covariance coefficient of the window, GV_ij is the gray difference between the window's central point and the pixel at (i,j), Dis_ij is the distance between the central point and that pixel, and SL = 5 is the side length of the sliding window;
Step 4.3: move the sliding window from left to right and top to bottom with step length 1 until all pixels of T have been processed by steps 4.1–4.2, obtaining the covariance saliency map COV composed of the per-window covariance coefficients;
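Steps 4.1–4.3 can be sketched as below. The exact distance formula appears as an image in the source, so this sketch assumes Euclidean distance to the window center and the sample covariance over the 25 pixels; the saliency value is taken as the negated covariance so that a center-peaked, Gaussian-like target (gray falling off with distance) scores positive — a sign convention the patent does not state explicitly.

```python
import numpy as np

def covariance_map(T):
    """Covariance saliency map COV (steps 4.1-4.3), sketched.

    For each 5x5 window that is not all zero, the covariance between
    pixel-to-center distance and pixel-to-center gray difference becomes
    the saliency value at the window's central pixel.
    """
    H, W = T.shape
    COV = np.zeros_like(T, dtype=float)
    ii, jj = np.mgrid[0:5, 0:5]
    dis = np.sqrt((ii - 2) ** 2 + (jj - 2) ** 2)  # distance to window center (assumed Euclidean)
    for r in range(H - 4):
        for c in range(W - 4):
            win = T[r:r + 5, c:c + 5]
            if not win.any():                      # all-zero window: leave saliency 0
                continue
            gv = win - win[2, 2]                   # gray difference relative to the center
            # negated sample covariance: center-peaked targets give a positive score
            COV[r + 2, c + 2] = -np.mean((dis - dis.mean()) * (gv - gv.mean()))
    return COV
```

For a true dim target, gray decreases with distance from the center, so distance and gray difference are strongly anti-correlated and the negated covariance is large; flat or edge-like regions give values near zero.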
Step 5.1: construct a 25 × 25 sliding window centered on a pixel p of the background-suppression image T and divide it into 9 units: the central unit is called SM_0 and the 8 surrounding units SM_i form the local background region, as in FIG. 4;
Step 5.2: compute the similarity factor SF_i between each of the 8 background units and the central unit; in the similarity-factor formula, T denotes the background-suppression image, (s, t) a pixel coordinate, and T(s, t) the gray value of that pixel;
Step 5.3: move the sliding window from left to right and top to bottom with step length 1 until all pixels of T have been processed by step 5.2, obtaining 8 similarity matrices, the i-th (i = 1, 2, …, 8) composed of the similarity factors between the central unit and the i-th background unit;
Step 5.4: take the minimum MIN(SF_i) of the elements at each position across the 8 similarity matrices as the element at the corresponding position of the similarity matrix SF; the resulting matrix SF is the similarity saliency map;
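Steps 5.1–5.4 can be sketched as below. The closed-form similarity factor of step 5.2 is not reproduced in the source, so this sketch assumes a simple gray-mean dissimilarity SF_i = |μ(SM_0) − μ(SM_i)| and an assumed 3 × 3 split of the 25 × 25 window with a 5 × 5 central unit; taking the minimum over the 8 background units then means only pixels that differ from *every* neighborhood direction score high, while a clutter edge resembling at least one neighbor scores low.

```python
import numpy as np

def similarity_map(T, step=1):
    """Similarity saliency map SF (steps 5.1-5.4), sketched.

    Assumption: SF_i is the absolute difference between the mean gray of the
    central unit SM_0 and of background unit SM_i (the patent's exact formula
    is not recoverable from the source). The per-pixel value is the minimum
    over the 8 background units.
    """
    H, W = T.shape
    SF = np.zeros_like(T, dtype=float)
    edges = (0, 10, 15, 25)  # assumed split: central 5x5 unit, surrounding units
    for r in range(0, H - 25 + 1, step):
        for c in range(0, W - 25 + 1, step):
            patch = T[r:r + 25, c:c + 25]
            means = [patch[edges[i]:edges[i + 1], edges[j]:edges[j + 1]].mean()
                     for i in range(3) for j in range(3)]
            mu0 = means[4]                                  # central unit SM_0
            factors = [abs(mu0 - m) for k, m in enumerate(means) if k != 4]
            SF[r + 12, c + 12] = min(factors)  # dissimilar to all 8 -> large value
    return SF
```

A point target is dissimilar from all 8 surrounding units, so the minimum stays large; an extended edge shares gray statistics with at least one unit along its direction, driving the minimum (and thus the false alarm) toward zero.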
Step 6: multiply the covariance saliency map COV and the similarity saliency map SF element-wise to obtain the final detection saliency map:
SM=COV⊙SF
Step 7: compute the segmentation threshold Th and threshold the saliency map SM to obtain the final detection result map I_out:
Th=μ+λ×σ
where μ and σ are the mean and standard deviation of the final saliency map SM, and λ is a fixed parameter, here set to 4.
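Steps 6–7 follow directly from SM = COV ⊙ SF and Th = μ + λ × σ; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def fuse_and_segment(COV, SF, lam=4.0):
    """Steps 6-7: fuse the two saliency maps by element-wise (Hadamard)
    product and segment with the adaptive threshold Th = mu + lambda * sigma."""
    SM = COV * SF                      # SM = COV (.) SF, element-wise product
    th = SM.mean() + lam * SM.std()    # Th = mu + lambda * sigma, lambda = 4
    return (SM > th).astype(np.uint8), th
```

Because a genuine target is salient in *both* maps, the product amplifies it while residual clutter that survives only one map is driven toward zero before thresholding.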
Claims (5)
1. An infrared dim small target detection method based on multi-feature fusion, characterized by comprising the following steps: first, based on the locally high gray values of infrared dim targets, using the gray contrast between the target and its neighborhood background to enhance the real target and suppress part of the complex background; second, exploiting the approximately two-dimensional Gaussian gray distribution of infrared dim targets, detecting the target by computing the covariance between the distance and the gray difference of edge pixels relative to the central pixel, to obtain a first saliency map; third, exploiting the low similarity between the target and its neighborhood, detecting the target by computing similarity factors, to obtain a second saliency map; finally, multiplying the first and second saliency maps element-wise to fuse the multiple characteristics of the infrared dim target into a final saliency map, and computing a threshold to segment the final saliency map into the final detection result.
2. The infrared dim small target detection method based on multi-feature fusion according to claim 1, characterized in that enhancing the real target with the gray contrast between target and neighborhood background while suppressing part of the complex background specifically comprises: converting the infrared image to be detected, I_in, into a grayscale image I; constructing a sliding window centered on a pixel s of I, with central unit S_0 and surrounding units S_i forming the local background region; taking the several units with the largest gray means in the local background region and computing their mean gray value μ_max; comparing the central unit's mean gray value μ_0 with μ_max: if μ_0 > μ_max, enhancing the central unit's pixels by setting each gray value to I(i,j) × (μ_0/μ_max)², where I(i,j) is the pixel gray value at position (i,j) in the central unit, otherwise suppressing the unit by setting its gray values to 0; and moving the sliding window until all pixels of I have undergone this computation, to obtain a background-suppression image T.
3. The infrared dim small target detection method based on multi-feature fusion according to claim 2, characterized in that detecting the target by computing the covariance between the distance and the gray difference of edge pixels relative to the central pixel, to obtain the first saliency map, specifically comprises: constructing a sliding window centered on a pixel t of the background-suppression image T; if the pixel values inside the window are not all 0, computing the distance and gray difference from each pixel in the window to the central pixel, and from these the covariance coefficient of the whole window; and moving the window until all pixels of T have undergone this computation, to obtain the covariance saliency map COV composed of the per-window covariance coefficients.
4. The infrared dim small target detection method based on multi-feature fusion according to claim 3, characterized in that detecting the target by computing similarity factors, exploiting the low similarity between the infrared small target and its neighborhood, specifically comprises: constructing a sliding window centered on a pixel p of the background-suppression image T, with central unit SM_0 and S surrounding background units SM_i forming the local background region; computing the similarity factor SF_i between each background unit SM_i and the central unit SM_0; moving the window until all pixels of T have undergone this computation, to obtain S similarity matrices, the i-th composed of the similarity factors between the central unit and the i-th background unit; and taking the minimum min(SF_i) of the elements at each position across the S matrices as the element at the corresponding position of the similarity matrix SF, where min(SF_i) and max(SF_i) denote the minimum and maximum of the co-located elements across the S matrices; the resulting matrix SF is the similarity saliency map SF.
5. The infrared dim small target detection method based on multi-feature fusion according to claim 4, characterized in that multiplying the first and second saliency maps element-wise, fusing the multiple characteristics of the infrared dim target into a final saliency map, and segmenting it with a computed threshold to obtain the final detection result specifically comprises: multiplying the covariance saliency map and the similarity saliency map element-wise to obtain the final detection saliency map SM = COV ⊙ SF; and computing the segmentation threshold Th and thresholding SM to obtain the final detection result map I_out.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111078520.0A CN113822352B (en) | 2021-09-15 | 2021-09-15 | Infrared dim target detection method based on multi-feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113822352A true CN113822352A (en) | 2021-12-21 |
CN113822352B CN113822352B (en) | 2024-05-17 |
Family
ID=78922528
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111078520.0A Active CN113822352B (en) | 2021-09-15 | 2021-09-15 | Infrared dim target detection method based on multi-feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113822352B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102324021A (en) * | 2011-09-05 | 2012-01-18 | 电子科技大学 | Infrared dim-small target detection method based on shear wave conversion |
CN107563370A (en) * | 2017-07-07 | 2018-01-09 | 西北工业大学 | Visual attention mechanism-based marine infrared target detection method |
CN108062523A (en) * | 2017-12-13 | 2018-05-22 | 苏州长风航空电子有限公司 | A kind of infrared remote small target detecting method |
CN108256519A (en) * | 2017-12-13 | 2018-07-06 | 苏州长风航空电子有限公司 | A kind of notable detection method of infrared image based on global and local interaction |
CN108460794A (en) * | 2016-12-12 | 2018-08-28 | 南京理工大学 | A kind of infrared well-marked target detection method of binocular solid and system |
KR20180096101A (en) * | 2017-02-20 | 2018-08-29 | 엘아이지넥스원 주식회사 | Apparatus and Method for Intelligent Infrared Image Fusion |
CN109272489A (en) * | 2018-08-21 | 2019-01-25 | 西安电子科技大学 | Inhibit the method for detecting infrared puniness target with multiple dimensioned local entropy based on background |
WO2019144581A1 (en) * | 2018-01-29 | 2019-08-01 | 江苏宇特光电科技股份有限公司 | Smart infrared image scene enhancement method |
CN110706208A (en) * | 2019-09-13 | 2020-01-17 | 东南大学 | Infrared dim target detection method based on tensor mean square minimum error |
CN111353496A (en) * | 2018-12-20 | 2020-06-30 | 中国科学院沈阳自动化研究所 | Real-time detection method for infrared small and weak target |
CN111899200A (en) * | 2020-08-10 | 2020-11-06 | 国科天成(北京)科技有限公司 | Infrared image enhancement method based on 3D filtering |
CN112288668A (en) * | 2020-09-22 | 2021-01-29 | 西北工业大学 | Infrared and visible light image fusion method based on depth unsupervised dense convolution network |
CN112418090A (en) * | 2020-11-23 | 2021-02-26 | 中国科学院西安光学精密机械研究所 | Real-time detection method for infrared small and weak target under sky background |
CN113111878A (en) * | 2021-04-30 | 2021-07-13 | 中北大学 | Infrared weak and small target detection method under complex background |
- 2021-09-15: application CN202111078520.0A granted as CN113822352B (status: active)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102324021A (en) * | 2011-09-05 | 2012-01-18 | 电子科技大学 | Infrared dim-small target detection method based on shear wave conversion |
CN108460794A (en) * | 2016-12-12 | 2018-08-28 | 南京理工大学 | A kind of infrared well-marked target detection method of binocular solid and system |
KR20180096101A (en) * | 2017-02-20 | 2018-08-29 | 엘아이지넥스원 주식회사 | Apparatus and Method for Intelligent Infrared Image Fusion |
CN107563370A (en) * | 2017-07-07 | 2018-01-09 | 西北工业大学 | Visual attention mechanism-based marine infrared target detection method |
CN108062523A (en) * | 2017-12-13 | 2018-05-22 | 苏州长风航空电子有限公司 | A kind of infrared remote small target detecting method |
CN108256519A (en) * | 2017-12-13 | 2018-07-06 | 苏州长风航空电子有限公司 | A kind of notable detection method of infrared image based on global and local interaction |
WO2019144581A1 (en) * | 2018-01-29 | 2019-08-01 | 江苏宇特光电科技股份有限公司 | Smart infrared image scene enhancement method |
CN109272489A (en) * | 2018-08-21 | 2019-01-25 | 西安电子科技大学 | Infrared dim and small target detection method based on background suppression and multi-scale local entropy |
CN111353496A (en) * | 2018-12-20 | 2020-06-30 | 中国科学院沈阳自动化研究所 | Real-time detection method for infrared small and weak target |
CN110706208A (en) * | 2019-09-13 | 2020-01-17 | 东南大学 | Infrared dim target detection method based on tensor mean square minimum error |
CN111899200A (en) * | 2020-08-10 | 2020-11-06 | 国科天成(北京)科技有限公司 | Infrared image enhancement method based on 3D filtering |
CN112288668A (en) * | 2020-09-22 | 2021-01-29 | 西北工业大学 | Infrared and visible light image fusion method based on depth unsupervised dense convolution network |
CN112418090A (en) * | 2020-11-23 | 2021-02-26 | 中国科学院西安光学精密机械研究所 | Real-time detection method for infrared small and weak target under sky background |
CN113111878A (en) * | 2021-04-30 | 2021-07-13 | 中北大学 | Infrared weak and small target detection method under complex background |
Non-Patent Citations (1)
Title |
---|
Liu Kun; Liu Weidong: "Infrared dim and small target detection algorithm based on weighted fusion features and Otsu segmentation", Computer Engineering, no. 07 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114549642A (en) * | 2022-02-10 | 2022-05-27 | 中国科学院上海技术物理研究所 | Low-contrast infrared weak and small target detection method |
CN114549642B (en) * | 2022-02-10 | 2024-05-10 | 中国科学院上海技术物理研究所 | Low-contrast infrared dim target detection method |
CN115908807A (en) * | 2022-11-24 | 2023-04-04 | 中国科学院国家空间科学中心 | Method, system, computer equipment and medium for quickly detecting weak and small targets |
CN115797872A (en) * | 2023-01-31 | 2023-03-14 | 捷易(天津)包装制品有限公司 | Machine vision-based packaging defect identification method, system, equipment and medium |
CN116363135A (en) * | 2023-06-01 | 2023-06-30 | 南京信息工程大学 | Infrared target detection method, device, medium and equipment based on Gaussian similarity |
CN116363135B (en) * | 2023-06-01 | 2023-09-12 | 南京信息工程大学 | Infrared target detection method, device, medium and equipment based on Gaussian similarity |
Also Published As
Publication number | Publication date |
---|---|
CN113822352B (en) | 2024-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108615027B (en) | Video crowd counting method based on long short-term memory weighted neural network | |
CN113822352B (en) | Infrared dim target detection method based on multi-feature fusion | |
CN110599537A (en) | Mask R-CNN-based unmanned aerial vehicle image building area calculation method and system | |
CN107633226B (en) | Human body motion tracking feature processing method | |
CN109934224B (en) | Small target detection method based on Markov random field and visual contrast mechanism | |
CN109460764B (en) | Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method | |
CN109685045B (en) | Moving target video tracking method and system | |
CN113111878B (en) | Infrared weak and small target detection method under complex background | |
CN108846844B (en) | Sea surface target detection method based on sea antenna | |
CN105405138B (en) | Waterborne target tracking based on conspicuousness detection | |
CN106204594A (en) | Direction detection method for dispersive moving objects based on video images | |
CN112308883A (en) | Multi-ship fusion tracking method based on visible light and infrared images | |
CN108388901B (en) | Collaborative significant target detection method based on space-semantic channel | |
CN111274964B (en) | Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle | |
US11367206B2 (en) | Edge-guided ranking loss for monocular depth prediction | |
CN110827262A (en) | Weak and small target detection method based on continuous limited frame infrared image | |
CN106056078A (en) | Crowd density estimation method based on multi-feature regression ensemble learning | |
CN116883588A (en) | Method and system for quickly reconstructing three-dimensional point cloud under large scene | |
CN110751670B (en) | Target tracking method based on fusion | |
CN112613565B (en) | Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating | |
Kim et al. | Object Modeling with Color Arrangement for Region‐Based Tracking | |
Schulz et al. | Object-class segmentation using deep convolutional neural networks | |
CN110751671B (en) | Target tracking method based on kernel correlation filtering and motion estimation | |
CN116188943A (en) | Solar radio spectrum burst information detection method and device | |
CN113688747B (en) | Method, system, device and storage medium for detecting personnel target in image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||