CN117576488B - Infrared dim target detection method based on target image reconstruction - Google Patents
- Publication number: CN117576488B (application CN202410064041.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- targets
- image
- module
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06N3/045—Combinations of networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/0475—Generative networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/82—Arrangements for image or video recognition or understanding using neural networks
- G06V2201/07—Target detection
- Y02T10/40—Engine management systems
Abstract
The invention discloses an infrared dim target detection method based on target image reconstruction, and relates to the technical field of image target detection. The method comprises the following steps: convolving the original image to extract edge information of the infrared image highlights, and screening all bright-spot positions in the image by connected-domain analysis to form a target set; constructing a reconstruction map that accommodates all targets in the target set from the bright-spot targets screened into the set, and recording the position serial-number information of each dim target; in the training process of the twin (Siamese) network, calculating the IOU value between the labeling information and each dim small target, and updating the corresponding labeling information according to the position serial-number information of all dim small targets; and inputting the reconstructed image into the trained twin network model for target detection, restoring each detected true target to its position in the original image, and outputting the final result. Through target image reconstruction and the twin network model, the invention can efficiently identify infrared dim small targets in samples where noise leaves true and false targets unbalanced.
Description
Technical Field
The invention relates to the technical field of image target detection, in particular to an infrared dim target detection method based on target image reconstruction.
Background
Infrared detectors work day and night and in all weather, and are widely used in fields such as space surveillance and strategic early warning. Efficient, robust and reliable detection of infrared dim small targets against complex backgrounds is therefore a critical technology. Infrared images are formed at long range, so their resolution is low and they lack detail and texture information; the background of the target is complex and can easily submerge the target. Because the target to be detected is far from the infrared sensor, it occupies only a small fraction of the imaging area, salient features such as shape, structure and texture are difficult to extract, the signal intensity is weak, and isolated noise points resemble point targets. The resulting strong noise interference causes a high false-alarm rate that cannot meet early-warning requirements. With the development of AI algorithms, target detection capability has greatly improved, but detection is constrained by computing power and algorithm design: a high-resolution original image must be compressed before it can be fed to an AI detection algorithm, which significantly reduces the detection rate of dim targets.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide an infrared dim target detection method based on target image reconstruction.
The aim of the invention is realized by the following technical scheme:
an infrared dim target detection method based on target image reconstruction comprises the following steps:
step 1: convoluting the original image to extract the edge information of the infrared image highlight, and screening all the bright spot positions in the image in a connected domain mode to form a target set;
step 2: reconstructing a reconstruction map for accommodating all targets in the target set according to the bright spot targets screened in the target set, and recording the position sequence number information of each weak target;
step 3: in the twin network training process, calculating the labeling information and the IOU value of the weak and small targets, setting an IOU threshold value, marking the targets with the IOU values larger than the threshold value as true targets, otherwise marking the targets as false targets, and updating the corresponding labeling information according to the position sequence number information of all the weak and small targets;
step 4: and inputting the reconstructed image into the trained twin network model for target detection, restoring the detected true target to the position in the original image, and outputting a final result.
Further, the step 1 specifically includes:
s11: convolving the original image I, and extracting Edge information Edge of a target in the image by the following modes:
wherein,is a convolution kernel; then generating an Edge map Edge (x, y), wherein x, y represents spatial coordinates in the image;
s12: performing binarization operation on the Edge graph Edge (x, y) to obtain a binarization graph Binary (x, y):
;
wherein,thresha threshold value for an image pixel;
s13: in a Binary image Binary (x, y), a connected domain calculation is performed on a part with a pixel value of 1, and the result is recorded as an EH image;
s14: recording the position and the length and width information of the external rectangle of each connected domain in the EH graph result, setting a threshold value, screening out the effective bright spot target position, and forming a target set:
wherein,w i is the firstiThe length of the circumscribed matrix of the individual targets,h i is the firstiThe width of the circumscribed matrix of the individual targets,thresh area is an aspect ratio threshold value,thresh wh As the threshold value of the connected domain area, 1 indicates that the i-th connected domain is an effective target, and 0 indicates that the i-th connected domain is an ineffective target.
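The candidate-extraction stage of step 1 can be sketched in plain NumPy. The Laplacian kernel, the pixel threshold and the screening thresholds below are illustrative assumptions, since the patent does not reproduce its kernel coefficients or threshold values; only the structure (convolution, binarization, connected-domain screening by area and aspect ratio) follows the steps above.

```python
import numpy as np

# Assumed edge kernel; the patent's actual kernel coefficients are not given.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def convolve2d(img, kernel):
    # 'same'-size correlation with zero padding (the Laplacian is symmetric,
    # so correlation equals convolution here)
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return np.abs(out)

def connected_components(binary):
    # 4-connected flood-fill labeling of nonzero pixels
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue
        count += 1
        stack = [(y, x)]
        labels[y, x] = count
        while stack:
            cy, cx = stack.pop()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    stack.append((ny, nx))
    return labels, count

def candidate_targets(image, thresh=50.0, thresh_area=400, thresh_wh=4.0):
    # s11-s14: convolve, binarize, label connected domains,
    # then keep small, roughly square bright-spot candidates
    edge = convolve2d(image.astype(float), LAPLACIAN)
    binary = (edge > thresh).astype(np.uint8)
    labels, n = connected_components(binary)
    targets = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        w = xs.max() - xs.min() + 1
        h = ys.max() - ys.min() + 1
        if w * h < thresh_area and max(w / h, h / w) < thresh_wh:
            targets.append((xs.min(), ys.min(), w, h))
    return targets
```

For a synthetic frame containing a single 2×2 bright spot, the sketch returns one candidate whose circumscribed rectangle covers the spot and its edge ring.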
Further, the step 2 specifically includes:
s21: cropping a fixed-size ROI (region of interest) around each target position in the original image according to the screened target set, the ROI containing the target and part of its surrounding background;
s22: constructing a reconstruction map whose background pixel value is 0 and which is large enough to accommodate all targets in the target set;
s23: copying the cropped ROI regions, scattered, into the reconstruction map, and recording the serial number and corresponding position information of every target.
Further, the labeling information label of the dim small targets in step 3 divides the detected targets into true targets and false targets:

label_i = 0 if IOU_i > thresh_iou, and label_i = 1 otherwise;

where thresh_iou is the detection-target IOU threshold, 0 denotes the true-target class, and 1 denotes the false-target class.
Further, the twin network comprises a target image reconstruction module, and the target image reconstruction module comprises a convolution layer and a ReLU activation function layer.
Further, the twin network performs feature fusion on the reconstructed image through a staggered convolution module and outputs three feature maps of different sizes.
Further, the staggered convolution module comprises a CBL module, a CBS module, a first SFB module, a second SFB module and a DWB module, wherein the CBL module comprises a convolution layer, a normalization layer and a LeakyReLU activation function layer, the CBS module comprises a convolution layer, a normalization layer and a SiLU activation function layer, the first SFB module performs a ChannelShuffle operation after three CBL modules are connected with one CBL module, the second SFB module performs a ChannelShuffle operation after three CBL modules are connected with one Slice module, and the DWB module comprises two connected CBL modules.
The beneficial effects of the invention are as follows:
according to the method for detecting the infrared weak and small target based on target image reconstruction, the input original image is reconstructed through the target image reconstruction mode, so that the problem that the infrared weak and small target is lost due to compression in image detection is avoided. The problem of infrared weak and small target detection is converted into the problem of two classification of true and false bright spots through image reconstruction, and after the image reconstruction, the target and the background information are reserved to the maximum extent, and the target and the background information which are favorable for judgment are reserved. Finally, the infrared weak and small targets are efficiently identified in samples with unbalanced real and false targets caused by noise by a training mode of a twin network, so that the method has stronger robustness.
Drawings
FIG. 1 is a diagram of a model of a twin network.
Detailed Description
The technical solutions of the present invention will be clearly and completely described below with reference to the embodiments, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by a person skilled in the art without any inventive effort, are intended to be within the scope of the present invention, based on the embodiments of the present invention.
Referring to fig. 1, the present invention provides a technical solution:
an infrared dim target detection method based on target image reconstruction comprises the following steps:
step 1: convoluting the original image to extract the edge information of the infrared image highlight, and screening all the bright spot positions in the image in a connected domain mode to form a target set;
step 2: reconstructing a reconstruction map for accommodating all targets in the target set according to the bright spot targets screened in the target set, and recording the position sequence number information of each weak target;
step 3: in the twin network training process, calculating the labeling information and the IOU value of the weak and small targets, and updating the corresponding labeling information according to the position sequence number information of all the weak and small targets;
step 4: and inputting the reconstructed image into the trained twin network model for target detection, restoring the detected true target to the position in the original image, and outputting a final result.
Further, the step 1 specifically includes:
s11: convolving the original image I, and extracting Edge information Edge of a target in the image by the following modes:
wherein,is a convolution kernel; then generating an Edge map Edge (x, y), wherein x, y represents spatial coordinates in the image;
s12: performing binarization operation on the Edge graph Edge (x, y) to obtain a binarization graph Binary (x, y):
;
wherein,thresha threshold value for an image pixel;
s13: in a Binary image Binary (x, y), a connected domain calculation is performed on a part with a pixel value of 1, and the result is recorded as an EH image;
s14: recording the position and the length and width information of the external rectangle of each connected domain in the EH graph result, setting a threshold value, screening out the effective bright spot target position, and forming a target set:
wherein,w i is the firstiThe length of the circumscribed matrix of the individual targets,h i is the firstiThe width of the circumscribed matrix of the individual targets,thresh area is an aspect ratio threshold value,thresh wh As the threshold value of the connected domain area, 1 indicates that the i-th connected domain is an effective target, and 0 indicates that the i-th connected domain is an ineffective target.
Further, the step 2 specifically includes:
s21: cropping a fixed-size ROI (region of interest) around each target position in the original image according to the screened target set, the ROI containing the target and part of its surrounding background; in this embodiment, the size of the ROI region is set to 32×32, and the ROI region is expressed as:
s22: constructing a reconstruction map whose background is 0 and which is large enough to accommodate all targets in the target set;
s23: copying the cropped ROI regions, scattered, into the reconstruction map, and recording the serial number and corresponding position information of every target.
As shown in fig. 1, an original image of size M×N×255 is input to the target image reconstruction module, which performs the method of step 2: it reconstructs the target set extracted from the original image and outputs a reconstructed image containing all targets in the target set.
In this embodiment, a 640×640 reconstruction map is selected to accommodate 20×20 = 400 targets of size 32×32, and the record information of the cropped ROI regions is as follows:
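The ROI cropping and tiling of step 2 can be sketched as follows, using the embodiment's 32×32 ROI and 640×640 reconstruction map (20×20 = 400 slots). The function names and the record layout are assumptions for illustration; the patent only specifies that serial numbers and original positions are recorded. The sketch assumes the input image is at least ROI-sized.

```python
import numpy as np

ROI = 32    # ROI size from the embodiment
GRID = 20   # 20 x 20 = 400 slots in a 640 x 640 reconstruction map

def crop_roi(image, cx, cy, size=ROI):
    # clamp a size x size window centred on (cx, cy) to the image bounds;
    # assumes the image is at least size x size
    h, w = image.shape
    x0 = min(max(cx - size // 2, 0), w - size)
    y0 = min(max(cy - size // 2, 0), h - size)
    return image[y0:y0 + size, x0:x0 + size], (x0, y0)

def build_reconstruction(image, centers):
    # tile cropped ROIs row by row into a zero-background reconstruction map,
    # recording (serial number, original top-left) for mapping detections back
    recon = np.zeros((GRID * ROI, GRID * ROI), dtype=image.dtype)
    records = []
    for idx, (cx, cy) in enumerate(centers[:GRID * GRID]):
        patch, origin = crop_roi(image, cx, cy)
        r, c = divmod(idx, GRID)
        recon[r * ROI:(r + 1) * ROI, c * ROI:(c + 1) * ROI] = patch
        records.append((idx, origin))
    return recon, records
```

A detection in slot idx of the reconstruction map is restored to the original image by adding its in-slot offset to `records[idx][1]`, which is the restoration described in step 4.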
further, the labeling information of the small target in the step 3labelThe detected targets are classified into a true target (label=0) and a false target (label=1):
wherein the method comprises the steps ofthresh iou Is a detection target IOU threshold.
In this embodiment, a given dataset is split 7:2:1 into a training set, a validation set and a test set. In step 3, during network training the IOU value (intersection over union) of each dim small target is calculated; the IOU characterizes the degree of overlap of two regions and is computed as:

IOU = area(A ∩ B) / area(A ∪ B)

where A and B are the two regions being compared. Because a higher IOU value for a dim small target to be detected means a larger probability that the target is a true target, an IOU threshold is set: targets whose IOU value exceeds the threshold are marked as true targets, and the others are marked as false targets.
In this embodiment, the twin network includes a target image reconstruction module, where the target image reconstruction module includes a convolution layer and a ReLU activation function layer.
In this embodiment, the twin network performs feature fusion on the reconstructed image through the staggered convolution module and outputs three feature maps of different sizes: 40×40×255, 20×20×255 and 10×10×255.
The model diagram of the twin network is shown in fig. 1. The staggered convolution module comprises a CBL module, a CBS module, a first SFB module, a second SFB module and a DWB module. The CBL module comprises a convolution layer, a normalization layer and a LeakyReLU activation function layer; the CBS module comprises a convolution layer, a normalization layer and a SiLU activation function layer; the first SFB module performs a ChannelShuffle operation after three CBL modules are connected with one CBL module; the second SFB module performs a ChannelShuffle operation after three CBL modules are connected with one Slice module; and the DWB module comprises two connected CBL modules. The ChannelShuffle operation essentially interleaves the channel features of different groups. The Slice module separates the data channels fed into the module; without changing the number of input and output channels or the feature-map size, the module extracts richer features.
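The two tensor operations named above can be illustrated in NumPy: channel_shuffle interleaves channel groups in the ShuffleNet style, and slice_channels splits the channel dimension without changing the feature-map size. These are sketches of the operations only, not the patent's exact SFB or Slice modules.

```python
import numpy as np

def channel_shuffle(x, groups):
    # x: (C, H, W); interleave channels across groups, ShuffleNet-style:
    # reshape to (groups, C/groups, H, W), swap the first two axes, flatten
    c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

def slice_channels(x, parts=2):
    # split the channel dimension into equal parts; H and W are unchanged,
    # mirroring the Slice module's channel separation
    c = x.shape[0]
    step = c // parts
    return [x[i * step:(i + 1) * step] for i in range(parts)]
```

With 4 channels and 2 groups, channels [0, 1, 2, 3] shuffle to [0, 2, 1, 3], so features from different groups become adjacent before the next convolution.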
According to this method for detecting infrared dim small targets based on target image reconstruction, the input original image is reconstructed by target image reconstruction, which avoids losing infrared dim small targets to compression during image detection. Image reconstruction converts the infrared dim-target detection problem into a binary classification of true and false bright spots, and after reconstruction the target and the background information that aid the judgment are preserved to the maximum extent. Finally, the twin network training scheme efficiently identifies infrared dim small targets in samples where noise leaves true and false targets unbalanced, so the method is highly robust.
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the form disclosed herein, and the description is not to be construed as excluding other embodiments: the invention is capable of use in various other combinations, modifications and environments, and of changes within the scope of the inventive concept described herein, whether guided by the above teachings or by the skill or knowledge of the relevant art. Modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.
Claims (5)
1. An infrared dim target detection method based on target image reconstruction is characterized by comprising the following steps:
step 1: convoluting the original image to extract the edge information of the infrared image highlight, and screening all the bright spot positions in the image in a connected domain mode to form a target set;
step 2: reconstructing a reconstruction map for accommodating all targets in the target set according to the bright spot targets screened in the target set, and recording the position sequence number information of each weak target;
step 3: in the twin network training process, calculating the labeling information and the IOU value of the weak and small targets, setting an IOU threshold value, marking the targets with the IOU values larger than the threshold value as true targets, otherwise marking the targets as false targets, and updating the corresponding labeling information according to the position sequence number information of all the weak and small targets;
step 4: inputting the reconstructed image into the trained twin network model for target detection, restoring the detected true target to the position in the original image, and outputting a final result;
the step 1 specifically includes:
s11: convolving the original image I, and extracting Edge information Edge of a target in the image by the following modes:
wherein,is a convolution kernel; then generating an Edge map Edge (x, y), wherein x, y represents spatial coordinates in the image;
s12: performing binarization operation on the Edge graph Edge (x, y) to obtain a binarization graph Binary (x, y):
;
wherein,thresha threshold value for an image pixel;
s13: in a Binary image Binary (x, y), a connected domain calculation is performed on a part with a pixel value of 1, and the result is recorded as an EH image;
s14: recording the position and the length and width information of the external rectangle of each connected domain in the EH graph result, setting a threshold value, screening out the effective bright spot target position, and forming a target set:
wherein,w i is the firstiThe length of the circumscribed matrix of the individual targets,h i is the firstiThe width of the circumscribed matrix of the individual targets,thresh area is an aspect ratio threshold value,thresh wh 1 represents that the ith connected domain is an effective target, and 0 represents that the ith connected domain is an ineffective target;
the step 2 specifically includes:
s21: cutting a fixed-size ROI (region of interest) of a target position from an original image according to the screened target set, wherein the ROI comprises a target and partial background information of the target;
s22: reconstructing a reconstruction map with a background pixel value of 0 according to the target set so as to accommodate all targets in the target set;
s23: and (3) the cut ROI area is scattered and copied into a reconstruction map, and the serial numbers and the corresponding position information of all targets are recorded.
2. The method for detecting an infrared dim target based on target image reconstruction according to claim 1, wherein the labeling information label of the dim small target in the step 3 divides the detected targets into true targets and false targets:

label_i = 0 if IOU_i > thresh_iou, and label_i = 1 otherwise;

where thresh_iou is the detection-target IOU threshold, 0 denotes the true-target class, and 1 denotes the false-target class.
3. The method for detecting infrared small targets based on target image reconstruction according to claim 1, wherein the twin network comprises a target image reconstruction module, and the target image reconstruction module comprises a convolution layer and a ReLU activation function layer.
4. The method for detecting the infrared dim target based on target image reconstruction according to claim 1, wherein the twin network performs feature fusion on the reconstructed image through a staggered convolution module and outputs three feature maps of different sizes.
5. The method for detecting infrared dim targets based on target image reconstruction according to claim 4, wherein the staggered convolution module comprises a CBL module, a CBS module, a first SFB module, a second SFB module and a DWB module, the CBL module comprises a convolution layer, a normalization layer and a LeakyReLU activation function layer, the CBS module comprises a convolution layer, a normalization layer and a SiLU activation function layer, the first SFB module performs a ChannelShuffle operation after three CBL modules are connected with one CBL module, the second SFB module performs a ChannelShuffle operation after three CBL modules are connected with one Slice module, and the DWB module comprises two connected CBL modules.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410064041.0A CN117576488B (en) | 2024-01-17 | 2024-01-17 | Infrared dim target detection method based on target image reconstruction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410064041.0A CN117576488B (en) | 2024-01-17 | 2024-01-17 | Infrared dim target detection method based on target image reconstruction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117576488A CN117576488A (en) | 2024-02-20 |
CN117576488B true CN117576488B (en) | 2024-04-05 |
Family
ID=89886727
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410064041.0A Active CN117576488B (en) | 2024-01-17 | 2024-01-17 | Infrared dim target detection method based on target image reconstruction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117576488B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117809052B (en) * | 2024-03-01 | 2024-05-14 | 海豚乐智科技(成都)有限责任公司 | Block target detection and feature extraction method, device and storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
2024
- 2024-01-17 CN CN202410064041.0A patent/CN117576488B/en active Active
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123529A (en) * | 2013-04-25 | 2014-10-29 | Ricoh Co., Ltd. | Human hand detection method and system thereof |
CN107133627A (en) * | 2017-04-01 | 2017-09-05 | Shenzhen Huanchuang Technology Co., Ltd. | Infrared light spot center point extracting method and device |
CN109670517A (en) * | 2018-12-24 | 2019-04-23 | Beijing Megvii Technology Co., Ltd. | Object detection method, device, electronic equipment and target detection model |
CN112101434A (en) * | 2020-09-04 | 2020-12-18 | Henan University | Infrared image weak and small target detection method based on improved YOLO v3 |
CN112686842A (en) * | 2020-12-21 | 2021-04-20 | Suzhou Xuangan Information Technology Co., Ltd. | Light spot detection method and device, electronic equipment and readable storage medium |
CN112818822A (en) * | 2021-01-28 | 2021-05-18 | Ultra High Speed Aerodynamics Institute, China Aerodynamics Research and Development Center | Automatic identification method for damaged area of aerospace composite material |
CN114241274A (en) * | 2021-11-30 | 2022-03-25 | University of Electronic Science and Technology of China | Small target detection method based on super-resolution multi-scale feature fusion |
CN114299111A (en) * | 2021-12-21 | 2022-04-08 | China University of Mining and Technology | Infrared dim and small target tracking method based on semi-supervised twin network |
CN114549959A (en) * | 2022-02-28 | 2022-05-27 | Guangzhou Institute of Xidian University | Infrared dim target real-time detection method and system based on target detection model |
CN115147613A (en) * | 2022-05-30 | 2022-10-04 | Tianjin University of Technology | Infrared small target detection method based on multidirectional fusion |
CN115116137A (en) * | 2022-06-29 | 2022-09-27 | Hebei University of Technology | Pedestrian detection method based on lightweight YOLO v5 network model and space-time memory mechanism |
CN115223026A (en) * | 2022-07-28 | 2022-10-21 | Guangzhou Institute of Xidian University | Real-time detection method for lightweight infrared dim targets |
CN115272412A (en) * | 2022-08-02 | 2022-11-01 | Chongqing Institute of Microelectronics Industry Technology, University of Electronic Science and Technology of China | Low, small and slow target detection method and tracking system based on edge calculation |
CN115601818A (en) * | 2022-11-29 | 2023-01-13 | Haitun Lezhi Technology (Chengdu) Co., Ltd. | Lightweight visible light living body detection method and device |
CN116012659A (en) * | 2023-03-23 | 2023-04-25 | Haitun Lezhi Technology (Chengdu) Co., Ltd. | Infrared target detection method and device, electronic equipment and storage medium |
CN116128916A (en) * | 2023-04-13 | 2023-05-16 | National Space Science Center, Chinese Academy of Sciences | Infrared dim target enhancement method based on spatial energy flow contrast |
Non-Patent Citations (4)
Title |
---|
DB-YOLO: A Duplicate Bilateral YOLO Network for Multi-Scale Ship Detection in SAR Images; Haozhen Zhu et al.; Sensors; 2021-12-06; 1-15 *
YOLO-FIRI: Improved YOLOv5 for Infrared Image Object Detection; Shasha Li et al.; IEEE Access; 2021-10-15; Vol. 54, No. 1; 141861-141875 *
Infrared small target tracking based on super-resolution and online-detection DSST; Li Bin et al.; Infrared Technology; 2022-07-20; Vol. 44, No. 7; 659-666 *
Research on lightweight infrared target detection algorithms; Zhang Shang et al.; Radio Engineering; 2023-04-23; 1-10 *
Also Published As
Publication number | Publication date |
---|---|
CN117576488A (en) | 2024-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117576488B (en) | Infrared dim target detection method based on target image reconstruction | |
CN108257158B (en) | Target prediction and tracking method based on recurrent neural network | |
CN107016357B (en) | Video pedestrian detection method based on time domain convolutional neural network | |
US7215798B2 (en) | Method for forgery recognition in fingerprint recognition by using a texture classification of gray scale differential images | |
CN110210475B (en) | License plate character image segmentation method based on non-binarization and edge detection | |
CN114820625B (en) | Automobile top block defect detection method | |
CN114240947B (en) | Construction method and device of sweep image database and computer equipment | |
CN110276295A (en) | Vehicle identification number detection recognition method and equipment | |
CN114445768A (en) | Target identification method and device, electronic equipment and storage medium | |
CN114299383A (en) | Remote sensing image target detection method based on integration of density map and attention mechanism | |
CN117422696A (en) | Belt wear state detection method based on improved YOLOv8-Efficient Net | |
CN116092179A (en) | Improved Yolox fall detection system | |
CN112801037A (en) | Face tampering detection method based on continuous inter-frame difference | |
CN114529462A (en) | Millimeter wave image target detection method and system based on improved YOLO V3-Tiny | |
CN114429577B (en) | Flag detection method, system and equipment based on high confidence labeling strategy | |
CN116052105A (en) | Pavement crack identification classification and area calculation method, system, equipment and terminal | |
CN114066937B (en) | Multi-target tracking method for large-scale remote sensing image | |
CN117475353A (en) | Video-based abnormal smoke identification method and system | |
CN115631197B (en) | Image processing method, device, medium, equipment and system | |
CN107992863B (en) | Multi-resolution grain insect variety visual identification method | |
CN111368625A (en) | Pedestrian target detection method based on cascade optimization | |
CN115100457A (en) | SAR image target detection method combining deep learning and CFAR | |
Forssén et al. | Robust multi-scale extraction of blob features | |
CN110472472B (en) | Airport detection method and device based on SAR remote sensing image | |
CN114925722A (en) | Perimeter security intrusion signal detection method based on generalized S transformation and transfer learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||