CN111161308A - Dual-band fusion target extraction method based on key point matching - Google Patents

Dual-band fusion target extraction method based on key point matching

Info

Publication number
CN111161308A
CN111161308A (application CN201911313247.8A)
Authority
CN
China
Prior art keywords
target
image
current frame
suspected
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911313247.8A
Other languages
Chinese (zh)
Inventor
印剑飞
王兴
杨俊彦
朱婧文
杨波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Aerospace Control Technology Institute
Original Assignee
Shanghai Aerospace Control Technology Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Aerospace Control Technology Institute
Priority to CN201911313247.8A
Publication of CN111161308A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/223 Analysis of motion using block-matching
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a dual-band fusion target extraction method based on key point matching, and belongs to the technical field of dim and small infrared target detection with dual-band fusion. The method comprises the following steps: step one, an infrared system images the target in real time in the medium-wave and short-wave bands; step two, the sizes of the medium-wave and short-wave infrared images of the current and previous frames are adjusted; step three, fusion processing is performed to obtain the current-frame and previous-frame fused images; step four, the fused images are registered to obtain a difference image of the current frame relative to the previous frame; step five, the positions of suspected target points are detected from the variance map, and the position and gray value of each suspected target point are saved; step six, false targets among the suspected target points are removed; step seven, the real target is determined according to the motion characteristics of the real target to be tracked, and the recognition is finished. The invention can effectively distinguish suspected target points from noise in the difference image, improve the detection and identification precision of small infrared targets, and effectively reduce the missed detection rate of suspected targets.

Description

Dual-band fusion target extraction method based on key point matching
Technical Field
The invention belongs to the technical field of dim and small infrared target detection based on dual-band fusion, and relates to a dual-band fusion target extraction method based on key point matching.
Background
As a key technology in infrared imaging detection systems, infrared small target detection has long attracted the attention of scholars at home and abroad. When a target appears on the battlefield, its formation on an infrared imager proceeds from a weak, point-like small target at long range, to a brighter and more stable spot, and finally to a larger extended target. Detecting the target at long range is therefore of decisive significance for gaining the initiative on the battlefield. In the detection stage, a long-range target appears on the imager as a point-like small target, which makes its detection difficult. First, because the target is so far away, its gray level on the imager is very weak and its contrast very small; the brightness of some objects in the background may even exceed that of the target, so the gray-level feature of the target is not distinctive. Second, because the target occupies only a small area, few features such as shape and texture are available. In addition, the background of the target is usually very complicated, so the target is contaminated by a great deal of noise and clutter, which makes the processing of infrared targets more difficult. Moreover, the farther the target, the weaker its brightness, the smaller its area, the less obvious its features, and the greater the difficulty of detection. Therefore, the key to infrared target detection is the detection of small infrared targets.
For the detection of infrared moving targets, the task is to detect, from each frame of an image sequence, the position where a moving target is present, to segment the area it occupies, and to extract it as completely as possible from the background. In research on detecting moving targets against a static background, the difference method is a commonly used moving target detection method, and it is often employed as one of the tools for handling various target tracking problems. By differencing adjacent frames, the strong correlation between adjacent frames of an image sequence is exploited for change detection, and the moving target is extracted from the background. A complex infrared background contains not only the internal noise of the infrared detector but also fluctuating background clutter caused by cloud layers and the like. Background suppression techniques have been proposed in order to efficiently detect weak-signal small targets from such backgrounds. The classical approach is to estimate the parameters of the background from the image sequence and then subtract the background estimate from the input image to obtain a signal-enhanced image; finally, the small target is detected using a threshold or sequential-threshold method. Because of the motion of the infrared seeker, both the background and the target may be moving in infrared imaging, and traditional background estimation and detection methods are affected by this background motion.
In practical applications, for some infrared images and small moving targets, no prior knowledge can be obtained. First, the characteristics of the target cannot be obtained, so the target cannot be detected by a modeling method. Second, limited by the imaging conditions of infrared images and the low signal-to-noise ratio, the small target has no obvious features such as texture or structure, which makes target detection very difficult. Finally, the moving background causes large disturbances to the target. Moreover, medium-wave and short-wave data in practice have complementary characteristics: the short-wave image is rich in detail texture and has clear edges, with imaging characteristics close to those of a visible-light image, but the target is easily occluded and interfered with by the background; the medium-wave image has a clean background and obvious contrast between target and background, but the target edges are blurred and the target shape is unstable.
Disclosure of Invention
The technical problem solved by the invention is as follows: overcoming the defects of the prior art, a dual-band fusion target extraction method based on key point matching is provided, which can effectively distinguish suspected target points from noise in the difference image, improve the detection and identification precision of small infrared targets, and effectively reduce the missed detection rate of suspected targets.
The technical scheme of the invention is as follows:
A dual-band fusion target extraction method based on key point matching comprises the following steps:
step one, an infrared system captures an infrared image of the area where the target is located in real time in the medium-wave band, and captures an infrared image of the target in real time in the short-wave band;
step two, adjusting the medium-wave infrared image of the current frame, the short-wave infrared image of the current frame, the medium-wave infrared image of the previous frame, and the short-wave infrared image of the previous frame to a uniform size;
step three, fusing the medium-wave infrared image and the short-wave infrared image of the current frame to obtain the current-frame fused image; fusing the medium-wave infrared image and the short-wave infrared image of the previous frame to obtain the previous-frame fused image;
step four, registering the current-frame fused image and the previous-frame fused image to obtain a difference image of the current frame relative to the previous frame;
step five, performing local variance calculation on the difference image of the current frame to obtain a variance map; detecting the positions of suspected target points from the variance map; finding the gray value of each suspected target point at the corresponding position in the current-frame fused image; saving the position and gray value of each suspected target point in the difference image of the current frame;
step six, setting a detection frame number threshold N; repeating steps two to four N-1 times to obtain N consecutive difference images; judging the position and gray value of the same suspected target point across the N consecutive frames; removing false targets from the suspected target points according to the judgment result;
step seven, obtaining the continuous motion tracks of the remaining suspected target points from the N consecutive difference images; determining the real target according to the motion characteristics of the real target to be tracked; the recognition is then complete.
In the above dual-band fusion target extraction method based on key point matching, in step one, the medium-wave band is 3-5 μm and the short-wave band is 1.8-2.8 μm.
In the above dual-band fusion target extraction method based on key point matching, in step two, the adjusted image size is 256 pixels high by 256 pixels wide.
In the above dual-band fusion target extraction method based on key point matching, in step four, the specific method for registering the current-frame fused image and the previous-frame fused image to obtain the difference image is as follows:
matching the current-frame fused image with the previous-frame fused image, compensating the apparent offset by estimating the motion parameters of the background, obtaining a stable background after inter-frame motion compensation of the background according to these motion parameters, and thereby suppressing noise and clutter interference.
In the above dual-band fusion target extraction method based on key point matching, in step five, the local variance calculation is performed as follows:
dividing the difference image into non-overlapping 5×5 pixel blocks, calculating the variance of each block, and traversing all pixels of the difference image to obtain the variance map.
In the above dual-band fusion target extraction method based on key point matching, in step six, N is 20.
In the above dual-band fusion target extraction method based on key point matching, in step six, the specific method for judging the position and gray value of the same suspected target point across the N consecutive frames is as follows:
when the displacement of the same suspected target point between two adjacent frames is less than 10 pixels and its gray value changes by less than 10%, the candidate target point is judged to be a suspected point; otherwise, it is a false target.
Compared with the prior art, the invention has the beneficial effects that:
(1) The method first obtains a fused infrared image rich in target information using medium-wave and short-wave infrared image fusion, then registers the fused infrared images of adjacent frames to obtain the corresponding difference image, computes the local variance of the difference image to obtain the corresponding variance map, finds suspected target points from the variance map information to obtain candidate targets in each frame, and finally achieves accurate detection of the small moving target through track continuity identification. In a complex infrared scene, the method of the invention can automatically detect the small moving infrared target quickly and accurately;
(2) By fusing the medium-wave and short-wave data, the method preserves the texture information of the short-wave image and the target-to-background contrast of the medium-wave image; the fused result combines the advantages of both wavebands and is therefore more favorable for target identification and tracking than the original data;
(3) The template matching method based on the sum of squared differences provided by the invention realizes motion parameter estimation, can effectively distinguish suspected target points from noise in the difference image, improves the detection and identification precision of small infrared targets, and effectively reduces the missed detection rate of suspected targets.
Drawings
FIG. 1 is a flow chart of the object extraction according to the present invention.
Detailed Description
The invention is further illustrated by the following examples.
Aimed at the situation in which a small target in a complex infrared environment is easily interfered with by noise and background and no prior knowledge is available, the invention provides a dual-band fusion infrared small target detection method based on key point matching. The method first obtains a fused infrared image rich in target information using medium-wave and short-wave infrared image fusion, then registers the fused infrared images of adjacent frames to obtain the corresponding difference image, computes the local variance of the difference image to obtain the corresponding variance map, finds suspected target points from the variance map information to obtain candidate targets in each frame, and identifies the real dim and small infrared target according to the track continuity of the target. Thus, the invention essentially comprises five main steps: fusion of the medium-wave and short-wave infrared images, registration of the two fused infrared images of adjacent frames, calculation of the local variance of the difference image, detection of suspected target points from the variance map information, and track continuity identification.
As shown in FIG. 1, the dual-band fusion target extraction method includes the following steps:
Step one, an infrared system captures an infrared image of the area where the target is located in real time in the medium-wave band, and captures an infrared image of the target in real time in the short-wave band; the medium-wave band is 3-5 μm and the short-wave band is 1.8-2.8 μm.
Step two, adjusting the medium-wave infrared image of the current frame, the short-wave infrared image of the current frame, the medium-wave infrared image of the previous frame, and the short-wave infrared image of the previous frame to a uniform size; the adjusted image size is 256 pixels high by 256 pixels wide, as in the sketch below.
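Resizing to the common 256×256 size is a one-line operation. The sketch below (Python/OpenCV) uses bilinear interpolation as an assumed choice, since the patent does not name the resampling method.

    import cv2

    def to_common_size(img, size=(256, 256)):
        # Resize to 256 x 256 pixels; the interpolation method is an assumption.
        return cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)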
Step three, fusing the medium-wave infrared image and the short-wave infrared image of the current frame to obtain the current-frame fused image; fusing the medium-wave infrared image and the short-wave infrared image of the previous frame to obtain the previous-frame fused image. Regarding the characteristics of the medium-wave and short-wave data: the short-wave image is rich in detail texture and has clear edges, with imaging characteristics close to those of a visible-light image, but the target is easily occluded and interfered with by the background; the medium-wave image has a clean background and obvious target-to-background contrast, but the target edges are blurred and the target shape is unstable. Fusing the medium-wave and short-wave infrared images means that the texture information of the short-wave image and the target-to-background contrast of the medium-wave image are both preserved in the fusion result, so that the result combines the advantages of both wavebands and is more favorable for target identification than the original data. Since the target in the data is generally weak and its gray-level response in the image is low, information loss should be avoided as far as possible during fusion, so a pixel-level fusion algorithm is adopted. In the invention, an image fusion technique based on guided filtering is used to fuse the medium-wave and short-wave infrared images, as sketched below.
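The patent states that guided-filtering-based fusion is used but gives no concrete formulation, so the following Python (NumPy/OpenCV) sketch is only a minimal pixel-level illustration: per-pixel weights come from a Laplacian-based activity measure and are smoothed with a classic guided filter so that weight transitions follow image structure. The function names, the Laplacian saliency measure, and the parameters r and eps are illustrative assumptions, not values from the patent.

    import cv2
    import numpy as np

    def guided_filter(I, p, r, eps):
        # Classic guided filter: edge-preserving smoothing of p guided by I (both float32).
        mean_I = cv2.boxFilter(I, -1, (r, r))
        mean_p = cv2.boxFilter(p, -1, (r, r))
        cov_Ip = cv2.boxFilter(I * p, -1, (r, r)) - mean_I * mean_p
        var_I = cv2.boxFilter(I * I, -1, (r, r)) - mean_I * mean_I
        a = cov_Ip / (var_I + eps)
        b = mean_p - a * mean_I
        return cv2.boxFilter(a, -1, (r, r)) * I + cv2.boxFilter(b, -1, (r, r))

    def fuse_mw_sw(mw, sw, r=8, eps=1e-3):
        # Pixel-level fusion of medium-wave (mw) and short-wave (sw) images, float32 in [0, 1].
        imgs = [mw, sw]
        # Activity measure: smoothed absolute Laplacian response of each band.
        sal = [cv2.GaussianBlur(np.abs(cv2.Laplacian(x, cv2.CV_32F)), (11, 11), 5) for x in imgs]
        winner = np.argmax(np.stack(sal), axis=0)
        # Winner-take-all weights, refined by guided filtering to avoid block artifacts.
        w = [guided_filter(imgs[k], (winner == k).astype(np.float32), r, eps) for k in range(2)]
        return (w[0] * mw + w[1] * sw) / (w[0] + w[1] + 1e-6)

A call such as fuse_mw_sw(mw.astype(np.float32) / 255.0, sw.astype(np.float32) / 255.0) would return the fused frame used in the following steps.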
Step four, registering the current-frame fused image and the previous-frame fused image to obtain a difference image of the current frame relative to the previous frame. The specific method for registering the current-frame fused image and the previous-frame fused image to obtain the difference image is as follows:
matching the current-frame fused image with the previous-frame fused image, compensating the apparent offset by estimating the motion parameters of the background, obtaining a stable background after inter-frame motion compensation of the background according to these motion parameters, and thereby suppressing noise and clutter interference. A simplified sketch of this registration and differencing step is given below.
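The beneficial-effects section states that motion parameter estimation uses template matching based on the sum of squared differences, but gives no further detail. The following Python sketch therefore assumes a purely translational background model: several blocks of the previous fused frame are matched in the current frame with SSD (cv2.matchTemplate with TM_SQDIFF), the median block shift is taken as the global background motion, the previous frame is warped accordingly, and the absolute difference is formed. The block size, search radius, and median rule are illustrative assumptions.

    import cv2
    import numpy as np

    def estimate_background_shift(prev_img, curr_img, block=32, step=64, search=12):
        # SSD (TM_SQDIFF) template matching of background blocks; returns (dy, dx).
        h, w = prev_img.shape
        shifts = []
        for y in range(search, h - block - search, step):
            for x in range(search, w - block - search, step):
                tpl = prev_img[y:y + block, x:x + block]
                win = curr_img[y - search:y + block + search, x - search:x + block + search]
                res = cv2.matchTemplate(win, tpl, cv2.TM_SQDIFF)
                _, _, min_loc, _ = cv2.minMaxLoc(res)  # best SSD match = minimum
                shifts.append((min_loc[1] - search, min_loc[0] - search))  # (dy, dx)
        shifts = np.array(shifts, dtype=np.float32)
        return float(np.median(shifts[:, 0])), float(np.median(shifts[:, 1]))

    def difference_image(prev_img, curr_img):
        # Warp the previous fused frame onto the current one, then difference.
        prev_f = prev_img.astype(np.float32)
        curr_f = curr_img.astype(np.float32)
        dy, dx = estimate_background_shift(prev_f, curr_f)
        M = np.float32([[1, 0, dx], [0, 1, dy]])
        prev_aligned = cv2.warpAffine(prev_f, M, (curr_f.shape[1], curr_f.shape[0]))
        return cv2.absdiff(curr_f, prev_aligned)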
Step five, performing local variance calculation on the difference image of the current frame to obtain a variance map. Compared with the difference image, the variance map further improves the signal-to-noise ratio of the target and suppresses the background. Based on the variance map information, the variance map is segmented with an adaptive segmentation threshold, and the positions of suspected points are detected from the segmentation result as candidate target points. The positions of suspected target points are detected from the variance map; the gray value of each suspected target point is found at the corresponding position in the current-frame fused image; and the position and gray value of each suspected target point in the difference image of the current frame are saved. The local variance is calculated as follows:
dividing the difference image into non-overlapping 5×5 pixel blocks, calculating the variance of each block, and traversing all pixels of the difference image to obtain the variance map.
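A minimal NumPy sketch of this step is given below. The tiling into non-overlapping 5×5 blocks follows the text; the adaptive threshold (mean plus k standard deviations of the variance map) and the parameter k are assumptions, since the patent does not specify the adaptive segmentation rule.

    import numpy as np

    def local_variance_map(diff, block=5):
        # Variance of each non-overlapping block x block tile, expanded back to image size.
        h, w = diff.shape
        H, W = h - h % block, w - w % block          # crop to a multiple of the block size
        tiles = diff[:H, :W].reshape(H // block, block, W // block, block)
        var = tiles.var(axis=(1, 3))                 # one variance value per tile
        return np.kron(var, np.ones((block, block), dtype=np.float32))

    def suspected_points(var_map, fused_curr, k=3.0):
        # Adaptive threshold (assumed rule: mean + k*std) -> (y, x, gray) candidates,
        # where the gray value is read from the current-frame fused image.
        thr = var_map.mean() + k * var_map.std()
        ys, xs = np.nonzero(var_map > thr)
        return [(int(y), int(x), float(fused_curr[y, x])) for y, x in zip(ys, xs)]

The returned (y, x, gray) triples are exactly the per-frame records that step six accumulates over the N frames.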
Step six, setting a detection frame number threshold N, with N = 20; repeating steps two to four N-1 times to obtain N consecutive difference images; judging the position and gray value of the same suspected target point across the N consecutive frames; and removing false targets from the suspected target points according to the judgment result. The specific method for judging the position and gray value of the same suspected target point across the N consecutive frames is as follows:
when the displacement of the same suspected target point between two adjacent frames is less than 10 pixels and its gray value changes by less than 10%, the candidate target point is judged to be a suspected point; otherwise, it is a false target. In other words, its change over two consecutive frames must lie within a certain range. The thresholds are set according to the current state parameters of the target: the speed change, area change, and gray-level change of the dim small moving target over two consecutive frames must all satisfy their thresholds. The speed is related to the target position in the two frames, the area to the number of target pixels, and the gray level to the average pixel value within the target region. If a point in the first frame can find a matching point in the next frame, the information of that candidate point is retained; otherwise the candidate point is discarded. If a candidate point in the second frame is not matched by any point in the first frame, it is added to the point set as an initial point for the next matching. Following this matching method, a new initial set of matching points is obtained, and this set contains the candidate target points with track continuity. A sketch of this inter-frame association is given below.
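The following Python sketch illustrates the association rule described above for one pair of consecutive frames; candidates are the (y, x, gray) triples saved in step five. The 10-pixel and 10% thresholds come from the text, while treating them as per-axis bounds and seeding unmatched current-frame candidates as new track starts are assumptions about details the patent leaves open.

    def associate(candidates_prev, candidates_curr, max_disp=10, max_gray_change=0.10):
        # Keep a current-frame candidate if some previous-frame candidate lies within
        # max_disp pixels and its gray value differs by less than max_gray_change;
        # unmatched current-frame candidates become seeds for new tracks.
        kept, seeds = [], []
        for (y, x, g) in candidates_curr:
            matched = any(
                abs(y - yp) < max_disp and abs(x - xp) < max_disp
                and abs(g - gp) < max_gray_change * max(abs(gp), 1e-6)
                for (yp, xp, gp) in candidates_prev
            )
            (kept if matched else seeds).append((y, x, g))
        return kept, seeds

Running this over the N = 20 consecutive frames and keeping only chains of matched candidates yields the continuous motion tracks that step seven evaluates against the expected motion characteristics of the real target.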
Step seven, obtaining the continuous motion tracks of the remaining suspected target points from the N consecutive difference images; determining the real target according to the motion characteristics of the real target to be tracked; the recognition is then complete.
Aimed at detecting a small moving target in a complex infrared scene without prior knowledge, the invention provides a dual-band fusion small moving target detection method based on key point matching. The method first obtains a fused infrared image rich in target information using medium-wave and short-wave infrared image fusion, then registers the fused infrared images of adjacent frames to obtain the corresponding difference image, computes the local variance of the difference image to obtain the corresponding variance map, finds suspected target points from the variance map information to obtain candidate targets in each frame, and finally achieves accurate detection of the small moving target through track continuity identification. In a complex infrared scene, the method of the invention can automatically detect the small moving infrared target quickly and accurately.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto, and those skilled in the art may make variations and modifications to the present invention using the methods and technical content disclosed above without departing from its spirit and scope.

Claims (7)

1. A dual-band fusion target extraction method based on key point matching, characterized by comprising the following steps:
step one, an infrared system captures an infrared image of the area where the target is located in real time in the medium-wave band, and captures an infrared image of the target in real time in the short-wave band;
step two, adjusting the medium-wave infrared image of the current frame, the short-wave infrared image of the current frame, the medium-wave infrared image of the previous frame, and the short-wave infrared image of the previous frame to a uniform size;
step three, fusing the medium-wave infrared image and the short-wave infrared image of the current frame to obtain the current-frame fused image; fusing the medium-wave infrared image and the short-wave infrared image of the previous frame to obtain the previous-frame fused image;
step four, registering the current-frame fused image and the previous-frame fused image to obtain a difference image of the current frame relative to the previous frame;
step five, performing local variance calculation on the difference image of the current frame to obtain a variance map; detecting the positions of suspected target points from the variance map; finding the gray value of each suspected target point at the corresponding position in the current-frame fused image; saving the position and gray value of each suspected target point in the difference image of the current frame;
step six, setting a detection frame number threshold N; repeating steps two to four N-1 times to obtain N consecutive difference images; judging the position and gray value of the same suspected target point across the N consecutive frames; removing false targets from the suspected target points according to the judgment result;
step seven, obtaining the continuous motion tracks of the remaining suspected target points from the N consecutive difference images; determining the real target according to the motion characteristics of the real target to be tracked; the recognition is then complete.
2. The dual-band fusion target extraction method based on key point matching according to claim 1, characterized in that: in step one, the medium-wave band is 3-5 μm and the short-wave band is 1.8-2.8 μm.
3. The dual-band fusion target extraction method based on key point matching according to claim 2, characterized in that: in step two, the adjusted image size is 256 pixels high by 256 pixels wide.
4. The dual-band fusion target extraction method based on key point matching according to claim 3, characterized in that: in step four, the specific method for registering the current-frame fused image and the previous-frame fused image to obtain the difference image is as follows:
matching the current-frame fused image with the previous-frame fused image, compensating the apparent offset by estimating the motion parameters of the background, obtaining a stable background after inter-frame motion compensation of the background according to these motion parameters, and thereby suppressing noise and clutter interference.
5. The dual-band fusion target extraction method based on key point matching according to claim 4, characterized in that: in step five, the local variance is calculated as follows:
dividing the difference image into non-overlapping 5×5 pixel blocks, calculating the variance of each block, and traversing all pixels of the difference image to obtain the variance map.
6. The dual-band fusion target extraction method based on key point matching according to claim 5, characterized in that: in step six, N is 20.
7. The dual-band fusion target extraction method based on key point matching according to claim 6, characterized in that: in step six, the specific method for judging the position and gray value of the same suspected target point across the N consecutive frames is as follows:
when the displacement of the same suspected target point between two adjacent frames is less than 10 pixels and its gray value changes by less than 10%, the candidate target point is judged to be a suspected point; otherwise, it is a false target.
CN201911313247.8A 2019-12-19 2019-12-19 Dual-band fusion target extraction method based on key point matching Pending CN111161308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911313247.8A CN111161308A (en) 2019-12-19 2019-12-19 Dual-band fusion target extraction method based on key point matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911313247.8A CN111161308A (en) 2019-12-19 2019-12-19 Dual-band fusion target extraction method based on key point matching

Publications (1)

Publication Number Publication Date
CN111161308A (en) 2020-05-15

Family

ID=70557351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911313247.8A Pending CN111161308A (en) 2019-12-19 2019-12-19 Dual-band fusion target extraction method based on key point matching

Country Status (1)

Country Link
CN (1) CN111161308A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205470A (en) * 2021-03-19 2021-08-03 昆明物理研究所 Infrared medium-short wave double-color fusion method based on hue saturation mapping
CN113608178A (en) * 2021-07-30 2021-11-05 上海无线电设备研究所 Anti-drag deception jamming method based on dual-band information fusion
CN114459298A (en) * 2022-02-25 2022-05-10 西安恒宇众科空间技术有限公司 Miniature missile-borne active laser seeker and guiding method thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932882A (en) * 2006-10-19 2007-03-21 上海交通大学 Infared and visible light sequential image feature level fusing method based on target detection
US8768101B1 (en) * 2010-06-30 2014-07-01 The United States Of America As Represented By The Secretary Of The Air Force Target image registration and fusion
CN105447888A (en) * 2015-11-16 2016-03-30 中国航天时代电子公司 Unmanned plane maneuvering target detection method detecting based on effective target
CN106096604A (en) * 2016-06-02 2016-11-09 西安电子科技大学昆山创新研究院 Multi-spectrum fusion detection method based on unmanned platform
CN106288632A (en) * 2015-05-15 2017-01-04 青岛海信医疗设备股份有限公司 The rendering method of sample in medical treatment refrigerator system
WO2017041335A1 (en) * 2015-09-07 2017-03-16 南京华图信息技术有限公司 Device and method for collaborative moving target detection with imaging and spectrogram detection in full optical waveband
CN107067416A (en) * 2017-05-11 2017-08-18 南宁市正祥科技有限公司 A kind of detection method of moving target
CN108364277A (en) * 2017-12-20 2018-08-03 南昌航空大学 A kind of infrared small target detection method of two-hand infrared image fusion
CN109993052A (en) * 2018-12-26 2019-07-09 上海航天控制技术研究所 The method for tracking target and system of dimension self-adaption under a kind of complex scene

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932882A (en) * 2006-10-19 2007-03-21 上海交通大学 Infared and visible light sequential image feature level fusing method based on target detection
US8768101B1 (en) * 2010-06-30 2014-07-01 The United States Of America As Represented By The Secretary Of The Air Force Target image registration and fusion
CN106288632A (en) * 2015-05-15 2017-01-04 青岛海信医疗设备股份有限公司 The rendering method of sample in medical treatment refrigerator system
WO2017041335A1 (en) * 2015-09-07 2017-03-16 南京华图信息技术有限公司 Device and method for collaborative moving target detection with imaging and spectrogram detection in full optical waveband
CN105447888A (en) * 2015-11-16 2016-03-30 中国航天时代电子公司 Unmanned plane maneuvering target detection method detecting based on effective target
CN106096604A (en) * 2016-06-02 2016-11-09 西安电子科技大学昆山创新研究院 Multi-spectrum fusion detection method based on unmanned platform
CN107067416A (en) * 2017-05-11 2017-08-18 南宁市正祥科技有限公司 A kind of detection method of moving target
CN108364277A (en) * 2017-12-20 2018-08-03 南昌航空大学 A kind of infrared small target detection method of two-hand infrared image fusion
CN109993052A (en) * 2018-12-26 2019-07-09 上海航天控制技术研究所 The method for tracking target and system of dimension self-adaption under a kind of complex scene

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李英杰; 张俊举; 常本康; 钱芸生; 刘磊: "Long-range multi-band infrared image fusion *** and registration method" (in Chinese) *
马治国 et al.: "Point target recognition based on infrared dual-band image fusion" (in Chinese) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205470A (en) * 2021-03-19 2021-08-03 昆明物理研究所 Infrared medium-short wave double-color fusion method based on hue saturation mapping
CN113205470B (en) * 2021-03-19 2022-08-30 昆明物理研究所 Infrared medium-short wave double-color fusion method based on hue saturation mapping
CN113608178A (en) * 2021-07-30 2021-11-05 上海无线电设备研究所 Anti-drag deception jamming method based on dual-band information fusion
CN113608178B (en) * 2021-07-30 2024-01-02 上海无线电设备研究所 Anti-drag deception jamming method based on dual-band information fusion
CN114459298A (en) * 2022-02-25 2022-05-10 西安恒宇众科空间技术有限公司 Miniature missile-borne active laser seeker and guiding method thereof
CN114459298B (en) * 2022-02-25 2024-03-01 西安恒宇众科空间技术有限公司 Miniature missile-borne active laser guide head and guide method thereof

Similar Documents

Publication Publication Date Title
CN108805904B (en) Moving ship detection and tracking method based on satellite sequence image
CN109345472B (en) Infrared moving small target detection method for complex scene
CN109978851B (en) Method for detecting and tracking small and medium moving target in air by using infrared video
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
CN109086724B (en) Accelerated human face detection method and storage medium
CN110728697A (en) Infrared dim target detection tracking method based on convolutional neural network
CN111161308A (en) Dual-band fusion target extraction method based on key point matching
CN110490904B (en) Weak and small target detection and tracking method
CN104463911A (en) Small infrared moving target detection method based on complicated background estimation
CN110866545A (en) Method and system for automatically identifying pipeline target in ground penetrating radar data
CN109711256B (en) Low-altitude complex background unmanned aerial vehicle target detection method
CN110555868A (en) method for detecting small moving target under complex ground background
CN111208479B (en) Method for reducing false alarm probability in deep network detection
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN111369570B (en) Multi-target detection tracking method for video image
CN110400294B (en) Infrared target detection system and detection method
CN115375733A (en) Snow vehicle sled three-dimensional sliding track extraction method based on videos and point cloud data
CN116229359A (en) Smoke identification method based on improved classical optical flow method model
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN115035378A (en) Method and device for detecting infrared dim target based on time-space domain feature fusion
CN113205494B (en) Infrared small target detection method and system based on adaptive scale image block weighting difference measurement
CN112288780B (en) Multi-feature dynamically weighted target tracking algorithm
CN115797374B (en) Airport runway extraction method based on image processing
CN112070804A (en) Moving target detection method based on TOF camera
CN109784229B (en) Composite identification method for ground building data fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200515