CN110765853A - Image processing method of a multispectral camera - Google Patents
Image processing method of a multispectral camera
- Publication number
- CN110765853A CN110765853A CN201910850343.XA CN201910850343A CN110765853A CN 110765853 A CN110765853 A CN 110765853A CN 201910850343 A CN201910850343 A CN 201910850343A CN 110765853 A CN110765853 A CN 110765853A
- Authority
- CN
- China
- Prior art keywords
- target
- image
- channel
- detection
- spectral
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
Abstract
The invention discloses an image processing method of a multispectral camera. First, all channel images of the multispectral camera are acquired and cached, one spectral band is selected as the reference band, and rapid detection is performed on it to find the position of the target in the image. The possible positions and areas of the target in the other spectral bands are then calculated, and neighborhood slices around each suspicious target position are extracted in turn from all spectral-band images according to these positions. Fine detection is performed on all spectral neighborhood slices, and the features of the target in each spectral band are extracted. The features are fused to construct a target feature vector, which is used for inter-frame target association and identification; the associated target feature vector and the identification result are output as the image-processing result. By rapidly processing the reference spectral band to locate targets, the method reduces the processing complexity and resource requirements of multispectral camera images and significantly improves the efficiency and speed of target detection.
Description
The technical field is as follows:
The invention belongs to the technical field of image processing, mainly addresses small-target detection in low signal-to-noise-ratio images, and is particularly suitable for rapid, high-precision detection of aerial targets against complex backgrounds in multispectral remote-sensing images.
Background art:
Current small-target detection methods are basically proposed for images obtained by single-spectrum (medium-wave, long-wave or visible) detection. Constrained by the optical diffraction limit, however, the resolution of images formed by different wave bands differs at the same detection distance: the longer the wavelength, the lower the resolution. The target and background therefore exhibit few distinguishing features in a single-spectrum infrared image, which often leads to a high false-alarm rate and a low detection rate. A visible-light image, by contrast, reflects the reflective characteristics of the target with high imaging resolution, so that the outline, color, texture and other characteristics of an aerial target can be clearly seen. In summary, a multi-spectral-band fusion detection method, which exploits the differences and complementarity of target and background characteristics across bands, is a highly feasible way to detect targets accurately.
According to the layer at which fusion occurs in image processing, fusion methods can be divided into data-layer, feature-layer and decision-layer fusion. Data-layer fusion fuses the raw data from different wave bands and then detects the target on the fused data; feature-layer fusion extracts features from the images of the different wave bands separately, fuses the extracted features, and then performs target detection on the fusion result; decision-layer fusion detects targets independently in each wave band and then fuses the detection results into the final result. When designing a fusion detection algorithm, the fusion structure plays an important role; it can be divided into four types: centralized, serial, parallel and feedback.
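As an illustrative sketch only (not part of the patent), the three fusion levels described above can be contrasted in a few lines of Python; the weighting scheme and the majority vote are assumptions chosen for brevity:

```python
import numpy as np

def data_layer_fusion(band_images, weights):
    """Data-layer fusion: combine the raw band images first;
    target detection would then run on the fused image."""
    stack = np.stack([w * img for w, img in zip(weights, band_images)])
    return stack.sum(axis=0)

def feature_layer_fusion(band_features, weights):
    """Feature-layer fusion: features are extracted per band, then fused."""
    return sum(w * np.asarray(f, dtype=float)
               for w, f in zip(weights, band_features))

def decision_layer_fusion(band_detections):
    """Decision-layer fusion: each band is detected independently;
    here a simple majority vote merges the boolean decisions."""
    votes = np.asarray(band_detections, dtype=int)
    return votes.sum(axis=0) * 2 > votes.shape[0]
```

The choice among the three levels trades registration effort against information loss: data-layer fusion needs pixel-accurate registration, while decision-layer fusion only merges per-band verdicts.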
However, whatever fusion strategy and structure are adopted, full-image processing is carried out on each spectral-band image separately, which multiplies the amount of computation. In particular, when processing a visible-light image, the number of pixels imaged over the same field of view is often 4-10 times that of an infrared image; at the same time, a visible-band detection algorithm processes more target features of higher dimensionality while the target occupies only a small fraction of the image, so the detection algorithm is time-consuming and redundant: the detection effect can be ensured, but real-time performance cannot. It is therefore necessary to provide a fast fusion detection method that meets the real-time or quasi-real-time requirements of multi-spectral-band image processing.
The invention content is as follows:
To overcome the defects of the prior art, the invention provides a rapid, high-precision detection method for small targets, characterized mainly by rapid search followed by precise detection. The method uses an algorithm of simple structure and low computational cost to quickly detect the target in a reference wave band and obtain suspicious target positions; coordinate conversion then guides the extraction of target slices from the higher-resolution images; target features are extracted from the slices and combined with the target features obtained in the reference-band image to achieve fast, high-precision target detection.
The above purpose of the invention is realized by the following technical scheme:
An image processing method of a multispectral camera, characterized in that the method comprises the following steps:
(1) firstly, acquiring the image data of the spectral channels output by the multispectral camera, recorded as I1, I2, …, In, where n is the number of camera channels, and caching each channel independently frame by frame;
(2) arbitrarily selecting one of the spectral channels as the reference channel: reading the image data out of the cache frame by frame and performing scale transformation on it, each channel being transformed to the smallest image scale;
(3) performing rapid detection and target extraction on the reference channel image using a detection method such as max-median filtering, morphological filtering or a local contrast algorithm;
(4) performing a coordinate transformation on the target positions detected in the reference channel image in step (3), computing the position of the target in the remaining spectral channels from the optical-center registration and scale relations between channels, according to the following formula:

xi = Sxi · xb + Txi,  yi = Syi · yb + Tyi

where (xb, yb) is a target coordinate position in the reference image; Sxi, Syi, Txi, Tyi are the coordinate scale and offset transformation parameters between the image coordinates of channel i and the reference image, computed in advance by manual boresight calibration, i being the channel number; and (xi, yi) is the transformed coordinate position of the target in the i-th channel image. With the coordinate position of each reference-image target found in the other channels, an image slice of window size Wi centered at (xi, yi) is extracted from each channel, where Wi is set manually.
(5) Performing fine target detection and feature extraction on the slices of all spectral channels of each target. Fine target detection uses methods such as spatio-temporal correlation, multi-scale gradient difference, two-dimensional minimum mean square error, a support vector machine or a deep neural network; feature extraction derives a typical description of the target from the slice image, the available features being the position, peak intensity, average intensity, signal-to-noise ratio, shape, area and background texture of the target;
(6) fusing the multi-spectral channel features of each target. Target position scale conversion is required before fusion, converting to the channel with the highest camera resolution; after conversion the features are fused according to the following formula:

F = Σi Wi · Fi

where F is the fused target feature vector, Fi is the feature vector of each channel, and Wi is a weight, either set manually or generated automatically from the signal-to-noise ratio;
(7) performing multi-frame association and target identification on the multi-spectral channels of each target. Multi-frame association means comparing the currently detected target with the targets detected in the previous frame: if the difference between the target features of consecutive frames within a preset neighborhood is less than a threshold, they are considered the same target, and a target associated over several consecutive frames is confirmed. Target identification means comparing the feature vector of a target with several preset target templates, assigning the target type with the smallest difference, and using the difference percentage as the identification confidence.
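The steps above can be sketched in Python as follows. This is an illustrative sketch, not the patented implementation: it covers the linear scale-and-offset mapping of step (4), neighborhood slice extraction, and the weighted feature fusion of step (6); the function names, rounding and border clamping are assumptions.

```python
import numpy as np

def map_to_channel(xb, yb, sx, sy, tx, ty):
    """Step (4): map a target position from the reference channel into
    channel i using the per-channel scale (sx, sy) and offset (tx, ty)
    parameters obtained by prior boresight calibration."""
    return int(round(sx * xb + tx)), int(round(sy * yb + ty))

def extract_slice(image, x, y, w):
    """Extract a w x w neighborhood slice centered at (x, y),
    clamped to the image borders (window size Wi is set manually)."""
    h = w // 2
    x0, x1 = max(0, x - h), min(image.shape[1], x + h + 1)
    y0, y1 = max(0, y - h), min(image.shape[0], y + h + 1)
    return image[y0:y1, x0:x1]

def fuse_features(channel_features, weights):
    """Step (6): weighted fusion of per-channel feature vectors,
    F = sum_i Wi * Fi."""
    feats = np.asarray(channel_features, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * feats).sum(axis=0)
```

In use, the reference-channel detection supplies (xb, yb), `map_to_channel` and `extract_slice` are applied per channel, and `fuse_features` combines the per-channel feature vectors produced by the fine detection of step (5).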
Compared with the prior art, the invention has the following beneficial effects:
1. By selecting a reference spectral band, the processing complexity and resource requirements of multispectral camera images are reduced and the efficiency and speed of target detection are significantly improved; the processing architecture is simple and convenient for hardware implementation.
2. The detection precision and processing performance for the target are improved through slice extraction, fine detection and feature fusion over multiple spectral bands.
Drawings
FIG. 1 is a block diagram of an implementation flow of the present invention;
FIG. 2 is the long-wave image of an embodiment of the present invention and its slice result, wherein (1) is the original long-wave image and (2) is the extracted target slice;
FIG. 3 is the medium-wave image of an embodiment of the present invention and its slice result, wherein (1) is the original medium-wave image and (2) is the extracted target slice;
FIG. 4 is the visible-light image of an embodiment of the present invention and its slice result, wherein (1) is the original visible-light image and (2) is the extracted target slice;
FIG. 5 is a result of multi-spectral image object processing according to an embodiment of the present invention.
Detailed Description
Technical solutions in the embodiments of the present invention will be described in detail below with reference to the drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Images of three channels (long-wave infrared, medium-wave infrared and visible) are acquired, as shown in fig. 2. The long-wave infrared channel is selected as the reference channel and detected with the max-median method, yielding a target position of (69,478). From the three-channel position and resolution relations, the target coordinates (217,863) in the medium-wave infrared channel and (406,4780) in the visible channel are obtained. Slices of sizes 9 × 9, 9 × 9 and 21 × 21 are taken from the three channels respectively, as shown in fig. 3; fine detection and positioning are carried out by multi-scale local contrast, and feature extraction is completed, including the peak intensity, average intensity, signal-to-noise ratio, shape and area of the target. The features are then weighted and fused with weights of 0.4 (long wave), 0.3 (medium wave) and 0.3 (visible) to obtain the target features. The same detection is performed on subsequent frames; if the target reappears, it is confirmed as a true target and compared against the aircraft template for identification. The detection and identification result is shown in fig. 5: the confirmed target coordinate is (403,4779) and the target confidence is 87.7%.
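As a hedged illustration of the multi-frame association and template identification used in this embodiment, the following Python sketch uses a Euclidean feature distance and derives a confidence from the relative difference; the distance metric, the confidence formula and the function names are assumptions, not taken from the patent:

```python
import numpy as np

def associate(prev_feat, cur_feat, threshold):
    """Multi-frame association: the current target is considered the same
    as a previous-frame target when their feature difference is below a
    preset threshold."""
    diff = np.linalg.norm(np.asarray(cur_feat, float) - np.asarray(prev_feat, float))
    return diff < threshold

def identify(feat, templates):
    """Template identification: compare the fused feature vector against
    preset target templates, assign the class with the smallest difference,
    and report a confidence derived from the relative difference."""
    feat = np.asarray(feat, dtype=float)
    best_name, best_d = None, np.inf
    for name, tmpl in templates.items():
        d = np.linalg.norm(feat - np.asarray(tmpl, dtype=float))
        if d < best_d:
            best_name, best_d = name, d
    scale = np.linalg.norm(feat) or 1.0        # avoid division by zero
    confidence = max(0.0, 1.0 - best_d / scale)  # difference percentage as confidence
    return best_name, confidence
```

A target associated over several consecutive frames would then be confirmed before `identify` assigns its type and confidence.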
Claims (3)
1. An image processing method of a multispectral camera, the multispectral camera referring to a camera in which a shared optical system forms different spectral channels by light splitting, or in which independent optical systems are used but the imaging fields of view of the systems overlap by more than 60%; the method being characterized by comprising the following steps:
(1) firstly, acquiring the image data of the spectral channels output by the multispectral camera, recorded as I1, I2, …, In, where n is the number of camera channels, and caching each channel independently frame by frame;
(2) arbitrarily selecting one of the spectral channels as the reference channel: reading the image data out of the cache frame by frame and performing scale transformation on it, each channel being transformed to the smallest image scale;
(3) performing rapid detection and target extraction on the reference channel image using a detection method such as max-median filtering, morphological filtering or a local contrast algorithm;
(4) performing a coordinate transformation on the target positions detected in the reference channel image in step (3), computing the position of the target in the remaining spectral channels from the optical-center registration and scale relations between channels, according to the following formula:

xi = Sxi · xb + Txi,  yi = Syi · yb + Tyi

where (xb, yb) is a target coordinate position in the reference image; Sxi, Syi, Txi, Tyi are the coordinate scale and offset transformation parameters between the image coordinates of channel i and the reference image, computed in advance by manual boresight calibration, i being the channel number; and (xi, yi) is the transformed coordinate position of the target in the i-th channel image. With the coordinate position of each reference-image target found in the other channels, an image slice of window size Wi centered at (xi, yi) is extracted from each channel, where Wi is set manually;
(5) performing fine target detection and feature extraction on the slices of all spectral channels of each target. Fine target detection uses methods such as spatio-temporal correlation, multi-scale gradient difference, two-dimensional minimum mean square error, a support vector machine or a deep neural network; feature extraction derives a typical description of the target from the slice image, the available features being the position, peak intensity, average intensity, signal-to-noise ratio, shape, area and background texture of the target;
(6) fusing the multi-spectral channel features of each target. Target position scale conversion is required before fusion, converting to the channel with the highest camera resolution; after conversion the features are fused according to the following formula:

F = Σi Wi · Fi

where F is the fused target feature vector, Fi is the feature vector of each channel, and Wi is a weight, either set manually or generated automatically from the signal-to-noise ratio;
(7) performing multi-frame association and target identification on the multi-spectral channels of each target. Multi-frame association means comparing the currently detected target with the targets detected in the previous frame: if the difference between the target features of consecutive frames within a preset neighborhood is less than a threshold, they are considered the same target, and a target associated over several consecutive frames is confirmed. Target identification means comparing the feature vector of a target with several preset target templates, assigning the target type with the smallest difference, and using the difference percentage as the identification confidence.
2. The image processing method of the multispectral camera according to claim 1, characterized in that: in the fine target detection of step (5), the fine detection methods adopted for the different spectral channels are not identical.
3. The image processing method of the multispectral camera according to claim 1, characterized in that: the target identification in step (7) is implemented by training a recognition classifier with an unsupervised learning method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910850343.XA CN110765853B (en) | 2019-09-10 | 2019-09-10 | Image processing method of multispectral camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910850343.XA CN110765853B (en) | 2019-09-10 | 2019-09-10 | Image processing method of multispectral camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110765853A true CN110765853A (en) | 2020-02-07 |
CN110765853B CN110765853B (en) | 2023-05-05 |
Family
ID=69329822
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910850343.XA Active CN110765853B (en) | 2019-09-10 | 2019-09-10 | Image processing method of multispectral camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110765853B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111582100A (en) * | 2020-04-28 | 2020-08-25 | 浙江大华技术股份有限公司 | Target object detection method and device |
CN112946684A (en) * | 2021-01-28 | 2021-06-11 | 浙江大学 | Electromagnetic remote sensing intelligent imaging system and method based on assistance of optical target information |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102176066A (en) * | 2011-01-24 | 2011-09-07 | 南京理工大学 | Target optimal detection spectral coverage imaging detecting method based on narrow band scanning |
WO2017041335A1 (en) * | 2015-09-07 | 2017-03-16 | 南京华图信息技术有限公司 | Device and method for collaborative moving target detection with imaging and spectrogram detection in full optical waveband |
CN108564587A (en) * | 2018-03-07 | 2018-09-21 | 浙江大学 | A kind of a wide range of remote sensing image semantic segmentation method based on full convolutional neural networks |
CN109271921A (en) * | 2018-09-12 | 2019-01-25 | 合刃科技(武汉)有限公司 | A kind of intelligent identification Method and system of multispectral imaging |
- 2019-09-10: CN patent application CN201910850343.XA filed (granted as CN110765853B, status Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102176066A (en) * | 2011-01-24 | 2011-09-07 | 南京理工大学 | Target optimal detection spectral coverage imaging detecting method based on narrow band scanning |
WO2017041335A1 (en) * | 2015-09-07 | 2017-03-16 | 南京华图信息技术有限公司 | Device and method for collaborative moving target detection with imaging and spectrogram detection in full optical waveband |
CN108564587A (en) * | 2018-03-07 | 2018-09-21 | 浙江大学 | A kind of a wide range of remote sensing image semantic segmentation method based on full convolutional neural networks |
CN109271921A (en) * | 2018-09-12 | 2019-01-25 | 合刃科技(武汉)有限公司 | A kind of intelligent identification Method and system of multispectral imaging |
Non-Patent Citations (2)
Title |
---|
XIAONING HU et al.: "Large Format High SNR SWIR HgCdTe/Si FPA With Multiple-choice Gain for Hyperspectral Detection" *
YANG Hongfei et al.: "Application of image fusion in 3D reconstruction of space targets" *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111582100A (en) * | 2020-04-28 | 2020-08-25 | 浙江大华技术股份有限公司 | Target object detection method and device |
CN111582100B (en) * | 2020-04-28 | 2023-04-28 | 浙江大华技术股份有限公司 | Target object detection method and device |
CN112946684A (en) * | 2021-01-28 | 2021-06-11 | 浙江大学 | Electromagnetic remote sensing intelligent imaging system and method based on assistance of optical target information |
CN112946684B (en) * | 2021-01-28 | 2023-08-11 | 浙江大学 | Electromagnetic remote sensing intelligent imaging system and method based on optical target information assistance |
Also Published As
Publication number | Publication date |
---|---|
CN110765853B (en) | 2023-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111462128B (en) | Pixel-level image segmentation system and method based on multi-mode spectrum image | |
CN110197231B (en) | Bird condition detection equipment and identification method based on visible light and infrared light image fusion | |
CN112288008B (en) | Mosaic multispectral image disguised target detection method based on deep learning | |
CN107239759B (en) | High-spatial-resolution remote sensing image transfer learning method based on depth features | |
CN111784642A (en) | Image processing method, target recognition model training method and target recognition method | |
CN108960404B (en) | Image-based crowd counting method and device | |
CN103927741A (en) | SAR image synthesis method for enhancing target characteristics | |
CN109635814B (en) | Forest fire automatic detection method and device based on deep neural network | |
CN110765853B (en) | Image processing method of multispectral camera | |
CN109859246B (en) | Low-altitude slow unmanned aerial vehicle tracking method combining correlation filtering and visual saliency | |
CN109559324A (en) | A kind of objective contour detection method in linear array images | |
CN112308156B (en) | Two-stage image change detection method based on counterstudy | |
KR101866676B1 (en) | Apparatus and Method for identifying object using multi spectral images | |
CN114758219B (en) | Trace identification method based on spectral data and infrared temperature data fusion | |
CN112907527B (en) | Infrared thermal imaging splicing detection method for large-size curved surface test piece | |
CN113744191A (en) | Automatic cloud detection method for satellite remote sensing image | |
CN108510544B (en) | Light strip positioning method based on feature clustering | |
CN117409339A (en) | Unmanned aerial vehicle crop state visual identification method for air-ground coordination | |
Yu et al. | Three-channel infrared imaging for object detection in haze | |
CN115240089A (en) | Vehicle detection method of aerial remote sensing image | |
CN113223065B (en) | Automatic matching method for SAR satellite image and optical image | |
CN117409244A (en) | SCKConv multi-scale feature fusion enhanced low-illumination small target detection method | |
Wu et al. | Research on crack detection algorithm of asphalt pavement | |
CN112800942A (en) | Pedestrian detection method based on self-calibration convolutional network | |
CN114581315B (en) | Low-visibility approach flight multi-mode monitoring image enhancement method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||