CN109784229B - Composite identification method for ground building data fusion - Google Patents

Composite identification method for ground building data fusion

Info

Publication number
CN109784229B
CN109784229B (application CN201811630977.6A)
Authority
CN
China
Prior art keywords
image
area
edge
laser
target
Prior art date
Legal status
Active
Application number
CN201811630977.6A
Other languages
Chinese (zh)
Other versions
CN109784229A (en)
Inventor
张天序
涂直健
桑红石
刘羽丰
姜庆峰
李玉涛
姜鹏
付宏明
Current Assignee
Huazhong University of Science and Technology
Wuhan Institute of Technology
Original Assignee
Huazhong University of Science and Technology
Wuhan Institute of Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology, Wuhan Institute of Technology filed Critical Huazhong University of Science and Technology
Priority to CN201811630977.6A priority Critical patent/CN109784229B/en
Publication of CN109784229A publication Critical patent/CN109784229A/en
Application granted granted Critical
Publication of CN109784229B publication Critical patent/CN109784229B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a composite identification method for data fusion of a ground building, which comprises the following steps: respectively obtaining an infrared image and a laser image of a ground building; carrying out mathematical morphology preprocessing and image segmentation on the infrared image in sequence to obtain a plurality of infrared image areas; obtaining the image edge of the infrared image in the field range of the laser detector, and obtaining a plurality of edge segmentation areas according to the image edge mark; sequentially preprocessing and extracting regions of the laser image to obtain a plurality of laser image regions; performing data fusion on the infrared image area, the edge segmentation area and the laser image area to obtain one or more candidate areas; and extracting the regional characteristics of each candidate region, and matching the extracted regional characteristics with the regional characteristics of the building target, thereby identifying the target region where the building target is located from the candidate regions. The invention can effectively improve the identification accuracy of the ground buildings.

Description

Composite identification method for ground building data fusion
Technical Field
The invention belongs to the field of imaging automatic target recognition, and particularly relates to a composite recognition method for ground building data fusion.
Background
Ground buildings are important artificial ground targets, and there is great demand for identifying building targets in fields such as aircraft navigation, collision avoidance and terminal guidance of precision-guided weapons. Buildings are usually located in complex ground backgrounds, which greatly increase the difficulty of identifying building targets and cause problems such as inconspicuous target characteristics; as a result, single-mode infrared detection performs poorly when used to identify ground building targets directly.
The laser-active and infrared-passive dual-mode composite identification mode combines infrared passive detection with laser active detection so that the two single-mode detection technologies complement each other: the laser provides three-dimensional information of the target scene that infrared cannot, yielding richer target information, improving identification capability and reducing the detection false alarm rate. Some scholars have conducted related research on laser-infrared composite target identification. The laser infrared composite ground building identification and navigation method (application number CN201410844242) integrates laser imaging into infrared imaging target positioning; exploiting the facts that the infrared image better reflects the gray level difference between target and background while the laser range image, which contains the geometric intrinsic information of the target, better reflects the target's shape features, it selects the background contrast feature of the infrared imaging target area and the shape feature of the laser imaging target area to form matching elements, effectively fusing the salient laser and infrared features and improving the matching accuracy. In "Target detection fusing laser and infrared" (Infrared and Laser Engineering, Vol. 47, No. 8, Aug. 2018), the distance information measured by a laser radar is introduced into a DSBM algorithm to eliminate false alarms and improve the target identification probability: the laser range profile and the infrared image are first registered, the infrared image is detected to obtain infrared detection results, the actual size of each detection result is then determined from the range information of the laser sensor, and false alarms whose sizes do not match the target prior knowledge (the actual sizes of ground moving targets such as cars, armored vehicles and tanks) are finally eliminated to obtain the target identification result. "Infrared and laser fusion target identification" (Infrared and Laser Engineering, Vol. 47, No. 5, May 2018) extracts wavelet moment features of the infrared image and projection profile features of the laser point cloud data, combines them into a high-dimensional feature vector, and constructs a combined classifier by adopting reduction algorithms based on three different viewpoints; this reduces the feature dimensionality and the computational complexity, and fusion identification exploiting the complementarity between different reductions improves the precision and robustness of automatic target identification.
The above techniques achieve laser-infrared composite identification by extracting different features from the infrared image and the laser image. However, when the target surface is made of different materials, the contrast between parts of the target and the background in the infrared image is low, so that target information in the infrared image is lost; target information in the laser image is lost because of the low signal-to-noise ratio of the target echo signal, defects of the image preprocessing algorithm, and so on. Under a complex background the loss of target information in the infrared or laser image is even more serious, the features extracted by laser-infrared composite identification are less similar to the true values of the corresponding target features, and the target identification accuracy cannot be guaranteed.
Disclosure of Invention
In view of the defects and the improvement requirements of the prior art, the invention provides a composite identification method for ground building data fusion, which aims to improve the identification accuracy of ground buildings.
In order to achieve the above object, the present invention provides a composite identification method for ground building data fusion, comprising:
(1) respectively obtaining an infrared image and a laser image of a ground building;
(2) carrying out mathematical morphology preprocessing and image segmentation on the infrared image in sequence to obtain a plurality of infrared image areas;
(3) obtaining the image edge of the infrared image in the field range of the laser detector, and obtaining a plurality of edge segmentation areas according to the image edge mark;
(4) sequentially preprocessing and extracting regions of the laser image to obtain a plurality of laser image regions;
(5) performing data fusion on the infrared image area, the edge segmentation area and the laser image area to obtain one or more candidate areas;
(6) extracting the regional characteristics of each candidate region, and matching the extracted regional characteristics with the regional characteristics of the building target, so as to identify the target region where the building target is located from the candidate regions;
the laser detector is used for acquiring laser images.
By fusing the infrared image areas and edge segmentation areas obtained from the infrared image of the ground building with the laser image areas obtained from its laser image, the distance information of the laser image can be used to distinguish low-contrast target areas from the background in the infrared image, while the edge information of the infrared image supplements the small number of target areas lost to noise in the laser image. The candidate areas obtained by fusion are therefore more complete and their area features have higher confidence, so that matching the area features of the candidate areas with those of the building target achieves higher accuracy in the composite identification of the ground building.
Further, in the step (2), the infrared image is preprocessed, which includes:
using a structural element SE1 whose size is smaller than that of the building target image, performing a morphological opening operation on the infrared image to suppress the background while retaining the target area, thereby obtaining a first background-suppressed image;
using a structural element SE2 whose size is larger than that of the building target image, performing a morphological opening operation on the infrared image to suppress both the background and the target area, thereby obtaining a second background-suppressed image;
subtracting the second background-suppressed image from the first background-suppressed image to make the target area stand out from the background, and setting to 0 the gray value of any pixel whose value in the subtraction result is less than 0, thereby obtaining a third background-suppressed image;
using a structural element SE3 whose size is smaller than that of the building target image, performing a morphological opening operation on the third background-suppressed image to remove burrs at its region edges, thereby obtaining an infrared preprocessed image.
Further, the step (3) comprises:
extracting the image edge of the infrared image within the field of view of the laser detector, and performing a morphological dilation operation on the extracted image edges using a structural element SE4 to connect the fracture edges therein;
according to the image edge, marking non-edge pixels of the infrared image so as to obtain a plurality of edge segmentation areas formed by segmenting the image edge.
Further, the step (5) comprises:
for any one of the edge segmentation regions Z, respectively calculating the similarity sim1 between the edge segmentation region Z and the infrared image region at the same position and the similarity sim2 between the edge segmentation region Z and the laser image region at the same position; if sim2 > T1, or T2 ≤ sim2 ≤ T1 and sim1 > T1, the edge segmentation region Z is retained; otherwise, the edge segmentation region Z is removed;
traversing each edge segmentation area to remove part of the edge segmentation area;
removing edge pixels in the image edge which are not adjacent to the reserved edge segmentation area, so as to obtain one or more candidate areas consisting of the reserved edge segmentation area and the adjacent edge pixels;
wherein T1 and T2 are both preset similarity thresholds, and T1 > T2.
Furthermore, the similarity between the edge segmentation region and the infrared image region at the same position, or the similarity between the edge segmentation region and the laser image region at the same position, is calculated by:
obtaining the pixel area S1 of the edge segmentation region and the pixel area S2 of the infrared image area or the laser image area at the same position within the segmented region, and calculating the similarity between the regions from the pixel areas S1 and S2 as:
sim = S2 / S1
further, the step (6) comprises:
obtaining the regional characteristics of a building target, and extracting the regional characteristics of each candidate region;
for any candidate area C, with the area characteristics of the building target as reference, respectively obtaining the relative error percentage of each characteristic component in the area characteristics of the candidate area C, and obtaining the sum of the relative errors of all the characteristic components;
the candidate region in which the relative error percentage of every feature component is smaller than a preset error threshold T3, and whose sum of the relative errors of all feature components is the smallest, is determined as the target region.
Generally, by the above technical solution conceived by the present invention, the following beneficial effects can be obtained:
(1) according to the ground building data fusion composite identification method provided by the invention, the infrared image and the laser image can mutually supplement target area information missing from each other in a data fusion mode, so that a more complete target area and area characteristics with higher confidence coefficient are obtained, and the ground building composite identification method has higher accuracy.
(2) According to the ground building data fusion composite recognition method provided by the invention, through carrying out mathematical morphology preprocessing on the infrared image, the background suppression on the infrared image is realized, the target area is highlighted, and the accuracy of composite recognition on the ground building under a complex background can be effectively improved.
Drawings
FIG. 1 is a visible light image of a ground structure to be identified according to an embodiment of the present invention;
FIG. 2 is a flow chart of a composite identification method for ground building data fusion according to an embodiment of the present invention;
FIG. 3(a) is an infrared image of the ground structure shown in FIG. 1; FIG. 3(b) is a point cloud display of a laser image of the ground structure shown in FIG. 1;
fig. 4(a) is a first background-suppressed image provided by an embodiment of the present invention; fig. 4(b) is a second background-suppressed image provided by the embodiment of the present invention;
fig. 5(a) is a third background-suppressed image provided by an embodiment of the present invention; FIG. 5(b) is the stretched display of FIG. 5(a);
FIG. 6(a) is an infrared preprocessed image provided by an embodiment of the present invention; FIG. 6(b) is the stretched display of FIG. 6(a);
FIG. 7 shows the structural elements used for morphological background suppression according to an embodiment of the present invention: (a) the structural element SE1, 0.5 times the size of the building target image; (b) the structural element SE2, 1.1 times the size of the building target image;
FIG. 8 illustrates a threshold segmentation result of an IR pre-processed image according to an embodiment of the present invention;
FIG. 9 illustrates an actual target area provided by an embodiment of the present invention;
FIG. 10 is an image edge of an infrared image within a field of view of a laser detector provided by an embodiment of the present invention;
FIG. 11 is an image edge after connecting the fracture edges provided by an embodiment of the present invention;
FIG. 12(a) is a laser range profile provided by an embodiment of the present invention; FIG. 12(b) is a three-dimensional point cloud display of FIG. 12 (a);
fig. 13(a) is a laser image area provided by an embodiment of the present invention; fig. 13(b) is an infrared image region corresponding to fig. 13 (a);
FIG. 14 is a block diagram of candidate regions obtained by data fusion according to an embodiment of the present invention;
fig. 15 shows a composite recognition result of the ground buildings according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention provides a composite identification method for data fusion of a ground building. Its overall idea is as follows: infrared image preprocessing suppresses the background of the infrared real-time image of a complex scene and highlights the target area; the infrared image edges within the laser field of view are extracted, and each connected area of uniform gray level in the image is determined from the extracted edges; the laser and infrared area extraction results are then fused so that each supplements the target area information missing from the other. Specifically, the laser image distance information is used to distinguish low-contrast target areas from the background in the infrared image, and, exploiting the fact that surface areas of the same material on an infrared target have a uniform and continuous gray level distribution under the same conditions, the infrared image edge information is used to supplement the small number of target areas lost to noise in the laser image. More complete candidate areas and area features with higher confidence are thus obtained, and the identification accuracy is improved.
Fig. 1 shows a visible light image of a scene where a building object is located according to an embodiment of the present invention, and the following describes in detail a composite recognition method for ground building data fusion according to the present invention with reference to the ground building example shown in fig. 1.
The ground building data fusion composite identification method provided by the invention, as shown in fig. 2, comprises the following steps:
(1) respectively obtaining an infrared image and a laser image of a ground building;
specifically, in this embodiment a coaxial laser-infrared dual-mode detector system acquires one frame of infrared image and, correspondingly, 400 frames of laser images at a height of 46 meters and a distance of 4 km from the building target; the laser detector of the coaxial laser-infrared dual-mode detector is a Geiger-mode APD array laser detector;
the size of the acquired infrared image is 640 × 512, as shown in fig. 3(a); the three-dimensional point cloud display of the collected 400 frames of laser image data is shown in fig. 3(b), where the three-dimensional coordinates are the image row number, column number and range profile distance value; the size of the laser image is 64 × 64, its distance resolution is 0.2 m, the center of the laser image corresponds to the center point of the infrared image, and the field of view of the laser detector corresponds to the central 128 × 128 pixel region of the infrared image;
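The following is a minimal Python sketch of the field-of-view geometry stated above: the 64 × 64 laser image covers the central 128 × 128 pixel window of the 640 × 512 infrared image, so one laser pixel spans roughly 2 × 2 infrared pixels. The function and variable names are illustrative assumptions, not taken from the patent.

```python
def laser_fov_window(ir_shape=(512, 640), fov_size=128):
    """Return (row0, col0, size) of the laser field of view inside the IR image."""
    rows, cols = ir_shape
    return rows // 2 - fov_size // 2, cols // 2 - fov_size // 2, fov_size

def laser_pixel_to_ir(row, col, ir_shape=(512, 640), laser_size=64, fov_size=128):
    """Map a laser-image pixel (row, col) to the corresponding IR-image pixel."""
    r0, c0, _ = laser_fov_window(ir_shape, fov_size)
    scale = fov_size / laser_size          # about 2 IR pixels per laser pixel
    return int(r0 + row * scale), int(c0 + col * scale)
```

For example, laser_pixel_to_ir(0, 0) maps the laser image's top-left pixel to infrared pixel (192, 256) for a 640 × 512 infrared image.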
(2) carrying out mathematical morphology preprocessing and image segmentation on the infrared image in sequence to obtain a plurality of infrared image areas;
in an optional embodiment, in the step (2), the infrared image is preprocessed, which includes:
using a structural element SE1 whose size is smaller than that of the building target image, a morphological opening operation is performed on the infrared image to suppress the background while retaining the target area, thereby obtaining a first background-suppressed image; the first background-suppressed image is shown in fig. 4(a);
using a structural element SE2 whose size is larger than that of the building target image, a morphological opening operation is performed on the infrared image to suppress both the background and the target area, thereby obtaining a second background-suppressed image; the second background-suppressed image is shown in fig. 4(b);
subtracting the second background-suppressed image from the first background-suppressed image to make the target area stand out from the background, and setting to 0 the gray value of any pixel whose value in the subtraction result is less than 0, thereby obtaining a third background-suppressed image; the third background-suppressed image and its stretched display are shown in fig. 5(a) and 5(b), respectively;
using a structural element SE3 whose size is smaller than that of the building target image, a morphological opening operation is performed on the third background-suppressed image to remove burrs at its region edges, thereby obtaining an infrared preprocessed image; the infrared preprocessed image and its stretched display are shown in fig. 6(a) and 6(b), respectively;
in the present example, the background suppression structural elements SE1 and SE2 are shown in figs. 7(a) and 7(b), respectively; SE1 and SE2 are 0.5 and 1.1 times the size of the building target image, respectively, and SE3 = SE1;
In an alternative embodiment, the image segmentation of the infrared image after the mathematical morphology preprocessing is performed in step (2), which includes:
(21) gray level merging: performing histogram statistics on the infrared preprocessed image to obtain the number of pixels at each gray level, and merging each gray level whose number of pixels is less than a threshold H with the nearest gray level whose number of pixels is greater than or equal to the threshold H;
the value of the threshold H is determined according to actual needs, in this embodiment, H is 300;
(22) setting the initial value of the division threshold as the maximum gray level after the gray levels are combined;
(23) performing gray level threshold segmentation on the infrared preprocessed image to convert it into a binary image; marking each region of interest in the binary image and calculating the feature quantities of each region of interest; the threshold segmentation result that correctly segments the target region is shown in fig. 8, and the actual target region is shown in fig. 9;
wherein the feature quantities of each region of interest include: the height, width, rectangularity, center of gravity and area of the region;
(24) decreasing the segmentation threshold by the step length n, proceeding through the gray levels from large to small; if the number of iterations is less than the maximum number of iterations D, returning to step (23); otherwise, the image segmentation is finished;
the step length n and the maximum iteration number D are preset values, and may be determined according to actual needs, in the embodiment of the present invention, n is preferably 2, and D is preferably 20;
(3) obtaining the image edge of the infrared image in the field range of the laser detector, and obtaining a plurality of edge segmentation areas according to the extracted image edge marks;
in an optional embodiment, step (3) specifically includes:
extracting the image edges of the infrared image within the field of view of the laser detector; in the present embodiment, the Canny edge extraction algorithm is used, although it should be understood that other edge extraction algorithms may also be used; the extracted image edges are shown in FIG. 10;
using a structural element SE4, performing a morphological dilation operation on the extracted image edges to connect the fracture edges therein; in the present embodiment, the size of the structural element is L × L, where L is a preset value that can be determined according to actual needs and is preferably 3 in this embodiment; the image edges after connecting the fracture edges are shown in FIG. 11;
marking non-edge pixels of the infrared image according to the obtained image edge, thereby obtaining a plurality of edge segmentation areas formed by segmenting the image edge;
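The following is a minimal Python/OpenCV sketch of step (3): Canny edge extraction inside the laser field of view, dilation with an L × L structural element to connect broken edges, and labeling of the non-edge pixels to obtain the edge segmentation regions. The Canny thresholds and the fov_rect convention are illustrative assumptions.

```python
import cv2
import numpy as np

def edge_segmentation_regions(ir_image: np.ndarray, fov_rect, L: int = 3):
    """fov_rect = (row0, col0, size): the 128x128 window centred on the IR image."""
    r0, c0, size = fov_rect
    roi = ir_image[r0:r0 + size, c0:c0 + size]

    edges = cv2.Canny(roi, 50, 150)                       # image edges (thresholds assumed)
    se4 = cv2.getStructuringElement(cv2.MORPH_RECT, (L, L))
    edges = cv2.dilate(edges, se4)                        # connect fracture edges

    # Mark non-edge pixels: each connected non-edge component is one
    # edge segmentation region delimited by the image edges.
    non_edge = (edges == 0).astype(np.uint8)
    num, labels = cv2.connectedComponents(non_edge)
    return edges, labels, num - 1                         # labels image and region count
```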
(4) sequentially preprocessing and extracting regions of the laser image to obtain a plurality of laser image regions;
in an optional embodiment, step (4) specifically includes:
performing multi-frame accumulation and denoising on the laser image to obtain a complete laser range image of the ground building; the laser range profile and the three-dimensional point cloud thereof obtained after the laser image preprocessing are respectively displayed as shown in fig. 12(a) and 12 (b);
if the difference between the pixel values of any two adjacent pixel points p1 and p2 in the laser range profile is smaller than a preset range threshold T, the pixel points p1 and p2 are determined to belong to the same laser image area;
traversing the laser range profile to obtain a plurality of laser image areas, and extracting the region distance and shape features of each laser image region as its region features, wherein the shape features include: height, width, area and rectangularity; the extracted laser image area is shown in fig. 13(a), and the corresponding infrared image region is shown in fig. 13(b);
T is a preset value, which can be determined according to actual needs; in the embodiment of the present invention, T is preferably 60;
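The following is a minimal Python sketch of step (4), assuming the 400 accumulated frames are averaged into one range profile (a simplification of the patent's accumulation and denoising) and that adjacent pixels are grouped by region growing under the pairwise range-difference threshold T. Names and the default T = 60 follow this embodiment where stated and are otherwise illustrative.

```python
from collections import deque
import numpy as np

def extract_laser_regions(frames: np.ndarray, T: float = 60.0):
    """frames: (num_frames, 64, 64) array of range values; returns labels and region features."""
    rng = frames.mean(axis=0)                     # multi-frame accumulation (simple mean)
    h, w = rng.shape
    labels = np.zeros((h, w), dtype=int)
    regions, next_label = [], 0
    for sr in range(h):
        for sc in range(w):
            if labels[sr, sc]:
                continue
            next_label += 1
            labels[sr, sc] = next_label
            queue, pixels = deque([(sr, sc)]), [(sr, sc)]
            while queue:                          # grow the region across similar neighbours
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and not labels[nr, nc] \
                            and abs(rng[nr, nc] - rng[r, c]) < T:
                        labels[nr, nc] = next_label
                        queue.append((nr, nc))
                        pixels.append((nr, nc))
            rows = np.array([p[0] for p in pixels]); cols = np.array([p[1] for p in pixels])
            regions.append({
                "distance": float(rng[rows, cols].mean()),   # region distance
                "height": int(rows.ptp() + 1), "width": int(cols.ptp() + 1),
                "area": len(pixels),
                "rectangularity": len(pixels) / float((rows.ptp() + 1) * (cols.ptp() + 1)),
            })
    return labels, regions
```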
(5) performing data fusion on the infrared image area, the edge segmentation area and the laser image area to obtain one or more candidate areas;
in an alternative embodiment, step (5) comprises:
for any one of the edge segmentation regions Z, respectively calculating the similarity sim1 between the edge segmentation region Z and the infrared image region at the same position and the similarity sim2 between the edge segmentation region Z and the laser image region at the same position; if sim2 > T1, or T2 ≤ sim2 ≤ T1 and sim1 > T1, the edge segmentation region Z is retained; otherwise, the edge segmentation region Z is removed;
traversing each edge segmentation area to remove part of the edge segmentation area;
removing edge pixels in the image edge which are not adjacent to the reserved edge segmentation area, so as to obtain one or more candidate areas consisting of the reserved edge segmentation area and the adjacent edge pixels; in the present embodiment, the candidate regions obtained by data fusion are shown in fig. 14;
wherein T1 and T2 are both preset similarity thresholds, and T1 > T2; in this embodiment, to ensure high accuracy of the composite identification of the ground building, T1 is set within 0.7 ≤ T1 ≤ 0.9 and T2 within 0.1 ≤ T2 ≤ 0.3; preferably, T1 = 0.8 and T2 = 0.2;
In this embodiment, the similarity between the edge segmentation region and the infrared image region at the same position, or the similarity between the edge segmentation region and the laser image region at the same position, is calculated by:
obtaining the pixel area S1 of the edge segmentation region and the pixel area S2 of the infrared image area or the laser image area at the same position within the segmented region, and calculating the similarity between the regions from the pixel areas S1 and S2 as:
sim = S2 / S1
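The following is a minimal Python sketch of the fusion decision in step (5), taking the similarity as the ratio S2/S1 of the overlapping pixel area to the edge segmentation region's pixel area (an assumed reading of the formula above). The edge_labels array could come from an edge segmentation labeling step such as the earlier sketch, and ir_mask and laser_mask are the unions of the extracted infrared and laser regions in the same window; all names are illustrative.

```python
import numpy as np

def similarity(edge_mask: np.ndarray, other_mask: np.ndarray) -> float:
    s1 = edge_mask.sum()                             # pixel area of the edge segmentation region
    s2 = (edge_mask & other_mask).sum()              # pixel area of the other region inside it
    return s2 / float(s1) if s1 else 0.0             # assumed form of the similarity

def fuse_regions(edge_labels: np.ndarray, ir_mask: np.ndarray, laser_mask: np.ndarray,
                 T1: float = 0.8, T2: float = 0.2) -> np.ndarray:
    """Return a boolean mask of the retained edge segmentation regions."""
    keep = np.zeros(edge_labels.shape, dtype=bool)
    for z in range(1, int(edge_labels.max()) + 1):
        zone = edge_labels == z
        sim1 = similarity(zone, ir_mask)             # against the infrared image region
        sim2 = similarity(zone, laser_mask)          # against the laser image region
        if sim2 > T1 or (T2 <= sim2 <= T1 and sim1 > T1):
            keep |= zone                             # retain the edge segmentation region Z
    return keep
```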
(6) extracting the regional characteristics of each candidate region, and matching the extracted regional characteristics with the regional characteristics of the building target, so as to identify the target region where the building target is located from the candidate regions;
in an optional embodiment, step (6) specifically includes:
obtaining the regional characteristics of a building target, and extracting the regional characteristics of each candidate region;
for any candidate area C, with the area characteristics of the building target as reference, respectively obtaining the relative error percentage of each characteristic component in the area characteristics of the candidate area C, and obtaining the sum of the relative errors of all the characteristic components;
the candidate region in which the relative error percentage of every feature component is smaller than a preset error threshold T3, and whose sum of the relative errors of all feature components is the smallest, is determined as the target region; in the present embodiment, the finally identified target region is shown in fig. 15;
wherein the error threshold T3 can be determined according to actual needs; in the embodiment of the invention, preferably T3 = 20%.
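The following is a minimal Python sketch of the feature matching in step (6): each candidate region's features are compared with the reference features of the building target by relative error percentage, candidates whose every component error is below T3 are kept, and the one with the smallest error sum is selected. The feature keys (e.g. height, width, area, rectangularity, distance) and dictionary layout are illustrative assumptions.

```python
def identify_target(candidates, reference, T3=0.20):
    """candidates: list of feature dicts; reference: dict of building-target features."""
    best, best_sum = None, float("inf")
    for cand in candidates:
        # relative error percentage of each feature component against the reference
        errors = {k: abs(cand[k] - reference[k]) / abs(reference[k]) for k in reference}
        if all(e < T3 for e in errors.values()):     # every component within the error threshold
            total = sum(errors.values())             # sum of relative errors of all components
            if total < best_sum:
                best, best_sum = cand, total
    return best                                      # the identified target region (or None)
```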
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (4)

1. A composite identification method for ground building data fusion, comprising:
(1) respectively obtaining an infrared image and a laser image of a ground building;
(2) sequentially carrying out mathematical morphology preprocessing and image segmentation on the infrared image to obtain a plurality of infrared image areas;
(3) obtaining the image edge of the infrared image in the field of view of a laser detector, and obtaining a plurality of edge segmentation areas according to the image edge mark;
(4) sequentially preprocessing and extracting the laser image to obtain a plurality of laser image areas;
(5) performing data fusion on the infrared image area, the edge segmentation area and the laser image area to obtain one or more candidate areas;
(6) extracting the regional characteristics of each candidate region, and matching the extracted regional characteristics with the regional characteristics of the building target, so as to identify the target region where the building target is located from the candidate regions;
wherein the laser detector is used for acquiring the laser image;
in the step (2), the mathematical morphology preprocessing is performed on the infrared image, and the mathematical morphology preprocessing comprises the following steps:
using a structural element SE1 having a size smaller than the size of the building target image, performing a morphological opening operation on the infrared image to suppress the background and retain the target area, thereby obtaining a first background-suppressed image;
using a structural element SE2 having a size larger than the size of the building target image, performing a morphological opening operation on the infrared image to suppress the background and suppress the target area, thereby obtaining a second background-suppressed image;
subtracting the second background-suppressed image from the first background-suppressed image to make the target area stand out from the background, and setting to 0 the gray value of any pixel whose value in the subtraction result is less than 0, thereby obtaining a third background-suppressed image;
using a structural element SE3 having a size smaller than the size of the building target image, performing a morphological opening operation on the third background-suppressed image to remove burrs at the edges of its regions, thereby obtaining an infrared preprocessed image;
the step (5) comprises:
for any one edge segmentation region Z, respectively calculating the similarity sim1 between the edge segmentation region Z and the infrared image region at the same position and the similarity sim2 between the edge segmentation region Z and the laser image region at the same position; if sim2 > T1, or T2 ≤ sim2 ≤ T1 and sim1 > T1, the edge segmentation region Z is retained; otherwise, the edge segmentation region Z is removed; T1 and T2 are both preset similarity thresholds, and T1 > T2;
Traversing each edge segmentation area to remove part of the edge segmentation area;
and removing edge pixels which are not adjacent to the reserved edge segmentation area in the image edge so as to obtain one or more candidate areas consisting of the reserved edge segmentation area and the adjacent edge pixels.
2. A ground building data fused composite identification method as claimed in claim 1, wherein said step (3) comprises:
extracting the image edge of the infrared image within the field of view of the laser detector, and performing a morphological dilation operation on the extracted image edges using a structural element SE4 to connect the fracture edges therein;
and marking non-edge pixels of the infrared image according to the image edge, thereby obtaining a plurality of edge segmentation areas formed by segmenting the image edge.
3. The ground building data fusion composite identification method according to claim 1, wherein the similarity between the edge segmentation area and the infrared image area at the same position or the similarity between the edge segmentation area and the laser image area at the same position is calculated by:
obtaining the pixel area S1 of the edge segmentation region and the pixel area S2 of the infrared image area or the laser image area at the same position within the segmented region, and calculating the similarity between the regions from the pixel areas S1 and S2 as:
sim = S2 / S1
4. a ground building data fused composite identification method as claimed in claim 1, wherein said step (6) comprises:
obtaining the regional characteristics of the building target, and extracting the regional characteristics of each candidate region;
for any candidate area C, with the area characteristics of the building target as a reference, respectively obtaining the relative error percentage of each characteristic component in the area characteristics of the candidate area C, and obtaining the sum of the relative errors of all the characteristic components;
identifying, as the target region, the candidate region in which the relative error percentage of every feature component is smaller than a preset error threshold T3 and whose sum of the relative errors of all feature components is the smallest.
CN201811630977.6A 2018-12-29 2018-12-29 Composite identification method for ground building data fusion Active CN109784229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811630977.6A CN109784229B (en) 2018-12-29 2018-12-29 Composite identification method for ground building data fusion


Publications (2)

Publication Number Publication Date
CN109784229A CN109784229A (en) 2019-05-21
CN109784229B (en) 2020-10-30

Family

ID=66498843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811630977.6A Active CN109784229B (en) 2018-12-29 2018-12-29 Composite identification method for ground building data fusion

Country Status (1)

Country Link
CN (1) CN109784229B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444839B (en) * 2020-03-26 2023-09-08 北京经纬恒润科技股份有限公司 Target detection method and system based on laser radar
CN111680537A (en) * 2020-03-31 2020-09-18 上海航天控制技术研究所 Target detection method and system based on laser infrared compounding


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108010085B (en) * 2017-11-30 2019-12-31 西南科技大学 Target identification method based on binocular visible light camera and thermal infrared camera

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000275341A (en) * 1999-03-24 2000-10-06 Nec Corp Guidance apparatus
CN102254158A (en) * 2011-07-07 2011-11-23 中国科学院上海技术物理研究所 Kalman filtering-based infrared target real-time track detection method
CN103745476A (en) * 2014-01-22 2014-04-23 湘潭大学 Mobile phone clapboard sand detection method based on line scanning local peak analysis
CN104536009A (en) * 2014-12-30 2015-04-22 华中科技大学 Laser infrared composite ground building recognition and navigation method
CN105631799A (en) * 2015-12-18 2016-06-01 华中科技大学 Moving platform laser infrared fusion detection and recognition system
CN108509918A (en) * 2018-04-03 2018-09-07 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios;Inho Lee et al.;《Journal of Mechanical Science and Technology》;20170202;第31卷(第6期);第2997-3003页 *
Edge Extraction by Merging the 3D Point Cloud and 2D Image Data;Ying Wang et al.;《Proceedings of the 10th International Conference & Expo on Emerging Technologies for a Smarter World》;20131022;第1-9页 *
Mapping Infrared Data on Terrestrial Laser Scanning 3D Models of Buildings;Mario Ivan Alba et al.;《Remote Sensing》;20110825;第1847-1870页 *
Research on sea-surface infrared ship target detection algorithms based on mathematical morphology; 李积俊; China Master's Theses Full-text Database, Information Science and Technology; 2013-02-15; Chapter 3 of the main text *
Application of mathematical morphology in infrared image preprocessing; 孙亮; China Master's Theses Full-text Database, Information Science and Technology; 2005-12-15; Chapter 3 of the main text *

Also Published As

Publication number Publication date
CN109784229A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN109559324B (en) Target contour detection method in linear array image
CN103714541B (en) Method for identifying and positioning building through mountain body contour area constraint
EP2811423B1 (en) Method and apparatus for detecting target
US8538082B2 (en) System and method for detecting and tracking an object of interest in spatio-temporal space
EP2426642B1 (en) Method, device and system for motion detection
US9025875B2 (en) People counting device, people counting method and people counting program
CN110097093A (en) A kind of heterologous accurate matching of image method
CN109086724B (en) Accelerated human face detection method and storage medium
WO2016106955A1 (en) Laser infrared composite ground building recognition and navigation method
CN104978567B (en) Vehicle checking method based on scene classification
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
EP3593322B1 (en) Method of detecting moving objects from a temporal sequence of images
CN112017243B (en) Medium visibility recognition method
CN114782499A (en) Image static area extraction method and device based on optical flow and view geometric constraint
US20210350705A1 (en) Deep-learning-based driving assistance system and method thereof
CN109784229B (en) Composite identification method for ground building data fusion
Zhang et al. Multiple Saliency Features Based Automatic Road Extraction from High‐Resolution Multispectral Satellite Images
CN108416798A (en) A kind of vehicle distances method of estimation based on light stream
CN110675442B (en) Local stereo matching method and system combined with target recognition technology
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN111161308A (en) Dual-band fusion target extraction method based on key point matching
CN112016558B (en) Medium visibility recognition method based on image quality
EP4287137A1 (en) Method, device, equipment, storage media and system for detecting drivable space of road

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant