CN114022753B - Significance and edge analysis-based empty small target detection algorithm - Google Patents
- Publication number
- CN114022753B CN114022753B CN202111352007.6A CN202111352007A CN114022753B CN 114022753 B CN114022753 B CN 114022753B CN 202111352007 A CN202111352007 A CN 202111352007A CN 114022753 B CN114022753 B CN 114022753B
- Authority
- CN
- China
- Prior art keywords
- target
- image
- unmanned aerial
- aerial vehicle
- scale
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Abstract
First, multi-scale Gaussian blur is applied to the input image; salient target features are extracted using the property that the small-scale blurred image preserves target detail while the large-scale blurred image carries a large amount of neighborhood information. Second, the salient target features are binarized according to the global feature distribution. Third, the binarization result is dilated, connecting stray features generated by isolated dead pixels, noise and multi-texture backgrounds. Fourth, size screening is performed on the dilation result, and the unmanned aerial vehicle target position is given by combining the target feature intensity and the height information. Through these steps, the algorithm filters out interference caused by isolated dead pixels, noise and complex backgrounds, and has strong robustness. The system replaces the conventional practice at airports of manually inspecting for unmanned aerial vehicle hazards, which requires real-time human participation, and helps prevent unmanned aerial vehicle incursions into airports.
Description
Technical Field
The invention designs an aerial small target detection algorithm based on saliency and edge analysis, and belongs to the technical field of video processing.
Background
With the development of the low-cost, miniaturized unmanned aerial vehicle industry, unmanned aerial vehicles have rapidly entered the civilian consumer market, and a drone with excellent performance can be purchased for a few thousand yuan. However, unlicensed unmanned aerial vehicles are currently used in large numbers, which poses a hidden danger to public safety. How to automatically detect unmanned aerial vehicles in the vicinity of an airport has become an important public security problem.
To this end, the invention designs a detection algorithm for small targets that integrates multiple strategies such as saliency and edge analysis. It can automatically detect small and low-contrast targets, replaces the conventional practice at airports of manually inspecting for unmanned aerial vehicle hazards, which requires real-time human participation, and helps prevent unmanned aerial vehicle incursions into airports.
Disclosure of Invention
Object of the Invention
The invention aims to design an aerial small target detection algorithm. The system integrates multiple strategies such as saliency and edge analysis, can automatically detect small and low-contrast targets, overcomes the drawback that airports conventionally must inspect for unmanned aerial vehicle hazards manually with real-time human participation, and helps prevent unmanned aerial vehicle incursions into airports.
Technical solution
The invention designs an aerial small target detection algorithm based on saliency and edge detection, with the following processing flow:
step one, multi-scale Gaussian blur is performed on the input image; salient target features are extracted using the property that the small-scale blurred image preserves target detail while the large-scale blurred image carries a large amount of neighborhood information;
step two, the salient target features are binarized according to the global feature distribution;
step three, the binarization result is dilated, connecting stray features generated by isolated dead pixels, noise and multi-texture backgrounds;
step four, size screening is performed on the dilation result, and the unmanned aerial vehicle target position is given by combining the target feature intensity and the height information.
Through the above steps, the saliency and edge information of targets in the image are combined to realize automatic detection of unmanned aerial vehicle targets in the image. The system dilates isolated points in the image features, merges multi-texture background objects, and filters by target size, so it can effectively eliminate interference from cloud clusters and buildings in common complex scenes. Reserved interfaces make the parameters easy to adjust, so the method is suitable for imaging with lenses of any focal length, improving the flexibility of the system. The algorithm replaces the conventional practice at airports of manually inspecting for unmanned aerial vehicle hazards, which requires real-time human participation, and helps prevent unmanned aerial vehicle incursions into airports.
The method of step one, "carrying out multi-scale Gaussian blur on the input image," is as follows: small-range Gaussian blur is applied to the image to extract detail texture information, while large-range mean blur is applied to the original image to extract wide-range neighborhood information. The pixel-wise Euclidean distance between the two is then taken to extract the salient target features.
The method of step two, "binarizing the salient target features according to the global feature distribution," is as follows: histogram statistics are performed on the salient feature map obtained in step one; the gray level maximizing the inter-class variance is taken as the segmentation threshold; the feature map is binarized, suppressing interference from isolated dead pixels and noise in the feature map.
The "dilation of the binarization result" described in step three proceeds as follows: considering that the binary map obtained in step two contains interference caused by complex background objects such as cloud clusters and buildings, the binary map is dilated over a suitable range, connecting the stray features of complex multi-texture backgrounds.
The size screening of step four, giving the unmanned aerial vehicle target position by combining the target feature intensity and the height information, proceeds as follows: the dilation result obtained in step three eliminates interference from isolated dead pixels and noise and joins the stray features of complex backgrounds. Step four searches the connected domains of the resulting dilation map and estimates the expected target size from the current lens focal length to guide size screening of the connected domains. The screened connected domains are then scored by their feature intensity in the map from step one, the scores are normalized using the height information, and if the highest-scoring connected domain exceeds a preset value, the unmanned aerial vehicle target position is returned; otherwise, there is no unmanned aerial vehicle target in the current image.
Advantages of the invention
The advantage of the invention is that the system filters out interference caused by isolated dead pixels, noise and complex backgrounds, and has strong robustness. The system reserves an interface through which the expected target size can be adjusted according to the camera lens focal length, giving high adaptability. The algorithm replaces the conventional practice at airports of manually inspecting for unmanned aerial vehicle hazards, which requires real-time human participation, and helps prevent unmanned aerial vehicle incursions into airports.
Drawings
Fig. 1 is a flow chart of the operation of the unmanned aerial vehicle automatic monitoring system.
Detailed Description
The invention designs an aerial small target detection algorithm integrating saliency and edge detection. Its processing flow is shown in figure 1, and the specific processing steps are as follows:
step one, multi-scale Gaussian blur is performed on the input image; salient target features are extracted using the property that the small-scale blurred image preserves target detail while the large-scale blurred image carries a large amount of neighborhood information;
step two, the salient target features are binarized according to the global feature distribution;
step three, the binarization result is dilated, connecting stray features generated by isolated dead pixels, noise and multi-texture backgrounds;
step four, size screening is performed on the dilation result, and the unmanned aerial vehicle target position is given by combining the target feature intensity and the height information.
The method of step one is as follows:
Small-range Gaussian blur is applied to the image to extract detail texture information, while large-range mean blur is applied to the original image to extract wide-range neighborhood information. The pixel-wise Euclidean distance between the two is taken to extract the salient target features. The pixel-wise Euclidean distance is calculated as follows:
(1) First, the input image is acquired and denoted I_in, so that I_in(x, y) is the pixel value at position (x, y); the small-scale Gaussian blur transform is denoted f_S and the large-scale mean blur transform f_L.
(2) The small-scale and large-scale blurred images are calculated: f_S(I_in(x, y)) and f_L(I_in(x, y)).
(3) The image saliency map is calculated using the following formula:
I_out(x, y) = (f_S(I_in(x, y)) − f_L(I_in(x, y)))²
(4) Max-min normalization is applied to the salient feature map so that the output image range is [0, 255].
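As a concrete illustration of step one, the following sketch computes the saliency map from a small-scale Gaussian blur and a large-scale mean blur. The kernel parameters (σ = 1, a 15-pixel mean window) and the pure-NumPy separable convolution are illustrative assumptions; the patent does not fix these values.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # normalized 1-D Gaussian kernel for separable filtering
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_separable(img, k):
    # convolve rows, then columns, with the 1-D kernel (zero padding, 'same' size)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def saliency_map(img, sigma_s=1.0, mean_size=15):
    # I_out(x, y) = (f_S(I_in(x, y)) - f_L(I_in(x, y)))^2, max-min normalized to [0, 255]
    img = img.astype(np.float64)
    f_s = blur_separable(img, gaussian_kernel1d(sigma_s, 3))   # small-scale Gaussian blur f_S
    f_l = blur_separable(img, np.ones(mean_size) / mean_size)  # large-scale mean blur f_L
    sal = (f_s - f_l) ** 2
    rng = sal.max() - sal.min()
    return 255.0 * (sal - sal.min()) / rng if rng > 0 else sal
```

On a dark frame with a single bright pixel, the map peaks at that pixel — the behaviour the step relies on for small, low-contrast targets.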
The method of step two is as follows:
Histogram statistics are performed on the salient feature map obtained in step one; the gray level maximizing the inter-class variance is taken as the segmentation threshold; the feature map is binarized, suppressing interference from isolated dead pixels and noise in the feature map. The inter-class variance is calculated as follows:
(1) Calculate the cumulative probability p_k of gray levels 0 to k (k from 0 to 255), the cumulative mean m_k, and the global image mean m_G, where n_i is the number of pixels at gray level i and N is the total number of pixels:
p_k = Σ_{i=0}^{k} n_i/N,  m_k = Σ_{i=0}^{k} i·n_i/N,  m_G = Σ_{i=0}^{255} i·n_i/N
(2) Calculate the inter-class variance of gray level k:
σ_B²(k) = (m_G·p_k − m_k)² / (p_k(1 − p_k))
(3) Take the gray level maximizing the inter-class variance as the segmentation threshold η:
η = argmax_{0≤k≤255} σ_B²(k)
The feature map is segmented with the threshold η and binarized into a map composed of 0 and 255.
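The inter-class variance thresholding of step two is the classical Otsu method; a minimal NumPy sketch follows, with variable names mirroring the quantities above (p_k, m_k, m_G, η).

```python
import numpy as np

def otsu_threshold(img):
    # histogram probabilities of gray levels 0..255
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    prob = hist / hist.sum()
    levels = np.arange(256)
    p_k = np.cumsum(prob)            # cumulative probability up to gray level k
    m_k = np.cumsum(levels * prob)   # cumulative mean up to gray level k
    m_g = m_k[-1]                    # global image mean
    denom = p_k * (1.0 - p_k)
    # inter-class variance sigma_B^2(k); zero where one class would be empty
    sigma_b = np.where(denom > 0,
                       (m_g * p_k - m_k) ** 2 / np.where(denom > 0, denom, 1.0),
                       0.0)
    eta = int(np.argmax(sigma_b))    # threshold maximizing inter-class variance
    return eta, np.where(img > eta, 255, 0).astype(np.uint8)
```

For a strongly bimodal feature map the returned η lands between the two modes, giving the 0/255 binary map described in the text.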
The method of step three is as follows:
Considering that the binary map obtained in step two contains interference caused by complex background objects such as cloud clusters and buildings, the binary map is dilated over a suitable range, connecting the stray features of complex multi-texture backgrounds.
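Step three is a standard morphological dilation. The sketch below uses a square structuring element whose radius is an assumption (in practice it would be tied to the expected target scale).

```python
import numpy as np

def dilate(binary, radius=1):
    # dilation with a (2*radius+1) x (2*radius+1) square structuring element:
    # each output pixel takes the maximum over its neighborhood, so nearby
    # stray features grow and merge into a single connected domain
    h, w = binary.shape
    padded = np.pad(binary, radius, mode='constant')
    out = np.zeros_like(binary)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out
```

A single foreground pixel grows into a 3×3 block at radius 1, which is what links fragmented multi-texture background responses into one region for the size screening of step four.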
The method of step four is as follows:
The dilation result obtained in step three eliminates interference from isolated dead pixels and noise and joins the stray features of complex backgrounds. Step four searches the connected domains of the resulting dilation map and estimates the expected target size from the current lens focal length to guide size screening of the connected domains. The screened connected domains are then scored by their feature intensity in the map from step one, the scores are normalized using the height information, and if the highest-scoring connected domain exceeds a preset value, the unmanned aerial vehicle target position is returned; otherwise, there is no unmanned aerial vehicle target in the current image. The specific steps are as follows:
(1) Let I_in be the dilation map and extract the connected domains in the image:
C = Contours(I_in)
(2) Traverse the connected domains and perform size filtering:
C′ = {C_k ∈ C | Area(C_k) ∈ [A_min, A_max]}
(3) Let I_out be the feature map obtained in step one, and score each screened connected domain by its feature intensity.
(4) Normalize the score using the height information:
score′_k = score_k × log(y′_k)
The normalized score is compared against a threshold to determine whether an unmanned aerial vehicle is present in the field of view and, if so, its position.
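Step four can be sketched as connected-domain extraction, size screening over [A_min, A_max], and scoring. The patent gives only the normalization score′_k = score_k × log(y′_k) and leaves the feature-intensity score unspecified, so the mean-intensity score and the coordinate used as height below are illustrative assumptions.

```python
import numpy as np
from collections import deque

def connected_domains(binary):
    # 8-connected component extraction via BFS flood fill
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    comps = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                q, pix = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                q.append((ny, nx))
                comps.append(pix)
    return comps

def detect(binary, feature_map, a_min, a_max, score_thresh):
    # size-screen connected domains, score by feature intensity, weight by height
    best_score, best_pos = None, None
    for pix in connected_domains(binary):
        if not (a_min <= len(pix) <= a_max):                # size screening C'
            continue
        strength = float(np.mean([feature_map[y, x] for y, x in pix]))  # assumed score_k
        cy = float(np.mean([y for y, _ in pix]))
        cx = float(np.mean([x for _, x in pix]))
        score = strength * np.log(cy + 1.0)                 # score'_k = score_k * log(y'_k), y'_k assumed
        if best_score is None or score > best_score:
            best_score, best_pos = score, (int(round(cy)), int(round(cx)))
    return best_pos if best_score is not None and best_score > score_thresh else None
```

The dilation map of step three would feed `binary` and the saliency map of step one would feed `feature_map`; a return of `None` corresponds to "no unmanned aerial vehicle target in the current image."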
Claims (1)
1. An aerial small target detection algorithm based on saliency and edge analysis, characterized by the following processing flow:
step one, multi-scale Gaussian blur is performed on the input image, and salient target features are extracted using the property that the small-scale blurred image preserves target detail while the large-scale blurred image carries a large amount of neighborhood information,
step two, the salient target features are binarized according to the global feature distribution,
step three, the binarization result is dilated, connecting stray features generated by isolated dead pixels, noise and multi-texture backgrounds,
step four, size screening is performed on the dilation result, and the unmanned aerial vehicle target position is given by combining the target feature intensity and the height information,
the method of step one is as follows: small-range Gaussian blur is applied to the image to extract detail texture information, while large-range mean blur is applied to the original image to extract wide-range neighborhood information; the pixel-wise Euclidean distance between the two is taken to extract the salient target features, calculated as follows:
(1) First, the input image is acquired and denoted I_in, so that I_in(x, y) is the pixel value at position (x, y); the small-scale Gaussian blur transform is denoted f_S and the large-scale mean blur transform f_L,
(2) The small-scale and large-scale blurred images f_S(I_in(x, y)) and f_L(I_in(x, y)) are calculated,
(3) The image saliency map is calculated using the following formula:
I_out(x, y) = (f_S(I_in(x, y)) − f_L(I_in(x, y)))²
(4) Max-min normalization is applied to the salient feature map so that the output image range is [0, 255],
the method of step two is as follows:
histogram statistics are performed on the salient feature map obtained in step one; the gray level maximizing the inter-class variance is taken as the segmentation threshold, the feature map is binarized, and interference from isolated dead pixels and noise in the feature map is suppressed; the inter-class variance is calculated as follows:
(1) Calculate the cumulative probability p_k of gray level k, the cumulative mean m_k and the global image mean m_G, where n_i is the number of pixels at gray level i and N is the total number of pixels:
p_k = Σ_{i=0}^{k} n_i/N,  m_k = Σ_{i=0}^{k} i·n_i/N,  m_G = Σ_{i=0}^{255} i·n_i/N
(2) Calculate the inter-class variance of gray level k:
σ_B²(k) = (m_G·p_k − m_k)² / (p_k(1 − p_k))
(3) Take the gray level maximizing the inter-class variance as the segmentation threshold η:
η = argmax_{0≤k≤255} σ_B²(k)
the feature map is segmented with the threshold η and binarized into a map composed of 0 and 255,
the method of step three is as follows:
considering that the binary map obtained in step two contains interference caused by complex background objects, the binary map is dilated over a suitable range, connecting the stray features of complex multi-texture backgrounds,
the method of step four is as follows: the dilation result obtained in step three eliminates interference from isolated dead pixels and noise and joins the stray features of complex backgrounds; step four searches the connected domains of the resulting dilation map, estimates the expected target size from the current lens focal length to guide size screening of the connected domains, then scores the screened connected domains by their feature intensity in step one, normalizes the scores using the height information, and returns the unmanned aerial vehicle target if the highest-scoring connected domain exceeds a preset value; otherwise there is no unmanned aerial vehicle target in the current image, with the specific steps as follows:
(1) Let I_in be the dilation map and extract the connected domains in the image:
C = Contours(I_in)
(2) Traverse the connected domains and perform size filtering:
C′ = {C_k ∈ C | Area(C_k) ∈ [A_min, A_max]}
(3) Let I_out be the feature map obtained in step one, and score each screened connected domain by its feature intensity,
(4) Normalize the score using the height information:
score′_k = score_k × log(y′_k)
the normalized score is compared against a threshold to determine whether an unmanned aerial vehicle is present in the field of view and, if so, its position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111352007.6A CN114022753B (en) | 2021-11-16 | 2021-11-16 | Significance and edge analysis-based empty small target detection algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114022753A CN114022753A (en) | 2022-02-08 |
CN114022753B (en) | 2024-05-14 |
Family
ID=80064313
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111352007.6A Active CN114022753B (en) | 2021-11-16 | 2021-11-16 | Significance and edge analysis-based empty small target detection algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114022753B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115294478B (en) * | 2022-07-28 | 2024-04-05 | 北京航空航天大学 | Aerial unmanned aerial vehicle target detection method applied to modern photoelectric platform |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103996209A (en) * | 2014-05-21 | 2014-08-20 | 北京航空航天大学 | Infrared vessel object segmentation method based on salient region detection |
WO2016076449A1 (en) * | 2014-11-11 | 2016-05-19 | Movon Corporation | Method and system for detecting an approaching obstacle based on image recognition |
CN109325935A (en) * | 2018-07-24 | 2019-02-12 | 国网浙江省电力有限公司杭州供电公司 | A kind of transmission line faultlocating method based on unmanned plane image |
WO2020211522A1 (en) * | 2019-04-15 | 2020-10-22 | 京东方科技集团股份有限公司 | Method and device for detecting salient area of image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||