CN111709885A - Infrared weak and small target enhancement method based on region of interest and image mark

Infrared weak and small target enhancement method based on region of interest and image mark

Info

Publication number
CN111709885A
CN111709885A
Authority
CN
China
Prior art keywords
image
region
area
target
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010354472.2A
Other languages
Chinese (zh)
Other versions
CN111709885B (en)
Inventor
李武周
邵铭
胡启立
程相正
康华超
刘伟
樊仁杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unit 63891 Of Pla
Original Assignee
Unit 63891 Of Pla
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unit 63891 Of Pla filed Critical Unit 63891 Of Pla
Priority to CN202010354472.2A priority Critical patent/CN111709885B/en
Publication of CN111709885A publication Critical patent/CN111709885A/en
Application granted granted Critical
Publication of CN111709885B publication Critical patent/CN111709885B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 5/70 Denoising; Smoothing
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/267 Segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06T 2207/10048 Infrared image
    • G06T 2207/20032 Median filtering
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an infrared weak and small target enhancement method based on a region of interest and image marking, which comprises the following steps: preprocessing a collected infrared image containing weak and small infrared targets; screening local maximum regions containing all targets from the preprocessed image under four constraints, namely depth limitation, area limitation, shape limitation and area expansion speed limitation; expanding each screened local maximum region outwards by a certain distance to form a region to be segmented; obtaining a segmentation threshold for each region to be segmented and segmenting it; creating a marker matrix of the same size as the original image; and mapping the background region and all target regions of the original image, guided by the marker matrix, with different mapping methods and parameters respectively, the resulting image being the final enhancement result. The method achieves a good enhancement effect on weak and small infrared targets and is of significance in the field of infrared weak and small target detection.

Description

Infrared weak and small target enhancement method based on region of interest and image mark
Technical Field
The invention belongs to the technical field of infrared image processing, relates to infrared target detection, and particularly relates to an infrared weak and small target enhancement method based on a region of interest and image marking, used for detecting weak and small targets such as airplanes, missiles and satellites in infrared images.
Background
Infrared weak and small target detection is one of the key technologies of infrared search and track systems. It has long been a research hotspot in the field of infrared recognition, can effectively extend the monitoring range, and plays an important role in navigation, air defense, security monitoring and other fields. Two difficulties exist in infrared weak and small target detection: (1) the long operating distance of the infrared sensor reduces the size of a target on the displayed image, so a target occupies only a few to dozens of pixels and a small target shows no obvious texture or shape features; (2) because of background radiation and the image sensor itself, infrared images carry serious random noise and a large amount of clutter; small targets are often submerged in a complex background and the signal-to-noise ratio of the image is low, which makes detection of weak and small targets in a complex background very difficult. In practical engineering applications, infrared small target detection algorithms can be broadly divided into two main categories: track-before-detect (TBD) algorithms and detect-before-track (DBT) algorithms. A TBD algorithm accumulates multiple frames to detect weak targets, while a DBT algorithm locates the targets in the first frame in which they appear and then uses the spatio-temporal consistency of the targets across consecutive images to estimate their positions with tracking techniques.
However, existing infrared weak and small target detection algorithms still have certain defects and shortcomings, such as missed detections, false detections and late detection, so manual interpretation is still sometimes needed during detection and recognition to quickly judge whether a suspected target is real. But the gray scale range of the data collected by an infrared detector can reach 2^14 or even 2^16 levels, and it can only be displayed after being compressed into the 2^8 gray scale range. An infrared weak and small target has a small imaging area and a small gray difference from the background; after mapping into the 2^8 gray scale range, that difference is usually less than 2 gray levels, so the target is difficult to identify with the naked eye, which makes manual interpretation difficult.
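The loss described above can be reproduced numerically. The sketch below uses assumed, purely illustrative values (a 14-bit frame, an 8000-count flat background, a target only 60 counts brighter) and linearly compresses the full 14-bit range to 8 bits:

```python
import numpy as np

# Hypothetical 14-bit infrared frame: flat background at 8000 counts,
# with a 3x3 dim target only 60 counts brighter (all values illustrative).
img14 = np.full((64, 64), 8000, dtype=np.uint16)
img14[30:33, 30:33] += 60

# Naive global compression of the full 14-bit range (0..16383) into 8 bits.
img8 = (img14.astype(np.float64) * 255 / 16383).astype(np.uint8)

contrast = int(img8[31, 31]) - int(img8[0, 0])
print(contrast)  # the 60-count difference survives as only 1 gray level
```

The 60-count target collapses to a single gray level of contrast, below the roughly 2-level difference a human eye can reliably pick out.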
Disclosure of Invention
To remedy the defects of the prior art and solve the problem that infrared dim and small targets are difficult to identify during manual interpretation, the invention provides an infrared weak and small target enhancement method based on a region of interest and image marking, exploiting the facts that the imaging area of an infrared target is small and that the region of interest to an observer is only a very small part of the image.
In order to achieve the purpose, the invention adopts the following technical scheme:
an infrared weak and small target enhancement method based on a region of interest and image marking comprises the following steps:
s1, image preprocessing: preprocessing the collected infrared image containing the infrared dim target by using a maximum median filtering algorithm, and filtering random noise of the infrared image;
s2, screening suspected targets: under the restriction of 4 constraints of depth limitation, area limitation, shape limitation and area expansion speed limitation, screening out a local maximum value region containing all targets from the image obtained in the step S1;
s3, dividing the regions to be divided: expanding each screened local maximum value region in the step S2 outwards by a certain distance, wherein the formed region is the finally determined region of interest, and the region is also the region to be segmented to be searched;
s4, image segmentation: obtaining a segmentation threshold value of each region to be segmented obtained in the step S3 by using a maximum inter-class variance method, and segmenting the region to be segmented obtained in the step S3;
s5, creating an image tag matrix: creating a marking matrix with the same size as the original image in the step S1 according to the segmentation result in the step S4, marking and storing the segmentation result, and marking the positions corresponding to different targets in the marking matrix and the image as different marks;
s6, mapping the image labels one by one: using the image label matrix of step S5, a target mapping scale parameter controls the range that the targets occupy within the 2^8 gray levels; the background region and all target regions of the original image are mapped with different mapping methods and parameters respectively, and once every region of the image has been mapped, the resulting image is the final enhancement result.
Further, in step S1, the image is preprocessed using a maximum median filtering algorithm, calculated according to the following formula:

Fmax-median(i, j) = max{mid1, mid2, mid3, mid4}    (1)

where

mid1 = median{I(i, j−(m−1)/2), …, I(i, j), …, I(i, j+(m−1)/2)}
mid2 = median{I(i−(m−1)/2, j), …, I(i, j), …, I(i+(m−1)/2, j)}
mid3 = median{I(i−(m−1)/2, j−(m−1)/2), …, I(i, j), …, I(i+(m−1)/2, j+(m−1)/2)}
mid4 = median{I(i−(m−1)/2, j+(m−1)/2), …, I(i, j), …, I(i+(m−1)/2, j−(m−1)/2)}    (2)

In the above formulas, Fmax-median(i, j) is the maximum median filtering result at coordinate (i, j); mid1, mid2, mid3 and mid4 are the pixel medians along the i direction, the j direction, the main diagonal and the secondary diagonal of the processing window respectively; median{·} is the median function; I(i, j) is the gray value at position (i, j) of the original image; and m is the scale parameter of the filtering window, generally an odd number not less than 3.
Further, in step S2, the screening of suspected targets includes the following steps: first, traverse the whole image to extract every maximum pk and its position Dk(0); then gradually decrease the maximum pk so that the region located at Dk(0) gradually expands outwards, forming regions Dk(1), Dk(2), …, Dk(h), …, Dk(n), where Dk(h) denotes the single connected domain formed after the kth maximum is lowered by h gray levels and expanded outwards; sk(h) denotes the total number of pixels in region Dk(h); and a and b denote the major and minor axes of the smallest ellipse enclosing region Dk(h). The region Dk(h) with the minimum h that satisfies the following four constraints is a region where a possible suspected target is located:

h ≤ hlim
sk(h) ≤ slim
sk(h) − sk(h−1) ≤ Δslim
a/b ≤ llim    (3)

where hlim is the depth constraint limit, slim the area constraint limit, Δslim the area expansion speed constraint limit, and llim the shape constraint limit.
Further, in step S3, the distance by which each local maximum region is expanded outwards is generally selected such that the area of the expanded region is 2-5 times the area of the region Dk(h) found in step S2.
Further, in step S4, the maximum between-class variance method is used to obtain the segmentation threshold tk of each region to be segmented by maximizing the between-class variance σB²(t), defined as:

σB²(t) = ω0(t)[μ0(t) − μ]² + ω1(t)[μ1(t) − μ]²    (4)

where

pq = nq/n
ω0(t) = Σ_{q=0}^{t} pq
ω1(t) = Σ_{q=t+1}^{L−1} pq
μ0(t) = Σ_{q=0}^{t} q·pq / ω0(t)
μ1(t) = Σ_{q=t+1}^{L−1} q·pq / ω1(t)
μ = Σ_{q=0}^{L−1} q·pq

In the above formulas, nq is the number of pixels of the qth gray level among all L gray levels of the kth region to be segmented Zk, and n is the total number of pixels in Zk.

After the between-class variance of every possible gray level of each region to be segmented has been obtained, the segmentation threshold tk of Zk is obtained by:

tk = argmax_{0 ≤ t ≤ L−1} σB²(t)    (5)

The kth segmented target region is expressed as {(i, j) | I(i, j) ≥ tk and (i, j) ∈ Zk}.
Further, in step S5, the specific operation of creating the image marker matrix is: create a marker matrix M of the same size as the original image I for marking and storing the segmentation results, such that:

M(i, j) = k, if (i, j) ∈ Zk and I(i, j) ≥ tk; otherwise M(i, j) = 0    (6)

where I(i, j) is the gray value at position (i, j) of the original image I and tk is the kth target segmentation threshold. The region belonging to target k in the original image I is denoted Tk = {(i, j) | M(i, j) = k}, and the region belonging to the background is denoted T0 = {(i, j) | M(i, j) = 0}.
Further, in step S6, the specific operation of mapping the image labels one by one is: with the target mapping scale parameter p (p ∈ [0, 1]) controlling the range that the targets occupy within the whole 2^8 gray levels, map the background region and the target regions of the original image separately, so that the gray values of the background region are mapped into the range [0, ⌊255(1 − p)⌋] and the gray values of the target regions are mapped into the range [⌊255(1 − p)⌋ + 1, 255], where ⌊·⌋ denotes rounding down. The parameter p is chosen flexibly, generally within 0.5-0.9, so that the targets obtain a wide gray range and the background a narrow one, thereby enhancing the targets and weakening the background.
Further, in step S6, a linear mapping algorithm is applied to the background region, as follows:

I′(i, j) = ⌊255(1 − p)·(I(i, j) − Imin)/(Imax − Imin)⌋,  (i, j) ∈ T0    (7)

where I′(i, j) is the gray value at position (i, j) after mapping, and Imin and Imax are the minimum and maximum gray values of the part of the original image with (i, j) ∈ T0.
Further, in step S6, the target regions are mapped with a histogram equalization algorithm, which can be expressed as:

I′(i, j) = ⌊255(1 − p)⌋ + 1 + ⌊(255p − 1)·Σ_{q=0}^{I(i,j)} nq/n⌋,  (i, j) ∈ Tk    (8)

where nq is the number of pixels of the qth gray level among all L gray levels within the part of the original image with (i, j) ∈ Tk, and n is the total number of pixels of that part.
Thanks to the above technical scheme, the invention has the following advantages:
The infrared weak and small target enhancement method based on the region of interest and image marking is simple in logic and easy to implement. It enhances target details, suppresses background brightness and markedly improves the detail features of the target; it achieves a good enhancement effect on weak and small infrared targets, facilitates their manual interpretation and identification, and is of significance in the field of infrared weak and small target detection.
Drawings
FIG. 1 is a flow chart of the infrared small and weak target enhancement method based on the region of interest and image marking according to the present invention;
FIG. 2 is an original infrared image containing small infrared targets in accordance with one embodiment of the present invention;
FIG. 3 shows the maximum regions screened out of the original infrared image of FIG. 2 using the constraint conditions, together with the regions of interest obtained by expanding those maximum regions outwards by a certain distance;
FIG. 4 is the image obtained by mapping the original infrared image into the 2^8 gray scale range with the infrared weak and small target enhancement method based on the region of interest and image marking according to the present invention.
Detailed Description
The technical solution of the present invention will be further described in detail with reference to the accompanying drawings and examples.
As shown in fig. 1 to 4, a method for enhancing an infrared weak and small target based on a region of interest and an image mark includes the following specific steps:
s1, image preprocessing: preprocessing the collected infrared image containing the infrared dim target by using a maximum median filtering algorithm, and filtering random noise of the infrared image; the maximum median filtering algorithm is calculated as follows:
Fmax-median(i, j) = max{mid1, mid2, mid3, mid4}    (1)

where

mid1 = median{I(i, j−(m−1)/2), …, I(i, j), …, I(i, j+(m−1)/2)}
mid2 = median{I(i−(m−1)/2, j), …, I(i, j), …, I(i+(m−1)/2, j)}
mid3 = median{I(i−(m−1)/2, j−(m−1)/2), …, I(i, j), …, I(i+(m−1)/2, j+(m−1)/2)}
mid4 = median{I(i−(m−1)/2, j+(m−1)/2), …, I(i, j), …, I(i+(m−1)/2, j−(m−1)/2)}    (2)

In the above formulas, Fmax-median(i, j) is the maximum median filtering result at coordinate (i, j); mid1, mid2, mid3 and mid4 are the pixel medians along the i direction (horizontal), the j direction (vertical), the main diagonal (left 45 degrees) and the secondary diagonal (right 45 degrees) of the processing window respectively; median{·} is the median function; I(i, j) is the gray value at position (i, j) of the original image; and m is the scale parameter of the filtering window, here m = 5.
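A minimal sketch of the max-median filter of formula (1): the median is taken along four lines through each pixel and the largest of the four medians is kept. Function name and border handling (borders left unchanged) are illustrative, not part of the patent:

```python
import numpy as np

def max_median_filter(img, m=5):
    """Max-median filter: at each pixel take the median along four lines
    through it (horizontal, vertical, main diagonal, secondary diagonal)
    over a window of scale m, then keep the maximum of the four medians.
    Border pixels are left unchanged in this sketch."""
    assert m % 2 == 1 and m >= 3
    r = m // 2
    out = img.astype(np.float64).copy()
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            mid1 = np.median(img[i, j - r:j + r + 1])  # i direction (horizontal)
            mid2 = np.median(img[i - r:i + r + 1, j])  # j direction (vertical)
            mid3 = np.median([img[i + d, j + d] for d in range(-r, r + 1)])  # main diagonal
            mid4 = np.median([img[i + d, j - d] for d in range(-r, r + 1)])  # secondary diagonal
            out[i, j] = max(mid1, mid2, mid3, mid4)
    return out

# A single-pixel noise impulse on a flat background is removed:
noisy = np.full((11, 11), 10.0)
noisy[5, 5] = 100.0
clean = max_median_filter(noisy, m=5)
print(clean[5, 5])  # 10.0
```

Taking the maximum of the four directional medians suppresses isolated impulse noise while damaging small targets less than a plain 2-D median would.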
S2, screening suspected targets: first, traverse the whole image to extract every maximum pk and its position Dk(0); then gradually decrease the maximum pk so that the region located at Dk(0) gradually expands outwards, forming regions Dk(1), Dk(2), …, Dk(h), …, Dk(n), where Dk(h) denotes the single connected domain formed after the kth maximum is lowered by h gray levels and expanded outwards; sk(h) denotes the total number of pixels in region Dk(h); and a and b denote the major and minor axes of the smallest ellipse enclosing region Dk(h). The region Dk(h) with the minimum h that satisfies the following four constraints is a region where a possible suspected target is located:

h ≤ hlim
sk(h) ≤ slim
sk(h) − sk(h−1) ≤ Δslim
a/b ≤ llim    (3)

where hlim, slim, Δslim and llim are empirical thresholds; here hlim = 5, slim = 50 and Δslim = 20 are taken.
The maximum value region screened out is a yellow portion as shown in fig. 3;
s3, determining the regions to be segmented: in order to retain target details as much as possible, each local maximum region Dk(h) satisfying the constraint conditions screened in step S2 is expanded outwards by a distance of 6 pixels, forming the finally determined regions of interest, i.e. the sought regions to be segmented, shown as the white circled parts in fig. 3; Zk denotes the region to be segmented where the kth target is located;
s4, image segmentation: the maximum between-class variance method is used to obtain the segmentation threshold tk of each region to be segmented obtained in step S3 by maximizing the between-class variance σB²(t), defined as:

σB²(t) = ω0(t)[μ0(t) − μ]² + ω1(t)[μ1(t) − μ]²    (4)

where

pq = nq/n
ω0(t) = Σ_{q=0}^{t} pq
ω1(t) = Σ_{q=t+1}^{L−1} pq
μ0(t) = Σ_{q=0}^{t} q·pq / ω0(t)
μ1(t) = Σ_{q=t+1}^{L−1} q·pq / ω1(t)
μ = Σ_{q=0}^{L−1} q·pq

In the above formulas, nq is the number of pixels of the qth gray level among all L gray levels of the kth region to be segmented Zk, and n is the total number of pixels in Zk.

After the between-class variance of every possible gray level of each region to be segmented has been obtained, the segmentation threshold tk of Zk is obtained by:

tk = argmax_{0 ≤ t ≤ L−1} σB²(t)    (5)

The kth segmented target region is expressed as {(i, j) | I(i, j) ≥ tk and (i, j) ∈ Zk};
The segmentation thresholds obtained for the two regions to be segmented are 6528 and 6531, respectively;
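The threshold search of step S4 can be sketched as a direct implementation of Otsu's maximum between-class variance criterion over one region to be segmented. Names and the 256-level assumption are illustrative; the patent applies the search to each Zk separately at the detector's native bit depth:

```python
import numpy as np

def otsu_threshold(values, levels=256):
    """Return the threshold t maximizing the between-class variance
    over the gray values of one region Z_k (Otsu's criterion)."""
    hist = np.bincount(np.asarray(values), minlength=levels).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0 = prob[:t + 1].sum()          # class probability below/at t
        w1 = 1.0 - w0                    # class probability above t
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t + 1) * prob[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, levels) * prob[t + 1:]).sum() / w1
        var_b = w0 * w1 * (mu0 - mu1) ** 2   # equivalent form of sigma_B^2(t)
        if var_b > best_var:
            best_var, best_t = var_b, t
    return best_t

# A clearly bimodal region: dim background pixels plus a bright target.
region = [10] * 50 + [200] * 50
t = otsu_threshold(region)
print(t)  # 10: first t attaining the maximum; any t in [10, 199] separates the classes
```

The form w0·w1·(mu0 − mu1)² is algebraically identical to the weighted-deviation definition used above, so maximizing either gives the same threshold.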
s5, creating the image marker matrix: from the segmentation result of step S4, a marker matrix M of the same size as the original image I is created for marking and storing the segmentation result, and the positions in the marker matrix corresponding to different targets in the image are given different marks, such that:

M(i, j) = k, if (i, j) ∈ Zk and I(i, j) ≥ tk; otherwise M(i, j) = 0    (6)

where I(i, j) is the gray value at position (i, j) of the original image I and tk is the kth target segmentation threshold. The region belonging to target k in the original image I is denoted Tk = {(i, j) | M(i, j) = k}, and the region belonging to the background is denoted T0 = {(i, j) | M(i, j) = 0};
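A minimal sketch of step S5, assuming the regions Zk are given as pixel sets and the thresholds tk as a dict keyed by target index (all names illustrative):

```python
import numpy as np

def make_marker_matrix(img, regions, thresholds):
    """Marker matrix M: M[i, j] = k where (i, j) lies in region Z_k and
    I(i, j) reaches the threshold t_k; every other pixel stays 0 (background)."""
    M = np.zeros(img.shape, dtype=np.int32)
    for k, pixels in regions.items():
        for i, j in pixels:
            if img[i, j] >= thresholds[k]:
                M[i, j] = k
    return M

img = np.array([[5, 9],
                [9, 3]])
M = make_marker_matrix(img, regions={1: {(0, 1), (1, 0)}}, thresholds={1: 8})
print(M)  # [[0 1]
          #  [1 0]]
```

The matrix makes the later per-label mapping a simple masked lookup rather than a second segmentation pass.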
S6, mapping the image labels one by one: using the image label matrix of step S5, the target mapping scale parameter p (p ∈ [0, 1]) controls the range that the targets occupy within the whole 2^8 gray levels. Here p = 0.6 is selected, so ⌊255(1 − p)⌋ = 102; the background region and the target regions of the original image are mapped so that the gray values of the background region fall in the range [0, 102] and the gray values of the target regions fall in the range [103, 255];
for a background region, which occupies a larger range in the whole image and is not a concerned interested region, a linear mapping algorithm is adopted to reduce the calculation amount, and the mapping method is as follows:
Figure BDA0002473001980000091
wherein I' (I, j) is the gray value at the mapped position (I, j),
Figure BDA0002473001980000092
is (i, j) ∈ T in the original image0The gray value of the portion.
For a target region, which occupies a small part of the whole image and is the main region of concern, a histogram equalization algorithm is adopted for mapping:

I′(i, j) = ⌊255(1 − p)⌋ + 1 + ⌊(255p − 1)·Σ_{q=0}^{I(i,j)} nq/n⌋,  (i, j) ∈ Tk    (8)

where nq is the number of pixels of the qth gray level among all L gray levels within the part of the original image with (i, j) ∈ Tk, and n is the total number of pixels of that part.
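The whole of step S6 can be sketched as follows, under stated assumptions: a float input image, a rank-based form of histogram equalization (the patent's exact rounding is not reproduced), and constants following the p = 0.6 example (background band [0, 102], target band up to 255):

```python
import numpy as np

def enhance(img, M, p=0.6):
    """Label-wise mapping: background pixels (M == 0) are linearly mapped
    into [0, floor(255*(1-p))]; each target label k is histogram-equalized
    into the remaining upper band up to 255."""
    img = np.asarray(img, dtype=np.float64)
    split = int(np.floor(255 * (1 - p)))             # 102 for p = 0.6
    out = np.zeros(img.shape, dtype=np.uint8)

    bg = (M == 0)
    lo, hi = img[bg].min(), img[bg].max()
    out[bg] = np.floor(split * (img[bg] - lo) / max(hi - lo, 1.0)).astype(np.uint8)

    for k in np.unique(M[M > 0]):
        mask = (M == k)
        vals = img[mask]
        # rank of each pixel among the target's own gray values (cumulative histogram)
        rank = np.searchsorted(np.sort(vals), vals, side="right")
        out[mask] = (split + 1 +
                     np.floor((255 - split - 1) * rank / vals.size)).astype(np.uint8)
    return out

img = np.array([[100.0, 100.0],
                [100.0, 160.0]])
M = np.array([[0, 0],
              [0, 1]])
out = enhance(img, M, p=0.6)
print(out[1, 1], out[0, 0])  # 255 0  (target pushed into the bright band)
```

Because the target band [103, 255] is wider than the background band whenever p > 0.5, even a target whose raw contrast is a single count ends up visibly brighter than any background pixel.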
After all regions of the original image have been mapped, the obtained I′ is the final enhancement result, shown in fig. 4. In fig. 4 it can be clearly distinguished that the upper-left target is a bird and the lower-right target is an airplane; the enhancement effect is obvious.
The above description is only a preferred embodiment of the present invention, and not intended to limit the present invention, and all equivalent changes and modifications made within the scope of the claims of the present invention should fall within the protection scope of the present invention.

Claims (9)

1. An infrared small target enhancement method based on a region of interest and image marking, characterized by comprising the following steps:
s1, image preprocessing: preprocessing the collected infrared image containing the infrared dim target by using a maximum median filtering algorithm, and filtering random noise of the infrared image;
s2, screening suspected targets: under the restriction of 4 constraints of depth limitation, area limitation, shape limitation and area expansion speed limitation, screening out a local maximum value region containing all targets from the image obtained in the step S1;
s3, dividing the regions to be divided: expanding each screened local maximum value region in the step S2 outwards by a certain distance, wherein the formed region is the finally determined region of interest, and the region is also the region to be segmented to be searched;
s4, image segmentation: obtaining a segmentation threshold value of each region to be segmented obtained in the step S3 by using a maximum inter-class variance method, and segmenting the region to be segmented obtained in the step S3;
s5, creating an image tag matrix: creating a marking matrix with the same size as the original image in the step S1 according to the segmentation result in the step S4, marking and storing the segmentation result, and marking the positions corresponding to different targets in the marking matrix and the image as different marks;
s6, mapping the image labels one by one: using the image label matrix of step S5, a target mapping scale parameter controls the range that the targets occupy within the 2^8 gray levels; the background region and all target regions of the original image are mapped with different mapping methods and parameters respectively, and once every region of the image has been mapped, the resulting image is the final enhancement result.
2. The infrared small target enhancement method based on the region of interest and image marking as claimed in claim 1, wherein: in step S1, the image is preprocessed using a maximum median filtering algorithm, calculated as follows:

Fmax-median(i, j) = max{mid1, mid2, mid3, mid4}    (1)

where

mid1 = median{I(i, j−(m−1)/2), …, I(i, j), …, I(i, j+(m−1)/2)}
mid2 = median{I(i−(m−1)/2, j), …, I(i, j), …, I(i+(m−1)/2, j)}
mid3 = median{I(i−(m−1)/2, j−(m−1)/2), …, I(i, j), …, I(i+(m−1)/2, j+(m−1)/2)}
mid4 = median{I(i−(m−1)/2, j+(m−1)/2), …, I(i, j), …, I(i+(m−1)/2, j−(m−1)/2)}    (2)

In the above formulas, Fmax-median(i, j) is the maximum median filtering result at coordinate (i, j); mid1, mid2, mid3 and mid4 are the pixel medians along the i direction, the j direction, the main diagonal and the secondary diagonal of the processing window respectively; median{·} is the median function; I(i, j) is the gray value at position (i, j) of the original image; and m is the scale parameter of the filtering window, taken as an odd number not less than 3.
3. The infrared small target enhancement method based on the region of interest and image marking as claimed in claim 1, wherein: in step S2, the screening of suspected targets includes the following steps: first, traverse the whole image to extract every maximum pk and its position Dk(0); then gradually decrease the maximum pk so that the region located at Dk(0) gradually expands outwards, forming regions Dk(1), Dk(2), …, Dk(h), …, Dk(n), where Dk(h) denotes the single connected domain formed after the kth maximum is lowered by h gray levels and expanded outwards; sk(h) denotes the total number of pixels in region Dk(h); and a and b denote the major and minor axes of the smallest ellipse enclosing region Dk(h). The region Dk(h) with the minimum h that satisfies the following four constraints is a region where a possible suspected target is located:

h ≤ hlim
sk(h) ≤ slim
sk(h) − sk(h−1) ≤ Δslim
a/b ≤ llim    (3)

where hlim is the depth constraint limit, slim the area constraint limit, Δslim the area expansion speed constraint limit, and llim the shape constraint limit.
4. The infrared small target enhancement method based on the region of interest and image marking as claimed in claim 1, wherein: in step S3, the distance by which each local maximum region is expanded outwards is generally selected such that the area of the expanded region is 2-5 times the area of the region Dk(h) found in step S2.
5. The infrared small target enhancement method based on the region of interest and image marking as claimed in claim 1, wherein: in step S4, the maximum between-class variance method is used to obtain the segmentation threshold tk of each region to be segmented by maximizing the between-class variance σB²(t), defined as:

σB²(t) = ω0(t)[μ0(t) − μ]² + ω1(t)[μ1(t) − μ]²    (4)

where

pq = nq/n
ω0(t) = Σ_{q=0}^{t} pq
ω1(t) = Σ_{q=t+1}^{L−1} pq
μ0(t) = Σ_{q=0}^{t} q·pq / ω0(t)
μ1(t) = Σ_{q=t+1}^{L−1} q·pq / ω1(t)
μ = Σ_{q=0}^{L−1} q·pq

In the above formulas, nq is the number of pixels of the qth gray level among all L gray levels of the kth region to be segmented Zk, and n is the total number of pixels in Zk.

After the between-class variance of every possible gray level of each region to be segmented has been obtained, the segmentation threshold tk of Zk is obtained by:

tk = argmax_{0 ≤ t ≤ L−1} σB²(t)    (5)

The kth segmented target region is expressed as {(i, j) | I(i, j) ≥ tk and (i, j) ∈ Zk}.
6. The infrared small target enhancement method based on the region of interest and image marking as claimed in claim 1, wherein: in step S5, the specific operation of creating the image marker matrix is: create a marker matrix M of the same size as the original image I for marking and storing the segmentation results, such that:

M(i, j) = k, if (i, j) ∈ Zk and I(i, j) ≥ tk; otherwise M(i, j) = 0    (6)

where I(i, j) is the gray value at position (i, j) of the original image I and tk is the kth target segmentation threshold. The region belonging to target k in the original image I is denoted Tk = {(i, j) | M(i, j) = k}, and the region belonging to the background is denoted T0 = {(i, j) | M(i, j) = 0}.
7. The infrared small target enhancement method based on the region of interest and image marking as claimed in claim 1, wherein: in step S6, the specific operation of mapping the image labels one by one is: with the target mapping scale parameter p (p ∈ [0, 1]) controlling the range that the targets occupy within the whole 2^8 gray levels, map the background region and the target regions of the original image separately, so that the gray values of the background region are mapped into the range [0, ⌊255(1 − p)⌋] and the gray values of the target regions are mapped into the range [⌊255(1 − p)⌋ + 1, 255], where ⌊·⌋ denotes rounding down, and the parameter p is within the range 0.5-0.9.
8. The infrared weak and small target enhancement method based on the region of interest and the image mark as claimed in claim 7, wherein: in step S6, a linear mapping algorithm is applied to the background region, the mapping method being:

I′(i, j) = ⌊(⌊2^8 · (1 − p)⌋ − 1) · (I(i, j) − I_min^{T_0}) / (I_max^{T_0} − I_min^{T_0})⌋

wherein I′(i, j) is the gray value at position (i, j) after mapping, and I_max^{T_0} and I_min^{T_0} are respectively the maximum and minimum gray values of the (i, j) ∈ T_0 portion of the original image.
9. The infrared weak and small target enhancement method based on the region of interest and the image mark as claimed in claim 7, wherein: in step S6, a histogram equalization algorithm is applied to each target region, the mapping algorithm being expressible as:

I′(i, j) = ⌊2^8 · (1 − p)⌋ + ⌊(2^8 − 1 − ⌊2^8 · (1 − p)⌋) · Σ_{q=0}^{I(i,j)} n_q / n⌋

wherein n_q is the number of pixels of the q-th gray level among all L gray levels in the (i, j) ∈ T_k portion of the original image, and n is the total number of pixels in that portion.
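A sketch of histogram equalization confined to the upper target band, assuming NumPy (the helper name and exact band arithmetic are assumptions of this sketch):

```python
import numpy as np

def equalize_target(I, tgt_mask, p, bits=8, L=256):
    """Histogram-equalize target grays into [floor(2^bits * (1 - p)), 2^bits - 1]."""
    full = 1 << bits
    base = int(full * (1.0 - p))               # lower bound of the target band
    span = full - 1 - base                     # width of the target band
    vals = I[tgt_mask]
    hist = np.bincount(vals.ravel(), minlength=L).astype(float)
    cdf = np.cumsum(hist) / hist.sum()         # sum over q <= g of n_q / n
    out = I.copy()                             # non-target pixels pass through
    out[tgt_mask] = (base + np.floor(span * cdf[vals])).astype(I.dtype)
    return out
```

Equalizing within the band spreads the few target gray levels over the full bright range, which is the enhancement effect the claim aims at.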
CN202010354472.2A 2020-04-29 2020-04-29 Infrared weak and small target enhancement method based on region of interest and image mark Active CN111709885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010354472.2A CN111709885B (en) 2020-04-29 2020-04-29 Infrared weak and small target enhancement method based on region of interest and image mark


Publications (2)

Publication Number Publication Date
CN111709885A true CN111709885A (en) 2020-09-25
CN111709885B CN111709885B (en) 2023-04-07

Family

ID=72536852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010354472.2A Active CN111709885B (en) 2020-04-29 2020-04-29 Infrared weak and small target enhancement method based on region of interest and image mark

Country Status (1)

Country Link
CN (1) CN111709885B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361321A (en) * 2021-04-21 2021-09-07 中山大学 Infrared small target detection method and device
CN113689366A (en) * 2021-08-30 2021-11-23 武汉格物优信科技有限公司 Temperature width dynamic adjustment method and device

Citations (3)

Publication number Priority date Publication date Assignee Title
US20040151383A1 (en) * 2002-11-22 2004-08-05 Stmicroelectronics, S.R.L. Method for the analysis of micro-array images and relative device
US20150248579A1 (en) * 2013-12-24 2015-09-03 Huazhong University Of Science And Technology Method for identifying and positioning building using outline region restraint of mountain
CN110751667A (en) * 2019-09-17 2020-02-04 中国科学院上海技术物理研究所 Method for detecting infrared dim small target under complex background based on human visual system


Non-Patent Citations (2)

Title
Wang Gang et al.: "Infrared dim and small target detection using image patch contrast characteristics", Optics and Precision Engineering *
Xue Lianfeng et al.: "Research on region of interest extraction methods", Computer and Information Technology *


Also Published As

Publication number Publication date
CN111709885B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
CN109583425B (en) Remote sensing image ship integrated recognition method based on deep learning
CN104036239B (en) Fast high-resolution SAR (synthetic aperture radar) image ship detection method based on feature fusion and clustering
CN108805904B (en) Moving ship detection and tracking method based on satellite sequence image
CN109145872B (en) CFAR and Fast-RCNN fusion-based SAR image ship target detection method
CN103714541B (en) Method for identifying and positioning building through mountain body contour area constraint
CN109117802B (en) Ship detection method for large-scene high-resolution remote sensing image
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
CN102867196B (en) Based on the complicated sea remote sensing image Ship Detection of Gist feature learning
CN109840483B (en) Landslide crack detection and identification method and device
CN104361582B (en) Method of detecting flood disaster changes through object-level high-resolution SAR (synthetic aperture radar) images
CN106651880B (en) Offshore moving target detection method based on multi-feature fusion thermal infrared remote sensing image
Xia et al. A novel algorithm for ship detection based on dynamic fusion model of multi-feature and support vector machine
CN101609504A (en) A kind of method for detecting, distinguishing and locating infrared imagery sea-surface target
CN110060273B (en) Remote sensing image landslide mapping method based on deep neural network
CN111709885B (en) Infrared weak and small target enhancement method based on region of interest and image mark
Xia et al. A novel sea-land segmentation algorithm based on local binary patterns for ship detection
CN112699967B (en) Remote airport target detection method based on improved deep neural network
CN103996198A (en) Method for detecting region of interest in complicated natural environment
CN108229342A (en) A kind of surface vessel target automatic testing method
CN112381870B (en) Binocular vision-based ship identification and navigational speed measurement system and method
CN101944233A (en) Method for quickly extracting airport target in high-resolution remote sensing image
CN109829423B (en) Infrared imaging detection method for frozen lake
CN110428425B (en) Sea-land separation method of SAR image based on coastline vector data
CN106780525A (en) Optical remote sensing image ship direction feature extraction method based on coordinate rotation minimum circumscribed rectangle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant