CN111368585B - Weak and small target detection method, detection system, storage device and terminal equipment - Google Patents


Info

Publication number
CN111368585B
CN111368585B (application CN201811591910.6A)
Authority
CN
China
Prior art keywords
image, weak, information processing, extracting, structural component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811591910.6A
Other languages
Chinese (zh)
Other versions
CN111368585A (en)
Inventor
张新
赵尚男
王灵杰
吴洪波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS filed Critical Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority application: CN201811591910.6A
Publication of CN111368585A
Application granted
Publication of CN111368585B
Legal status: Active

Classifications

    • G06V 20/00 — Scenes; scene-specific elements
    • G06T 5/10 — Image enhancement or restoration using non-spatial domain filtering
    • G06T 5/20 — Image enhancement or restoration using local operators
    • G06V 10/443 — Local feature extraction by analysis of parts of the pattern (e.g. edges, contours, corners); connectivity analysis, by matching or filtering
    • G06T 2207/10048 — Infrared image
    • G06T 2207/20024 — Filtering details
    • G06T 2207/20064 — Wavelet transform [DWT]
    • G06V 2201/07 — Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a weak and small target detection method, a detection system, a storage device and terminal equipment. The detection method comprises the following steps: acquiring an image; performing primary information processing on the image and preliminarily detecting a weak and small target; for the image after primary information processing, extracting structural component features in a spatial-domain channel and high-frequency component features in a frequency-domain channel; integrating the structural component features and the high-frequency component features into output image features; and extracting the weak and small target from the output image features. The invention enhances clutter suppression capability and improves detection performance.

Description

Weak and small target detection method, detection system, storage device and terminal equipment
Technical Field
The invention belongs to the field of infrared image target detection and tracking, and particularly relates to a weak and small target detection method, a detection system, a storage device and terminal equipment.
Background
With the wide application of infrared imaging technology, infrared target detection and tracking is used extensively in fields such as traffic, medical treatment, security and the military. The main problems in this field are as follows. (1) The target is weak. When the size of a target in the image is below 9 × 9 pixels and its signal-to-noise ratio against the background is below 4 dB, the target is considered a weak and small target. Such a target has low contrast with the surrounding background and no usable shape features, making it difficult to detect against a complex background. (2) The background is complex. Noise, blurring and shading appear in the infrared image because of smoke, cloud layers, relative motion between the target and the carrier, and human interference, so infrared background clutter is increasingly complex.
Given these problems, conventional infrared target detection methods include spatial-domain high-pass filtering, Butterworth frequency-domain filtering, maximum-mean filtering and morphological methods, but their detection capability for weak and small targets under complex background conditions is limited. In addition, with the development of neuroscience and brain science, the human visual feature integration mechanism has been applied to infrared target detection under complex backgrounds, improving the anti-interference capability of weak and small target detection algorithms. Human visual information processing is both parallel and serial, and the integration of visual information is a multistage synchronous process: visual information is separated into features such as shape, spatial frequency, direction and contrast, processed separately, and then integrated into complete visual perception. Applying the human visual system (HVS) to infrared target detection has produced a series of results; existing HVS-based detection algorithms (e.g. based on lateral inhibition, receptive fields and spiking mechanisms) have significantly improved robustness and the suppression of background clutter and noise, but they still have limitations in clutter suppression, detection effect, robustness and target adaptability.
Disclosure of Invention
In view of the above, the present invention provides a method, a system, a storage device and a terminal device for detecting a weak and small target, so as to solve the problem of limitations in clutter suppression capability, detection effect and robustness in the prior art.
The first aspect of the invention provides a method for detecting a weak and small target, which is applied to the field of infrared images, and comprises the following steps:
S1, acquiring an image;
S2, performing primary information processing on the image, and preliminarily detecting a weak and small target;
S3, for the image after primary information processing, extracting structural component features in a spatial-domain channel and high-frequency component features in a frequency-domain channel;
S4, integrating the structural component features and the high-frequency component features into output image features;
S5, extracting the weak and small target according to the output image features.
Further, the image is an infrared image.
Further, the primary information processing method comprises the following steps: the primary information processing is performed on the image using the DOG model of the retinal ganglion cell receptive field.
Further, the step S2 includes:
establishing a DOG model filter template of the retinal ganglion cell receptive field;
and filtering the image with the filter template to preliminarily obtain the weak and small target.
Further, the filter template is utilized to carry out filter processing on each pixel point in the image through convolution operation.
Further, the DOG model has the following formula:

DOG(x,y) = \frac{1}{2\pi\sigma_1^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma_1^{2}}\right) - \frac{1}{2\pi\sigma_2^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma_2^{2}}\right)

where (x, y) are the coordinates of any point in the image, σ₁ is the mean-square deviation of the Gaussian function that determines the low cut-off frequency of the filter, and σ₂ is the mean-square deviation of the Gaussian function that determines the high cut-off frequency of the filter.
Further, the filtering of each pixel point in the image by convolution with the filter template uses the following formula:

R(x,y) = \sum_{x_\tau=1}^{M}\sum_{y_\tau=1}^{N} DOG(x_\tau, y_\tau)\, I(x - x_\tau,\, y - y_\tau)

where R(x, y) is the gray-level distribution of the output image, DOG(x, y) is the DOG model function, I(x, y) is the gray-level distribution of the input image, M and N are the numbers of pixels in the horizontal and vertical directions of the image, and x_τ and y_τ are the coordinates of any point during image filtering.
Further, the step S3 includes:
in the spatial-domain channel, constructing a second-order differential Hessian matrix from the image after primary information processing and extracting the structural component features of the image;
and in the frequency-domain channel, applying a wavelet transform to the image after primary information processing and extracting its high-frequency component features.
Further, the second-order differential Hessian matrix is constructed from the information of each pixel point of the image after primary information processing using the following formula:

H = \begin{bmatrix} D_{xx}(x,y) & D_{xy}(x,y) \\ D_{xy}(x,y) & D_{yy}(x,y) \end{bmatrix}

where D_xx(x, y) is the second-order differential operator in the horizontal direction, D_yy(x, y) is the second-order differential operator in the vertical direction, and D_xy(x, y) is the second-order differential operator in the 45° (diagonal) direction.
Further, after constructing the second-order differential Hessian matrix from the pixel point information of the image after primary information processing, the method further includes:
calculating the trace Tr_H and determinant Det_H of the second-order differential Hessian matrix from its eigenvalues, and using Tr_H and Det_H to judge local extrema in the image. The judging method is as follows:
a local extremum is judged from D_xx(x, y) and Det_H: if Det_H > 0 and D_xx(x₀, y₀) < 0, the point (x₀, y₀) is a local maximum point, i.e. a point target in the image; if Det_H > 0 and D_xx(x₀, y₀) > 0, the point (x₀, y₀) is a local minimum point; if Det_H < 0, the point (x₀, y₀) is a saddle point; and if Det_H = 0, the point (x₀, y₀) is a critical point.
Finally, the structural component features of the spatial-domain channel, containing the weak and small target, are extracted using the following formula:

Q(x,y) = \begin{cases} R(x,y), & \text{if } (x,y) \text{ is a local maximum point} \\ 0, & \text{otherwise} \end{cases}

where Q(x, y) is the gray value of the structural component feature image.
Further, in the frequency-domain channel, extracting the high-frequency component features of the image by wavelet transform includes:
performing a two-level wavelet decomposition of the image after primary information processing to obtain the wavelet transform coefficient vector, setting the approximation coefficient matrix in the wavelet transform coefficient vector to 0, computing the inverse transform of the wavelet transform coefficient vector, and extracting the high-frequency component features of the weak and small target from the absolute value of the inverse transform.
Further, the step S4 includes:
performing point-wise multiplicative integration of the structural component features of the spatial-domain channel and the high-frequency component features of the frequency-domain channel to obtain the output image features.
The second aspect of the present invention provides a weak and small object detection system, applied to the field of infrared images, the system comprising:
the image module is used for reading or acquiring images;
the information processing module is used for performing primary information processing on the image and preliminarily detecting a weak and small target;
the feature extraction module is used for extracting structural component features and high-frequency component features of the image after primary information processing in a spatial-domain channel and a frequency-domain channel respectively, integrating them into output image features, and extracting the weak and small target from the output image features;
the image module is electrically connected with the information processing module, and the information processing module is electrically connected with the feature extraction module.
A third aspect of the present invention provides a storage device comprising:
a software unit and a storage unit;
the software unit is used for executing the weak and small target detection method of any of the above steps;
the storage unit is used for storing the software unit.
A fourth aspect of the present invention provides a terminal device, including a storage unit, a processor, and a software program stored in the storage unit and running on the processor, where the processor implements the method for detecting a small target in any of the steps described above when executing the software program.
Compared with the prior art, the invention has the beneficial effects that:
the method comprises the steps of firstly, performing primary information processing on an image by using a DOG model, and primarily filtering out partial background clutter; then, in the airspace channel, judging the gray distribution of the image according to the straight trace Tr_H and determinant det_H of the second-order differential Hessian matrix, and extracting the structural component characteristics containing weak and small targets according to the judging result; and in the frequency domain channel, performing secondary decomposition by utilizing wavelet transformation according to the image information subjected to primary information processing to obtain a wavelet transformation coefficient vector, setting an approximate coefficient matrix in the wavelet transformation coefficient vector to be 0, then calculating inverse transformation of the wavelet transformation coefficient vector, and extracting the high-frequency component characteristics containing a weak and small target according to the absolute value of the inverse transformation. And complex background clutter is restrained again in the spatial domain channel and the frequency domain channel respectively, so that clutter suppression capability is enhanced.
The invention builds on the human visual feature integration mechanism and, by simulating and improving it, applies the mechanism to weak and small target detection under complex backgrounds. First, the infrared image undergoes primary information processing with a mathematical model of the retinal receptive field (the DOG model), preliminarily detecting the weak and small target. The processing then splits into a spatial-domain channel and a frequency-domain channel, which extract the structural features and high-frequency component features of the target respectively. Finally, the component features of the two channels are integrated, and the weak and small target is extracted from the complex background. The method therefore both suppresses background clutter thoroughly and enhances the weak and small target, giving a good detection effect.
The method, the system, the storage device and the terminal equipment for detecting the weak and small targets based on visual feature integration disclosed by the invention have the advantages of no need of threshold adjustment, stable performance and strong robustness, and are suitable for images containing air, sea surface or ground and other backgrounds.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a first embodiment of a method for detecting a small target according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the refined flow of a second embodiment of the weak and small target detection method provided by the embodiment of the present invention;
FIG. 3 is a graph showing the comparison of the detection results of the weak and small target detection method provided by the embodiment of the invention and the detection results of 5 existing methods;
FIG. 4 is a schematic block diagram of a small and weak target detection system according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a memory device according to an embodiment of the present invention;
Fig. 6 is a schematic block diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
In the embodiment of the invention, the infrared image first undergoes primary information processing with the mathematical model of the retinal ganglion cell receptive field, the DOG (Difference of Gaussians) model, preliminarily detecting the weak and small target. Then, for the image after primary information processing, structural component features and high-frequency component features are extracted in a spatial-domain channel and a frequency-domain channel respectively. Specifically, in the spatial-domain channel, a second-order differential Hessian matrix is constructed from the information of each pixel point of the processed image, its trace and determinant are calculated to judge local extrema, and the structural component features containing the weak and small target are extracted; in the frequency-domain channel, a two-level wavelet decomposition is applied to the processed image, and the high-frequency component features containing the weak and small target are extracted. Finally, the component features of the two channels are integrated, and the weak and small target is extracted from the complex background. Because the DOG model performs primary information processing, part of the background clutter is filtered out at the start; because the structural features and high-frequency component features of the target are then extracted in separate spatial-domain and frequency-domain channels, complex background clutter is suppressed again while the weak and small target is enhanced, yielding a good detection effect.
The embodiment of the invention does not need threshold adjustment, has stable performance and strong robustness, and is suitable for images containing air, sea surface or ground background and the like.
In the embodiment of the invention, the execution body of the process is terminal equipment, including but not limited to notebook computers, servers, tablet computers, smart phones and other terminal equipment with model-building and data-processing capabilities. In particular, when executing the flow of the invention, the terminal equipment can run large-scale simulation software to display the detection results for weak and small targets. The terminal equipment used in the embodiment is a PC with a 2.7 GHz Intel(R) CPU and 4.00 GB of memory, running the MATLAB 2012b platform.
Fig. 1 is a schematic flow chart of a first embodiment of the weak and small target detection method according to the present invention, described in detail below:
s1, acquiring an image.
In this embodiment, the terminal device acquires or captures an infrared image and stores it on the PC; the experimental images are original images with a target size below 9 × 9 pixels, a signal-to-noise ratio below 4 dB, and a contrast below 15%.
S2, performing primary information processing on the original image, and primarily detecting a weak and small target.
In this embodiment, the above infrared image undergoes primary information processing with the mathematical model of the retinal ganglion cell receptive field (the DOG model), and the weak and small target is detected preliminarily. First, a DOG filter template is established. Retinal ganglion cells exhibit concentric center-surround antagonism, which is modeled by the DOG model; the functional form of the DOG model is:
DOG(x,y) = \frac{1}{2\pi\sigma_1^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma_1^{2}}\right) - \frac{1}{2\pi\sigma_2^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma_2^{2}}\right)

where (x, y) are the coordinates of any point in the image, σ₁ is the mean-square deviation of the Gaussian function that determines the low cut-off frequency of the filter, and σ₂ is the mean-square deviation of the Gaussian function that determines the high cut-off frequency of the filter.
Secondly, the DOG function is discretized to obtain a filter template of size N × N, where N is chosen as a trade-off between filtering speed and accuracy: the larger N is, the more faithfully high-frequency points and gray-level variations in the image are represented, but the slower the processing. Finally, the image is filtered with the template: each pixel point of the infrared image is filtered by convolution to obtain the processed image information and preliminarily detect the weak and small target, using the following formula:
R(x,y) = \sum_{x_\tau=1}^{M}\sum_{y_\tau=1}^{N} DOG(x_\tau, y_\tau)\, I(x - x_\tau,\, y - y_\tau)

where R(x, y) is the gray-level distribution of the output image, DOG(x, y) is the DOG model function, I(x, y) is the gray-level distribution of the input image, M and N are the numbers of pixels in the horizontal and vertical directions of the infrared image, and x_τ and y_τ are the coordinates of any point during image filtering.
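As a concrete illustration of this primary information processing, the following is a minimal numpy sketch of the DOG template and the convolution filtering above. The template size, σ values and the synthetic test image are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def dog_kernel(size, sigma1, sigma2):
    """Discretized N x N DOG template; sigma1 (center) < sigma2 (surround)
    mimic the concentric center-surround antagonism of the receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g1 = np.exp(-(x**2 + y**2) / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
    g2 = np.exp(-(x**2 + y**2) / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
    return g1 - g2

def dog_filter(image, kernel):
    """Filter every pixel by the DOG template (zero-padded borders).
    The DOG kernel is symmetric, so correlation equals convolution here."""
    k = kernel.shape[0]
    half = k // 2
    padded = np.pad(image, half, mode="constant")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

# Synthetic 32 x 32 scene: smooth gradient background plus a dim 3 x 3 target.
img = np.tile(np.linspace(10.0, 20.0, 32), (32, 1))
img[15:18, 15:18] += 30.0
response = dog_filter(img, dog_kernel(7, 1.0, 2.5))
peak = np.unravel_index(np.argmax(response), response.shape)
```

The DOG response largely cancels the smooth gradient while the point-like target produces a strong positive peak at its center, which is what the preliminary detection in step S2 relies on.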
S3, for the image after primary information processing, respectively extracting structural component characteristics and high-frequency component characteristics in a space domain channel and a frequency domain channel.
In the embodiment of the invention, the structural component features and the high-frequency component features of the weak and small target are extracted through the spatial-domain channel and the frequency-domain channel respectively: in the spatial-domain channel, a second-order differential Hessian matrix is constructed and the structural component features of the image are extracted; in the frequency-domain channel, the high-frequency component features of the image are extracted by wavelet transform.
And S4, integrating the structural component features and the high-frequency component features into output image features.
In the embodiment of the invention, the structural component characteristic containing the weak and small target obtained in the step S3 and the high-frequency component characteristic containing the weak and small target obtained in the step S3 are subjected to dot product integration to obtain the output image characteristic.
S5, extracting the weak and small targets according to the output image features.
In the embodiment of the invention, the weak and small targets under the complex background are extracted according to the output image characteristics in the step S4.
Fig. 2 is a schematic diagram of a refinement flow of a second embodiment of the weak target detection method according to the embodiment of the present invention. Referring to fig. 2, based on the embodiment of fig. 1, the method for detecting a weak and small target provided in this embodiment is specifically described as follows:
s1, acquiring an image; and S2, performing primary information processing on the original image, and primarily detecting a weak and small target. The specific embodiment is consistent with the technical scheme adopted in the first embodiment, and will not be described again.
S3, for the image after primary information processing, respectively extracting structural component characteristics and high-frequency component characteristics in a space domain channel and a frequency domain channel.
Specifically, extracting the structural component features of the weak and small target through the spatial-domain channel comprises the following steps:
s301, constructing a second-order differential Hessian matrix.
In the embodiment of the invention, the second-order differential Hessian matrix is constructed from the information of each point of the image after the primary information processing of step S2, using the formula:

H = \begin{bmatrix} D_{xx}(x,y) & D_{xy}(x,y) \\ D_{xy}(x,y) & D_{yy}(x,y) \end{bmatrix}

where D_xx(x, y) is the second-order differential operator in the horizontal direction, D_yy(x, y) is the second-order differential operator in the vertical direction, and D_xy(x, y) is the second-order differential operator in the 45° (diagonal) direction.
S302, the trace Tr_H and the determinant Det_H of the second-order differential Hessian matrix are calculated.
In the embodiment of the invention, the trace Tr_H and determinant Det_H of the Hessian matrix are calculated as:

Tr\_H = D_{xx} + D_{yy} = \lambda_1 + \lambda_2
Det\_H = D_{xx} D_{yy} - D_{xy}^{2} = \lambda_1 \lambda_2

where λ₁ and λ₂ are the eigenvalues of the Hessian matrix.
S303, local extrema in the image are judged using the trace Tr_H and the determinant Det_H.
The judging method is as follows:
a local extremum is judged from D_xx(x, y) and Det_H: if Det_H > 0 and D_xx(x₀, y₀) < 0, the point (x₀, y₀) is a local maximum point, i.e. a point target in the image; if Det_H > 0 and D_xx(x₀, y₀) > 0, the point (x₀, y₀) is a local minimum point; if Det_H < 0, the point (x₀, y₀) is a saddle point; and if Det_H = 0, the point (x₀, y₀) is a critical point.
S304, extracting gray values of structural component features of the weak and small targets.
Specifically, the gray value of the structural component feature of the weak and small target is calculated using the following formula:

Q(x,y) = \begin{cases} R(x,y), & \text{if } (x,y) \text{ is a local maximum point} \\ 0, & \text{otherwise} \end{cases}

where Q(x, y) is the gray value of the structural component feature image of the weak and small target.
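Steps S301-S304 can be sketched as follows. The finite-difference discretization of D_xx, D_yy and D_xy and the synthetic Gaussian bump are illustrative assumptions, since the patent does not fix a particular discrete operator.

```python
import numpy as np

def hessian_features(R):
    """Spatial-domain channel: Hessian analysis of the DOG-filtered image R,
    keeping only local-maximum (candidate point-target) pixels."""
    # Second-order differences along the horizontal, vertical and diagonal directions.
    Dxx = np.zeros_like(R); Dyy = np.zeros_like(R); Dxy = np.zeros_like(R)
    Dxx[:, 1:-1] = R[:, 2:] - 2 * R[:, 1:-1] + R[:, :-2]
    Dyy[1:-1, :] = R[2:, :] - 2 * R[1:-1, :] + R[:-2, :]
    Dxy[1:-1, 1:-1] = (R[2:, 2:] - R[2:, :-2] - R[:-2, 2:] + R[:-2, :-2]) / 4.0
    tr_H = Dxx + Dyy                # trace       = lambda1 + lambda2
    det_H = Dxx * Dyy - Dxy ** 2    # determinant = lambda1 * lambda2
    # Local maximum test from S303: Det_H > 0 and Dxx < 0 -> point target.
    Q = np.where((det_H > 0) & (Dxx < 0), R, 0.0)
    return Q, tr_H, det_H

# A smooth bump has a clean local maximum at its peak (10, 10).
y, x = np.mgrid[0:21, 0:21]
R = np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 8.0)
Q, tr_H, det_H = hessian_features(R)
```

Only pixels in the concave neighborhood of the peak survive in Q; the flat borders and the convex tails of the bump are zeroed, which is the structural-feature extraction of S304.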
Specifically, the specific step of extracting the high-frequency component characteristics of the image subjected to primary information processing through the frequency domain channel comprises the following steps:
s311, performing secondary decomposition by utilizing wavelet transformation.
In the embodiment of the present invention, the image obtained in step S2 is decomposed to two levels using the wavelet transform. The wavelet transform coefficient vector of the decomposed image includes an approximation coefficient matrix and two sets of horizontal, vertical and diagonal detail coefficient matrices, where the approximation coefficient matrix represents the low-frequency background portion of the image and the detail coefficient matrices represent its high-frequency portion. In this embodiment, the sym4 wavelet is selected for the two-level decomposition of the image frequency domain; the wavelet type may also be chosen from wavelet families such as haar, sym or db.
S312, the approximation coefficient matrix in the wavelet transform coefficient vector is set to 0.
S313, calculating the inverse transform of the wavelet transform coefficient vector and taking its absolute value.
Specifically, the inverse transform of the wavelet transform coefficient vector, with its approximation coefficient matrix zeroed in step S312, is calculated, and the absolute value of the result is taken.
S314, extracting the high-frequency component features of the weak and small target.
The high-frequency component feature image F(x, y) containing the weak and small target is obtained from the absolute value in step S313.
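Steps S311-S314 can be sketched as follows. For a self-contained illustration this uses a single-level 2-D Haar transform written in NumPy instead of the sym4 two-level decomposition named in the text (in practice a wavelet library such as PyWavelets would supply sym4); the function names and the averaging normalization are assumptions for the example.

```python
import numpy as np

def haar_decompose(img):
    """One level of a 2-D Haar wavelet transform (sketch of S311)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row differences
    cA = (a[:, 0::2] + a[:, 1::2]) / 2.0     # approximation (low-frequency background)
    cH = (a[:, 0::2] - a[:, 1::2]) / 2.0     # horizontal detail
    cV = (d[:, 0::2] + d[:, 1::2]) / 2.0     # vertical detail
    cD = (d[:, 0::2] - d[:, 1::2]) / 2.0     # diagonal detail
    return cA, cH, cV, cD

def haar_reconstruct(cA, cH, cV, cD):
    """Inverse of haar_decompose (used in S313)."""
    a = np.zeros((cA.shape[0], cA.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2] = cA + cH
    a[:, 1::2] = cA - cH
    d[:, 0::2] = cV + cD
    d[:, 1::2] = cV - cD
    img = np.zeros((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def high_frequency_feature(img):
    """S311-S314: suppress the low-frequency background, keep |high-frequency| part."""
    cA, cH, cV, cD = haar_decompose(img)
    cA = np.zeros_like(cA)                             # S312: zero the approximation matrix
    return np.abs(haar_reconstruct(cA, cH, cV, cD))    # S313-S314: F(x, y)
```

Zeroing the approximation matrix before the inverse transform removes the low-frequency background, so only high-frequency structure, including small point targets, survives into F(x, y).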
S4, integrating the structural component features and the high-frequency component features into output image features.
In this embodiment of the invention, the structural component features containing the weak and small target obtained in step S3 are point-multiplied with the high-frequency component features containing the weak and small target, also obtained in step S3, to yield the output image features according to the formula:
Out(x,y)=Q(x,y)·F(x,y)
where Out(x, y) is the final output image.
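The fusion in S4 is an element-wise (Hadamard) product of the two feature maps; a minimal sketch, where the small matrices Q and F are placeholder feature maps invented for the example:

```python
import numpy as np

# Q: structural-component feature map (spatial channel, step S3) - placeholder values
Q = np.array([[0.0, 0.2],
              [0.9, 0.1]])
# F: high-frequency feature map (frequency channel, step S3) - placeholder values
F = np.array([[0.1, 0.0],
              [0.8, 0.3]])

Out = Q * F  # Out(x, y) = Q(x, y) · F(x, y), element-wise
```

Because both channels must respond at a pixel for the product to stay large, the point multiplication suppresses responses that appear in only one channel, such as isolated noise.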
S5, extracting the weak and small targets according to the output image features.
Fig. 3 shows a comparison of detection results of a weak and small target detection method and 5 existing methods in an embodiment of the present invention.
The images in panels (a)-(f) are, from left to right, the original image and the detection results of the morphological method, Max-mean (maximum mean filtering), Max-median (maximum median filtering), TDLMS (two-dimensional least mean square), a lateral-inhibition-based target detection method, and the present method. The present method suppresses background clutter well and maintains good detection performance even when the target signal in the original image is very weak, outperforming the other five comparison methods.
Fig. 4 shows a schematic block diagram of a weak object detection system 100 according to an embodiment of the present invention.
Specifically, the system 100 includes: an image module 1 for reading or acquiring an image; an information processing module 2 for performing primary information processing on the image and preliminarily detecting the weak and small target; and a feature extraction module 3 for operating on the primary-information-processed image, extracting structural component features and high-frequency component features in the spatial domain channel and the frequency domain channel respectively, integrating them into output image features, and extracting the weak and small target accordingly. The image module 1 is electrically connected to the information processing module 2, and the information processing module 2 is electrically connected to the feature extraction module 3.
Fig. 5 shows a schematic block diagram of a memory device 200 according to an embodiment of the invention.
Specifically, the device 200 includes a software unit 201 and a storage unit 202. The software unit 201 is configured to implement the weak and small target detection method provided in the embodiments of Figs. 1 and 2; the storage unit 202 may be a memory, or any storage medium capable of storing electronic data, such as DRAM or NAND flash dies. The software unit 201 is preferably stored in the storage unit 202, but may be integrated into the storage device 200 in other forms, for example by firmware burning, so that the storage device 200 can conveniently implement the weak and small target detection function.
It should be noted, first, that the software unit 201 in the storage device 200 can be correctly read by a computer or other device with computing capability. The storage device 200 may operate in read-only mode or in readable-and-writable mode; it may be an ordinary storage device that contains the software unit 201, in which case the user may also store other data in the storage unit 202. Examples include the currently common Smart Media Card (SMC), Secure Digital (SD) card, Flash Card, and solid-state drive (SSD). Alternatively, to ensure the security and reliability of the storage device 200, it is preferably a read-only device.
For example, the software unit 201 may be stored in an encrypted state: when a computer or other device with computing capability reads it, the user must enter a password for verification, and only after successful verification can the device correctly read the software unit 201 and perform weak and small target detection. Alternatively, the software program in the software unit 201 may be open source, so that those skilled in the art can continually modify and improve it to implement the weak and small target detection method described in the embodiments of Figs. 1 and 2 more quickly and effectively.
Fig. 6 shows a schematic block diagram of a terminal device 300 according to an embodiment of the present invention.
Specifically, the terminal device 300 includes: a processor 303, a storage unit 302, and a software program 301 stored in the storage unit 302 and executable on the processor 303, for example a software program implementing the weak and small target detection method. When the processor 303 executes the software program 301, it implements the method of all the weak and small target detection embodiments described above, such as steps S1 to S5 shown in Fig. 1. Alternatively, when executing the software program 301, the processor 303 may implement the functions of the modules/units in the embodiment of the detection system 100 described above, such as the functions of modules 1 to 3 shown in Fig. 4.
Illustratively, the software program 301 may be partitioned into one or more modules/units, which are stored in the storage unit 302 and executed by the processor 303 to carry out the present invention. The one or more modules/units may be a series of software program instruction segments capable of performing specific functions, used to describe the execution of the software program 301 in the weak and small target detection system 100 and the terminal device 300. For example, the software program 301 may be divided into an image module, an information processing module, and a feature extraction module (modules of the virtual detection system), whose specific functions are as follows:
the image module is used for reading or acquiring images; the information processing module is used for performing primary information processing on the image and preliminarily detecting a weak and small target; and the feature extraction module is used for extracting structural component features and high-frequency component features of the primary information processed image in a space domain channel and a frequency domain channel respectively, integrating the structural component features and the high-frequency component features into output image features, and extracting the weak and small targets according to the structural component features and the high-frequency component features.
The terminal device 300 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device 300 may include, but is not limited to, the processor 303 and the storage unit 302. It will be appreciated by those skilled in the art that Fig. 6 is merely an example of the terminal device 300 and does not limit it; the terminal device 300 may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal device 300 may also include input and output devices, network access devices, buses, and the like.
The processor 303 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage unit 302 may be an internal storage unit of the terminal device 300, for example a hard disk or memory of the terminal device 300. The storage unit 302 may also be an external storage device of the terminal device 300, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, or the like. Further, the storage unit 302 may include both an internal storage unit and an external storage device of the terminal device 300. The storage unit 302 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data produced during computation or to cache data.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or described in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative steps and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the weak and small target detection method, system, storage device, and terminal device are preferably applied in the field of infrared images, but this does not mean they are inapplicable to detecting weak and small targets in other types of images; that is, applying the detection method, storage device, and terminal device to weak and small target detection in gray-scale and ordinary images, which requires no creative effort from those skilled in the art, still falls within the protection scope of this patent. In addition, the method, system, storage device, and terminal device disclosed by the invention may be implemented in other ways. For example, the system embodiments described above are merely illustrative: the division into modules is only a logical functional division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
The units described as separate modules may or may not be physically separate, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated modules are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunication signals in accordance with legislation and patent practice.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (7)

1. A weak and small target detection method, applied in the field of infrared images, characterized by comprising the following steps:
s1, acquiring an image;
s2, performing primary information processing on the image, and primarily detecting a weak and small target;
s3, respectively extracting structural component characteristics and high-frequency component characteristics of the primary information processed image in a space domain channel and a frequency domain channel;
s4, integrating the structural component features and the high-frequency component features into output image features;
s5, extracting the weak and small targets according to the output image characteristics;
in step S2, firstly, a DOG filter template is established; retinal ganglion cells exhibit concentric-circle receptive fields, whose mathematical model is represented by the DOG model, the functional expression of which is:
DOG(x, y) = (1/(2πσ1²)) · exp(−(x² + y²)/(2σ1²)) − (1/(2πσ2²)) · exp(−(x² + y²)/(2σ2²))
where (x, y) are the coordinates of any point in the image, σ1 is the mean square deviation of the Gaussian function determining the low cut-off frequency of the filter, and σ2 is the mean square deviation of the Gaussian function determining the high cut-off frequency of the filter;
secondly, the DOG function is discretized to obtain a filter template of size N × N, where N is determined by the trade-off between filtering speed and accuracy: the larger N is, the higher the accuracy, that is, the better the gray-level expressiveness of high-frequency points in the image, but the slower the processing; finally, the image is filtered with the template, convolving the template with each pixel of the infrared image to obtain the image information and preliminarily detect the weak and small target, using the formula:
R(x, y) = Σ_{x_τ=1..M} Σ_{y_τ=1..N} I(x_τ, y_τ) · DOG(x − x_τ, y − y_τ)
where R(x, y) is the gray distribution of the output image, DOG(x, y) is the DOG model function, I(x, y) is the gray distribution of the input image, M and N are the numbers of pixels in the horizontal and vertical directions of the infrared image, and x_τ and y_τ are the coordinates of any point during the image filtering;
in step S3, for the image after primary information processing, constructing a second-order differential Hessian matrix in the spatial channel, and extracting the structural component characteristics of the image;
extracting the high-frequency component characteristics of the image by wavelet transformation in the frequency domain channel for the image after primary information processing;
the in-airspace channel is used for constructing a second-order differential Hessian matrix, extracting structural component characteristics of the image, and the method further comprises the following steps:
calculating the trace Tr_H and the determinant Det_H of the second-order differential Hessian matrix according to the eigenvalues of the second-order differential Hessian matrix;
according to the information of each pixel point of the primary-information-processed image, constructing the second-order differential Hessian matrix using the following formula:
H = [ D_xx(x, y), D_xy(x, y); D_xy(x, y), D_yy(x, y) ]
where D_xx(x, y) is the second-order differential operator in the horizontal direction, D_yy(x, y) is the second-order differential operator in the vertical direction, and D_xy(x, y) is the second-order differential operator in the 45° direction;
after constructing the second-order differential Hessian matrix according to the pixel point information of the image subjected to the primary information processing, the method further comprises the following steps:
calculating the trace Tr_H and the determinant Det_H of the second-order differential Hessian matrix according to its eigenvalues, and judging the local extrema in the image using the trace Tr_H and the determinant Det_H, where the judging method is as follows:
according to D_xx(x, y) and Det_H, a local extremum determination is made: if Det_H > 0 and D_xx(x0, y0) < 0, the point (x0, y0) is a local maximum point, i.e., a point target in the image; if Det_H > 0 and D_xx(x0, y0) > 0, the point (x0, y0) is a local minimum point; if Det_H < 0, the point (x0, y0) is a saddle point; if Det_H = 0, the point (x0, y0) is a critical point;
judging the local extrema in the image using the trace Tr_H and the determinant Det_H, and further extracting the structural component features of the spatial domain channel;
the gray value of the structural component characteristic of the weak and small target is calculated by using the following formula:
Figure FDA0004090134670000031
where Q (x, y) is the gray value of the structural component feature image of the weak and small object.
2. The weak and small target detection method according to claim 1, characterized in that the weak and small target is preliminarily detected by performing primary information processing on each pixel point of the image through a convolution operation using the DOG model of the retinal ganglion cell receptive field.
3. The weak small object detection method of claim 1, wherein extracting high frequency component features of the image using wavelet transform in a frequency domain channel comprises:
performing a two-level decomposition by wavelet transform on the primary-information-processed image to obtain a wavelet transform coefficient vector, setting the approximation coefficient matrix in the wavelet transform coefficient vector to 0, calculating the inverse transform of the wavelet transform coefficient vector, and extracting the high-frequency component features of the weak and small target according to the absolute value of the inverse transform.
4. The weak target detection method of claim 1, wherein the structural component features of the spatial domain channel and the high frequency component features of the frequency domain channel are point-multiplied together to obtain the output image features.
5. A detection system for implementing the weak target detection method according to any one of claims 1-4, applied in the field of infrared images, said system comprising:
the image module is used for reading or acquiring images;
the information processing module is used for performing primary information processing on the image and preliminarily detecting a weak and small target;
the feature extraction module is used for extracting structural component features and high-frequency component features of the primary information processed image in a space domain channel and a frequency domain channel respectively, integrating the structural component features and the high-frequency component features into output image features and extracting the weak and small targets according to the structural component features and the high-frequency component features;
the image module is electrically connected with the information processing module, and the information processing module is electrically connected with the feature extraction module.
6. A memory device, comprising:
a software unit and a storage unit;
the software unit is configured to implement the weak and small target detection method according to any one of claims 1-4;
the storage unit is used for storing the software unit.
7. A terminal device comprising a storage unit, a processor, and a software program stored in the storage unit and executable on the processor, characterized in that the processor implements the weak and small target detection method according to any one of claims 1-4 when executing the software program.
CN201811591910.6A 2018-12-25 2018-12-25 Weak and small target detection method, detection system, storage device and terminal equipment Active CN111368585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811591910.6A CN111368585B (en) 2018-12-25 2018-12-25 Weak and small target detection method, detection system, storage device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811591910.6A CN111368585B (en) 2018-12-25 2018-12-25 Weak and small target detection method, detection system, storage device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111368585A CN111368585A (en) 2020-07-03
CN111368585B true CN111368585B (en) 2023-04-21

Family

ID=71211369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811591910.6A Active CN111368585B (en) 2018-12-25 2018-12-25 Weak and small target detection method, detection system, storage device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111368585B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114595735A (en) * 2020-11-20 2022-06-07 中移动信息技术有限公司 User detection method, device, equipment and computer storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103700113A (en) * 2012-09-27 2014-04-02 中国航天科工集团第二研究院二O七所 Method for detecting dim small moving target under downward-looking complicated background
CN104036523A (en) * 2014-06-18 2014-09-10 哈尔滨工程大学 Improved mean shift target tracking method based on surf features
CN107507122A (en) * 2017-08-22 2017-12-22 吉林大学 Stereo-picture Zero watermarking method based on NSCT and SIFT
CN107886101A (en) * 2017-12-08 2018-04-06 北京信息科技大学 A kind of scene three-dimensional feature point highly effective extraction method based on RGB D

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
CN101201937B (en) * 2007-09-18 2012-10-10 上海医疗器械厂有限公司 Digital image enhancement method and device based on wavelet reconstruction and decompose
WO2011100511A2 (en) * 2010-02-11 2011-08-18 University Of Michigan Methods for microcalification detection of breast cancer on digital tomosynthesis mammograms
CN102819740B (en) * 2012-07-18 2016-04-20 西北工业大学 A kind of Single Infrared Image Frame Dim targets detection and localization method
SG11201506229RA (en) * 2013-02-27 2015-09-29 Hitachi Ltd Image analysis device, image analysis system, and image analysis method
CN103234969B (en) * 2013-04-12 2015-03-04 江苏大学 Method for measuring fabric weft density based on machine vision
CN104299229B (en) * 2014-09-23 2017-04-19 西安电子科技大学 Infrared weak and small target detection method based on time-space domain background suppression
US9269025B1 (en) * 2015-01-29 2016-02-23 Yahoo! Inc. Object detection in images
CN104679895A (en) * 2015-03-18 2015-06-03 成都影泰科技有限公司 Medical image data storing method
US9754182B2 (en) * 2015-09-02 2017-09-05 Apple Inc. Detecting keypoints in image data
CN106251344B (en) * 2016-07-26 2019-02-01 北京理工大学 A kind of multiple dimensioned infrared target self-adapting detecting method of view-based access control model receptive field
CN106600564A (en) * 2016-12-23 2017-04-26 潘敏 Novel image enhancement method
CN107220948A (en) * 2017-05-23 2017-09-29 长春工业大学 A kind of enhanced method of retinal images
CN107403134B (en) * 2017-05-27 2022-03-11 西安电子科技大学 Local gradient trilateral-based image domain multi-scale infrared dim target detection method

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN103700113A (en) * 2012-09-27 2014-04-02 中国航天科工集团第二研究院二O七所 Method for detecting dim small moving target under downward-looking complicated background
CN104036523A (en) * 2014-06-18 2014-09-10 哈尔滨工程大学 Improved mean shift target tracking method based on surf features
CN107507122A (en) * 2017-08-22 2017-12-22 吉林大学 Stereo-picture Zero watermarking method based on NSCT and SIFT
CN107886101A (en) * 2017-12-08 2018-04-06 北京信息科技大学 A kind of scene three-dimensional feature point highly effective extraction method based on RGB D

Non-Patent Citations (2)

Title
Chandaka Shravani等.Feature Extraction and Matching using Gabor wavelets and Hessian detector.《Fourth International Conference on Devices, Circuits and Systems (ICDCS'18)》.2018,第297-300页. *
赵耀.红外弱小目标检测算法研究.《中国优秀硕士学位论文全文数据库 信息科技辑》.2016,(第7期),第I138-1126页. *

Also Published As

Publication number Publication date
CN111368585A (en) 2020-07-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant