CN114494342A - Method for detecting and tracking marine target of visible light sequence image of synchronous orbit satellite - Google Patents

Info

Publication number
CN114494342A
Authority
CN
China
Prior art keywords: target, image, images, detection, targets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111666781.4A
Other languages
Chinese (zh)
Inventor
潘宗序
王乾通
胡玉新
刘方坚
韩冰
Current Assignee
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS
Priority to CN202111666781.4A
Publication of CN114494342A
Legal status: Pending

Classifications

    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20132: Image cropping
    • G06T 2207/30181: Earth observation
    • G06T 2207/30241: Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting and tracking marine targets in visible light sequence images of a synchronous orbit satellite. The method first groups multi-frame images and composites each group, which makes it convenient to detect candidate targets using their motion information and can improve the detection rate while reducing the false alarm rate. The input sequence images are then divided into blocks and given adaptive quantization preprocessing, which reduces the influence of brightness on target detection. A convolutional neural network is then used to train and load a detection model for the targets, so that the method can adapt to target detection in complex scenes. The trained detection model performs target detection on each single group of images, and the detection results across the groups of images are associated based on intersection over union. The method automatically learns, in a data-driven manner, features that describe the targets well; its computational complexity varies linearly with the number of detected targets, so it is computationally efficient and can meet timeliness requirements.

Description

Method for detecting and tracking marine target of visible light sequence image of synchronous orbit satellite
Technical Field
The invention relates to the technical field of remote sensing image target detection, in particular to a method for detecting and tracking marine targets in visible light sequence images of a synchronous orbit satellite.
Background
The detection and tracking of marine targets is an important task in marine surveillance. With the development of satellite technology, earth observation means such as synthetic aperture radar and optical satellites play an increasingly important role in monitoring targets at sea. Low-orbit satellite images generally have high spatial resolution, and a great deal of work on the detection and identification of targets at sea has been carried out around them. However, a low earth orbit satellite has a small single-observation range and a long revisit period, making it difficult to observe the same region continuously within a short time. Compared with a low-orbit satellite, a geosynchronous orbit satellite acquires images of lower spatial resolution, but it offers a large observation range, a short revisit period and rapid response, and it can acquire sequence images of the same observation point, thereby realizing near-real-time continuous monitoring of a large area and playing an important role in the dynamic monitoring of marine targets.
In recent years, marine target monitoring based on geosynchronous orbit optical satellite images has faced two main difficulties: first, the low spatial resolution of the images makes target characteristics inconspicuous, which easily causes missed detections; second, clouds strongly interfere with detection, and targets are very similar in shape to some fragments of broken cloud, which easily causes false alarms.
The research of Zhang Zhixin, Xu Qingjun, Zhang Chuan and Zhao Dong (ship detection based on motion characteristics in single-scene Gaofen-4 (GF-4) satellite multispectral images [J]. Remote Sensing Technology and Application, 2019, 34(4): 892-900) applies a detection method based on single-frame images to each frame of the sequence images separately, so as to improve the detection rate and calculate the course and speed of the target.
The implementation process of the on-orbit target detection and tracking method for multiple moving ships at sea ([J]. Journal of University of Chinese Academy of Sciences, 2020, 37(3): 368-378) is as follows: first, candidate targets are extracted on a single-frame image using an edge detection operator; second, features such as the area and aspect ratio of the candidate targets are extracted, and a classifier is constructed from these features to remove false alarms such as clouds and islands; finally, the detection results of the multiple frames of images are associated by a joint probabilistic data association method to obtain the target tracks.
The procedure used in "L. Yao, Y. Liu, and Y. He. A Novel Ship-Tracking Method for GF-4 Satellite Sequential Images [J]. Sensors, 2018, 18: 1-14" is as follows: first, candidate targets are extracted using a saliency detection method based on statistical analysis, and false alarms are removed according to the size of the candidate targets; second, the geographic position of each target is calculated using a rational polynomial coefficient model; finally, the detection results of the multiple frames of images are associated by a multiple hypothesis tracking method to obtain the target tracks.
The technical route of the existing methods can be summarized as follows: first, taking the gray-level difference between the target and the sea background as the basis, candidate targets are extracted on a single-frame image using image processing methods such as threshold segmentation and edge extraction, or methods based on statistical analysis such as constant false alarm rate detection and saliency detection; false alarms are then removed according to geometric characteristics of the candidate targets such as size, area and aspect ratio; finally, the detection results of multiple frames are associated by a joint probabilistic data association or multiple hypothesis tracking method to obtain the target tracks. The flow chart of this approach is shown in fig. 1.
The technical defects of the existing methods are mainly reflected in the following four aspects: 1) they do not apply preprocessing operations such as quantization to the brightness of the input image, so missed detections easily occur when the brightness of a target on the image is low; 2) they detect candidate targets only on single-frame images, perform poorly on weak and small targets, and are prone to missed or false detections; 3) they adopt conventional image processing or statistical analysis methods such as threshold segmentation, edge extraction and statistic calculation, detecting targets with manually designed features, which makes it difficult to detect targets accurately in complex scenes with much broken cloud; 4) they adopt target association based on probabilistic models or multiple hypothesis tracking, whose computational complexity grows exponentially with the number of detected targets, so the computational cost is high and timeliness requirements are difficult to meet.
Disclosure of Invention
In view of the above, the invention provides a method for detecting and tracking marine targets in visible light sequence images of a synchronous orbit satellite. The method performs adaptive quantization on the input images and exploits the motion information of the targets, which can improve the detection rate and reduce the false alarm rate; it automatically learns, in a data-driven manner, features that describe the targets well; and its computational complexity varies linearly with the number of detected targets, so it is computationally efficient and can meet timeliness requirements.
A method for detecting and tracking marine targets in visible light sequence images of a synchronous orbit satellite comprises the following steps:
Step 1, reading the sequence images in order and grouping them; unifying the length and width of the images within each group, and splicing them to form an image tensor Y_i;
Step 2, processing each image in blocks, unifying the brightness within each image block, and dividing the image tensor Y_i into image block tensors y_i;
Step 3, constructing a neural network as the marine target detection model, whose input is each group's image block tensors y_i; training the marine target detection model with the sample set to obtain a trained marine target detection model;
Step 4, inputting the image sequence to be detected, group by group, into the trained marine target detection model to obtain the target detection results of each group;
Step 5, based on intersection over union, performing target association on the detection results of each group of images in turn to obtain target tracks;
Step 6, obtaining the final target detection and tracking result as those target tracks, among the tracks obtained, whose number of targets is greater than or equal to N/2, where N is the total number of frames in the image sequence to be detected.
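The six steps above can be sketched as a minimal outline; the helper names and list-based stand-ins below are hypothetical, since the patent prescribes no implementation language, and only steps 1 and 6 are fleshed out here.

```python
# Outline of steps 1 and 6 (hypothetical helper names; plain lists stand in
# for images, with M = 3 and N = 8 as in the embodiment).

def group_frames(frames, m):
    """Step 1: group N frames into N - M + 1 overlapping groups of M frames."""
    return [frames[i:i + m] for i in range(len(frames) - m + 1)]

def filter_tracks(tracks, n_frames):
    """Step 6: keep only target tracks containing >= N/2 detections."""
    return [t for t in tracks if len(t) >= n_frames / 2]

frames = list(range(8))           # stand-ins for the 8 sequence images
groups = group_frames(frames, 3)  # 6 groups, as in the embodiment
```

With 8 frames grouped by 3 adjacent frames this yields 6 groups, matching the N - M + 1 = 6 composite images of the embodiment.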
Further, the brightness within an image block is unified by quantization using a linear stretching method, a histogram equalization method, or the statistics of the pixels.
Further, the specific method for quantizing the brightness within an image block using the statistics of the pixels is as follows:
calculating the minimum value z_min, maximum value z_max, mean μ and standard deviation σ of the pixels within the image block;
if σ < 10, the image block is quantized according to equation (1):
x = min((z - z_min) × 5, 255)    (1)
wherein x represents the quantized pixel value and z is the original pixel value;
otherwise, the image block is quantized according to equation (2), which maps z into [0, 255] using an upper threshold (equation (2) appears only as an image in the original document);
wherein T_b = min(T_a, z_max) and T_a = μ + rσ;
r is the standard deviation gain, with a high value r_up ∈ [4.5, 5.5] and a low value r_down ∈ [2.5, 3.5]; if S_a/S_b > τ_s then r = r_up, otherwise r = r_down, where the pixel-proportion threshold τ_s ∈ [0.01, 0.03], S_a is the number of pixels within the image block whose value is greater than μ + 3σ, and S_b is the total number of pixels within the image block.
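A minimal sketch of the per-block quantization described above. Equation (1) is implemented verbatim; since equation (2) survives only as an image, the σ ≥ 10 branch below assumes a linear stretch into [0, 255] clipped at T_b, which is an assumption, not the patent's exact formula.

```python
# Per-block adaptive quantization (sketch). The sigma < 10 branch is equation
# (1) from the text; the other branch is an ASSUMED linear stretch clipped at
# T_b = min(mu + r*sigma, z_max), standing in for equation (2), which is not
# legible in the source document.
import statistics

def quantize_block(pixels, tau_s=0.02, r_up=5.0, r_down=3.0):
    z_min, z_max = min(pixels), max(pixels)
    mu = statistics.fmean(pixels)
    sigma = statistics.pstdev(pixels)
    if sigma < 10:
        # equation (1): x = min((z - z_min) * 5, 255)
        return [min((z - z_min) * 5, 255) for z in pixels]
    # choose the standard deviation gain r from the bright-pixel proportion
    s_a = sum(1 for z in pixels if z > mu + 3 * sigma)
    s_b = len(pixels)
    r = r_up if s_a / s_b > tau_s else r_down
    t_b = min(mu + r * sigma, z_max)
    # assumed form of equation (2): stretch [z_min, T_b] onto [0, 255]
    scale = 255.0 / max(t_b - z_min, 1e-6)
    return [min(int((min(z, t_b) - z_min) * scale), 255) for z in pixels]

low_var = quantize_block([10, 12, 11, 10])    # sigma < 10: equation (1)
high_var = quantize_block([0, 50, 100, 200])  # sigma >= 10: assumed branch
```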
Further, in step 1, the minimum height h_min and the minimum width w_min of the M frames of images contained in each group are calculated, and the heights and widths of the M frames are uniformly cropped to h_min and w_min.
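The in-group size unification can be sketched with frames represented as nested lists (a real implementation would crop image arrays; the function name is our own):

```python
# Crop every frame in a group to the group's minimum height h_min and
# minimum width w_min, as described above (toy nested-list "frames").

def unify_sizes(frames):
    h_min = min(len(f) for f in frames)
    w_min = min(len(f[0]) for f in frames)
    return [[row[:w_min] for row in f[:h_min]] for f in frames]

a = [[1, 2, 3], [4, 5, 6]]       # a 2 x 3 frame
b = [[7, 8], [9, 10], [11, 12]]  # a 3 x 2 frame
u = unify_sizes([a, b])          # both cropped to 2 x 2
```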
Further, the specific method of intersection-over-union-based target association in step 5 is as follows:
for the image tensor Y_i with i = 1, each detected target R_1^k is taken as a target track of its own, giving the current target tracks; the number of current target tracks T equals K;
when i ≥ 2, let T be the total number of current target tracks obtained so far; the intersection over union is computed in turn between the last target R_t = [x_t, y_t, h_t, w_t]^T of the t-th target track and each newly obtained, not-yet-associated detected target R_i^k; if the maximum of these intersection-over-union values is greater than a threshold τ, the detected target with the maximum value is associated to the t-th target track, and the association status of the associated detected target R_i^k is recorded; the T current target tracks are associated in turn with the newly obtained unassociated detected targets; all associated target tracks, together with all unassociated targets, are taken as the current target tracks.
Further, the method for recording the association status of a detected target R_i^k is as follows: during target association, a vector c = [c_1, ..., c_K]^T of K Boolean variables is maintained; if the detected target R_i^k has not yet been associated, c_k = 0 is marked; if R_i^k has been associated, c_k = 1 is marked.
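The association rule above, including the Boolean vector c, can be sketched as follows. Boxes are (cx, cy, h, w) tuples matching R_t = [x_t, y_t, h_t, w_t]^T, the IoU computation is the standard one, and the function names are our own.

```python
# Greedy IoU association: each track tries to claim the unassociated
# detection with the highest IoU above the threshold tau; leftover
# detections start new tracks (c[k] = 1 marks detection k as associated).

def iou(box_a, box_b):
    ax, ay, ah, aw = box_a
    bx, by, bh, bw = box_b
    # convert center/size boxes to corner coordinates
    ax1, ay1, ax2, ay2 = ax - aw / 2, ay - ah / 2, ax + aw / 2, ay + ah / 2
    bx1, by1, bx2, by2 = bx - bw / 2, by - bh / 2, bx + bw / 2, by + bh / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, tau=0.1):
    c = [0] * len(detections)  # c_k = 1 once detection k is associated
    for track in tracks:
        last = track[-1]
        best_k, best_v = -1, tau
        for k, det in enumerate(detections):
            v = iou(last, det)
            if c[k] == 0 and v > best_v:
                best_k, best_v = k, v
        if best_k >= 0:
            track.append(detections[best_k])
            c[best_k] = 1
    # every unassociated detection becomes a new one-element track
    tracks.extend([det] for k, det in enumerate(detections) if c[k] == 0)
    return tracks
```

Each track inspects the K detections once, so the work grows linearly with the number of detected targets, which is the complexity property claimed in the text.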
Further, the neural network in step 3 is a deep convolutional neural network.
Further, the deep convolutional neural network adopts Faster R-CNN, SSD or R3Det.
Further, in step 5, the detection results of the groups of images may alternatively be associated using the distance between target center points.
Beneficial effects:
a. Aiming at the problem that missed detections easily occur when the brightness of a target on the image is low, the invention performs adaptive quantization on the input images, brightening overly dark targets and dimming overly bright ones, thereby reducing the influence of brightness on target detection;
b. Aiming at the problems that detection on single-frame images performs poorly on weak and small targets and easily produces missed or false detections, the invention detects candidate targets on a composite image formed from multiple frames; because the apparent size of a target on the composite image is increased, the difficulty of detecting weak and small targets is alleviated, and because the motion information of the targets is exploited, the detection rate can be improved and the false alarm rate reduced;
c. Aiming at the problem that manually designed features make it difficult to detect targets accurately in complex scenes with much broken cloud, the invention adopts a convolutional neural network to automatically learn, in a data-driven manner, features that describe the targets well, so it can adapt to target detection in complex scenes and effectively reduce the false alarm rate;
d. Aiming at the problems that existing target association models have high computational complexity and can hardly meet timeliness requirements, the invention exploits the fact that the same target has large overlap in adjacent composite images and performs target association based on intersection over union, so that the computational complexity varies linearly with the number of detected targets, the computation is efficient, and timeliness requirements can be met.
Drawings
FIG. 1 is a flow chart of a prior art method.
FIG. 2 is a flow chart of the method of the present invention.
FIG. 3 shows the group 1 image detection results of step 4 in the embodiment.
FIG. 4 shows the group 2 image detection results of step 4 in the embodiment.
FIG. 5 shows the group 3 image detection results of step 4 in the embodiment.
FIG. 6 shows the group 4 image detection results of step 4 in the embodiment.
FIG. 7 shows the group 5 image detection results of step 4 in the embodiment.
FIG. 8 shows the group 6 image detection results of step 4 in the embodiment.
FIG. 9 shows the final target detection and tracking result of step 6 in the embodiment.
Where a circular box represents a detected object on the current image, a rectangular box represents a detected object on the history image, and an arrow represents an object trajectory.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides a method for detecting and tracking marine targets in visible light sequence images of a synchronous orbit satellite, which comprises the following steps:
Step 1, reading the sequence images containing N frames in order and grouping them; unifying the length and width of the images within each group, and splicing them to form an image tensor Y_i. The N = 8 frames of images are grouped by adjacent M = 3 frames, obtaining N - M + 1 = 6 composite images X_i, i = 1, ..., 6, where the i-th composite image X_i consists of the i-th to (i + M - 1)-th, i.e. the i-th to (i + 2)-th, frames, namely:
the 1st composite image X_1 consists of frames 1 to 3,
the 2nd composite image X_2 consists of frames 2 to 4,
and so on.
For each group of images, the minimum height h_min and width w_min among the M = 3 frames are calculated, and the heights and widths of the 3 frames are unified to h_min and w_min by cropping the surplus pixels.
The 3 frames of unified size are spliced along the channel direction to form an image tensor Y_i of size h_min × w_min × M, and the image tensor Y_i is cut along the row and column directions into image block tensors y_i of size p × p × M. In the embodiment, h_min is 1600 and w_min is 3200, so the spliced image tensor Y_i has size 1600 × 3200 × 3, and the image block tensors y_i cut from it have size 320 × 320 × 3.
Step 2, the image is processed in a blocking mode, the brightness in the image block is unified, and the image tensor Y is processediBlock-wise division into image block tensors yi
The image is subjected to a blocking process to form image blocks of size p × p. In this embodiment, N is 8, and the image block size p is set to 320.
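The blocking can be sketched by enumerating tile origins; with the embodiment's h_min = 1600, w_min = 3200 and p = 320 this gives 5 × 10 = 50 blocks per composite image (the function name is our own).

```python
# Enumerate the top-left (row, col) origin of each p x p block of an
# h x w image, row-major, as in step 2.

def tile_origins(h, w, p):
    return [(r, c)
            for r in range(0, h - p + 1, p)
            for c in range(0, w - p + 1, p)]

origins = tile_origins(1600, 3200, 320)  # 50 block origins
```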
An adaptive quantization operation is performed on each image block, with the thresholds computed from the statistics of the block: the minimum value z_min, maximum value z_max, mean μ and standard deviation σ of the pixels within the image block are calculated;
if σ < 10, the image block is quantized according to equation (1):
x = min((z - z_min) × 5, 255)    (1)
wherein x represents the quantized pixel value;
otherwise, the quantized pixel values are calculated according to equation (2) (which appears only as an image in the original document) and the image block is quantized accordingly, wherein T_b = min(T_a, z_max) and T_a = μ + rσ.
The number S_a of pixels within the image block whose value is greater than μ + 3σ and the total number S_b of pixels within the image block are calculated; if S_a/S_b > τ_s then r = r_up, otherwise r = r_down, where r is the standard deviation gain. In this embodiment, the parameters of the adaptive quantization are set as follows: pixel-proportion threshold τ_s = 0.02, standard deviation gain high value r_up = 5, and standard deviation gain low value r_down = 3.
Each image block is adaptively quantized according to the above method.
Step 3, constructing a neural network as the target detection model. The size of the input layer of the neural network is set to 320 × 320 × 3; the input to the network is y_i, the tensor formed by splicing the adjacent 3 frames of images along the channel direction. The constructed neural network is then trained with the training sample set to obtain the trained detection model.
Step 4, grouping the image sequence to be detected in the same way as in step 1, and inputting the image block tensors y_i contained in each group's image tensor Y_i into the trained detection model; target detection is performed on each group's image tensor Y_i in turn to obtain the detection result R_i.
Let R_i contain K detected targets in total, and record the k-th detected target as R_i^k = [x_i^k, y_i^k, h_i^k, w_i^k]^T, where x_i^k, y_i^k, h_i^k and w_i^k respectively represent the horizontal and vertical coordinates of the target's center point, its height and its width, and k ∈ {1, ..., K}. The image tensors Y_1 to Y_{N-M+1} are processed in sequence.
Step 5, performing target association on the detection results of the groups of images based on intersection over union.
As shown in figs. 3 to 9, a circular box indicates a target detected on the current image, a rectangular box indicates a target detected on a historical image, and an arrow indicates a target track.
In the embodiment, the intersection-over-union threshold τ is set to 0.1.
The image block tensors y_1 contained in the image tensor Y_1 are input to the detection model f_θ(·) to obtain the detection result R_1, shown in fig. 3. The targets detected on the current group of images are the detected targets R_1^1, R_1^2 and R_1^3, so the number of targets detected on the current image is 3. Each of the 3 targets is taken as a target track of its own, namely target tracks 1, 2 and 3, so the current number of target tracks is T = 3.
The image block tensors y_2 contained in the image tensor Y_2 are input to the detection model f_θ(·) to obtain the detection result R_2, shown in fig. 4. Two targets are detected on the current group of images, the detected targets R_2^1 and R_2^2, so the number of targets detected on the current image is K = 2. Detected target R_2^1 is associated with target track 2, while detected target R_2^2 is not associated with any existing target track and is therefore taken as a new target track, namely target track 4. After target association over the first two groups of images there are 4 target tracks in total, i.e. the current number of target tracks is T = 4.
The image block tensors y_3 contained in the image tensor Y_3 are input to the detection model f_θ(·) to obtain the detection result R_3, shown in fig. 5. Three targets are detected on the current group of images, the detected targets R_3^1, R_3^2 and R_3^3, so K = 3. Detected targets R_3^1 and R_3^2 are associated with target tracks 2 and 4 respectively, while detected target R_3^3 is not associated with any existing target track and is therefore taken as a new target track, target track 5. After target association over the first three groups of images there are 5 target tracks in total, i.e. T = 5.
The image block tensors y_4 contained in the image tensor Y_4 are input to the detection model f_θ(·) to obtain the detection result R_4, shown in fig. 6. Two targets are detected on the current group of images, the detected targets R_4^1 and R_4^2, so K = 2; they are associated with target tracks 4 and 5 respectively. After target association over the first four groups of images there are still 5 target tracks in total, i.e. T = 5.
The image block tensors y_5 contained in the image tensor Y_5 are input to the detection model f_θ(·) to obtain the detection result R_5, shown in fig. 7. Three targets are detected on the current group of images, the detected targets R_5^1, R_5^2 and R_5^3, so K = 3. Detected targets R_5^1 and R_5^2 are associated with target tracks 2 and 4 respectively, while the newly detected target R_5^3 is not associated with any existing target track and is therefore taken as a new target track, target track 6. After target association over the first five groups of images there are 6 target tracks in total, i.e. T = 6.
The image block tensors y_6 contained in the image tensor Y_6 are input to the detection model f_θ(·) to obtain the detection result R_6. The group 6 detection results are shown in fig. 8; two targets are detected on the current group of images, the detected targets R_6^1 and R_6^2, so K = 2, and they are associated with target tracks 2 and 4 respectively. After target association over all six groups of images there are 6 target tracks in total, i.e. T = 6.
Step 6, generating and outputting the target tracks.
After target detection and association have been completed for all images in the sequence, a set of target tracks is obtained; the target tracks whose number of targets is greater than or equal to N/2 are taken as the final target detection and tracking result and output. In this embodiment N = 8, so only target tracks containing at least 4 targets are retained as the final result. Target tracks 1 to 6 contain 1, 5, 1, 5, 2 and 1 targets respectively, so only target tracks 2 and 4 remain. The final target detection and tracking result is shown in fig. 9. Target tracks 2 and 4 correspond to two ship targets; the other target tracks are false alarms caused by clouds.
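The final filtering can be replayed with the per-track target counts implied by the six-group walkthrough (N = 8, so the threshold is N/2 = 4):

```python
# Keep only target tracks holding at least N/2 detections; the counts per
# track follow the six-group walkthrough in the embodiment.

counts = {1: 1, 2: 5, 3: 1, 4: 5, 5: 2, 6: 1}  # detections per target track
n_frames = 8
kept = sorted(t for t, c in counts.items() if c >= n_frames / 2)
```

Only target tracks 2 and 4 survive, matching fig. 9.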
The effect of the scheme of the invention was verified on 10 groups of measured sequences with an average image size of 10000 × 10000; the detection rate, false alarm rate and running time of the scheme on each test sequence are shown in Table 1. The scheme achieves an average detection rate of 92%, a false alarm rate of 30% and a running time of 8 s, whereas the existing method achieves, on the same 10 groups of measured sequences, an average detection rate of 70%, a false alarm rate of 40% and a running time of 125 s. The scheme of the invention is clearly superior to the existing method in detection rate and false alarm rate, proving the effectiveness of detecting sequence-image targets using composite images generated from multiple frames; in timeliness, it is 15.6 times faster than the existing method based on multiple hypothesis tracking, demonstrating the effectiveness of the intersection-over-union-based target association in the scheme of the invention.
TABLE 1 test results and run times of the protocol of the invention on 10 sets of measured sequences
(Table 1 appears only as images in the original document and is not reproduced here.)
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A method for detecting and tracking marine targets of visible light sequence images of a synchronous orbit satellite, characterized by comprising the following steps:
step 1, reading N frames of sequence images in order and grouping every M adjacent frames into a group, obtaining N-M+1 groups of images X_1, ..., X_i, ..., X_(N-M+1), wherein X_i consists of frames i to i+M-1; unifying the height and width of the images within each group and stacking them to form an image tensor Y_i;
step 2, processing the images block by block and unifying the brightness within each image block, dividing the image tensor Y_i block-wise into image block tensors y_i;
step 3, constructing a neural network as the marine target detection model, the input of which is each group's image block tensor y_i; training the marine target detection model with a sample set to obtain a trained marine target detection model;
step 4, inputting the image sequence to be detected, group by group, into the trained marine target detection model to obtain the target detection result of each group;
step 5, performing target association on the detection results of the successive groups of images based on the intersection-over-union ratio to obtain target tracks;
and step 6, taking as the final target detection and tracking result those target tracks containing N/2 or more targets, wherein N is the total number of frames in the image sequence to be detected.
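The grouping and stacking of step 1, together with the uniform cropping specified in claim 4, can be sketched as follows. The function name `group_frames` and the use of NumPy arrays are illustrative assumptions, not part of the patent:

```python
import numpy as np

def group_frames(frames, M):
    """Group N frames into N-M+1 overlapping groups of M adjacent frames
    (claim 1, step 1). Each frame is a 2-D array; every group is cropped
    to the minimum height/width of its members (claim 4) and stacked into
    an image tensor Y_i of shape (M, h_min, w_min)."""
    N = len(frames)
    groups = []
    for i in range(N - M + 1):
        members = frames[i:i + M]                      # frames i .. i+M-1
        h_min = min(f.shape[0] for f in members)
        w_min = min(f.shape[1] for f in members)
        # Crop every frame in the group to the common size, then stack.
        Y_i = np.stack([f[:h_min, :w_min] for f in members])
        groups.append(Y_i)
    return groups
```

With N = 5 frames and M = 3, this yields the 3 overlapping groups the claim describes.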
2. The method for detecting and tracking marine targets of visible light sequence images of a geostationary orbit satellite according to claim 1, wherein the brightness within image blocks is unified by quantization using a linear stretching method, a histogram equalization method, or pixel statistics.
3. The method for detecting and tracking marine targets of visible light sequence images of a geostationary orbit satellite as claimed in claim 2, wherein the specific method of unifying image-block brightness by quantization using pixel statistics is as follows:
calculating the minimum value z_min, maximum value z_max, mean μ and standard deviation σ of the pixels within an image block;
if σ <10, the image block is quantized according to equation (1):
x = min((z - z_min) × 5, 255)    (1)
wherein x denotes the quantized pixel value and z the original pixel value;
otherwise, quantizing the image block according to equation (2):
x = min((z - z_min) / (T_b - z_min) × 255, 255)    (2)
wherein T_b = min(T_a, z_max) and T_a = μ + rσ;
r is a standard-deviation gain taking a high value r_up ∈ [4.5, 5.5] or a low value r_down ∈ [2.5, 3.5]: if S_a/S_b > τ_s, then r = r_up, otherwise r = r_down, wherein the pixel-proportion threshold τ_s ∈ [0.01, 0.03], S_a is the number of pixels within the image block whose values exceed μ + 3σ, and S_b is the total number of pixels within the image block.
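A minimal sketch of claim 3's statistics-based quantization, using representative values r_up = 5, r_down = 3 and τ_s = 0.02 from the claimed ranges. Equation (2) is rendered only as an image in the source, so the truncated linear stretch used for the high-contrast branch below is an assumption consistent with the definition of T_b:

```python
import numpy as np

def quantize_block(block, r_up=5.0, r_down=3.0, tau_s=0.02):
    """Brightness quantization of one image block from its pixel statistics
    (claim 3). r_up, r_down and tau_s are picked from the claimed ranges."""
    z = block.astype(np.float64)
    z_min, z_max = z.min(), z.max()
    mu, sigma = z.mean(), z.std()
    if sigma < 10:
        # Low-contrast branch, eq. (1): amplify offsets from the minimum.
        return np.minimum((z - z_min) * 5, 255).astype(np.uint8)
    # Choose the standard-deviation gain r from the bright-pixel proportion.
    S_a = np.count_nonzero(z > mu + 3 * sigma)   # very bright pixels
    S_b = z.size                                 # all pixels in the block
    r = r_up if S_a / S_b > tau_s else r_down
    T_b = min(mu + r * sigma, z_max)             # T_b = min(T_a, z_max)
    # High-contrast branch: assumed form of eq. (2), a linear stretch of
    # [z_min, T_b] onto [0, 255] with values above T_b clipped.
    x = np.rint((z - z_min) / (T_b - z_min + 1e-9) * 255)
    return np.clip(x, 0, 255).astype(np.uint8)
```

A flat block (σ = 0) takes the eq. (1) branch and maps to all zeros; a full 0–255 ramp takes the stretch branch and spans the whole output range.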
4. The method for detecting and tracking marine targets of visible light sequence images of a geostationary orbit satellite according to claim 1, wherein in step 1 the minimum height h_min and minimum width w_min of the M frames contained in each group are calculated, and the heights and widths of the M frames are uniformly cropped to h_min and w_min.
5. The method for detecting and tracking marine targets of visible light sequence images of a geostationary orbit satellite according to claim 1, wherein the specific steps of the intersection-over-union-based target association in step 5 are as follows:
for the image tensor Y_i, when i = 1, each of the K detection targets of the group is taken as a target track, giving the current target tracks, whose number T equals K;
when i ≥ 2, with T current target tracks obtained, the intersection-over-union ratio between the last target R_t = [x_t, y_t, h_t, w_t]^T of the t-th target track and each not-yet-associated detection target newly obtained for the group is computed in turn; if the maximum of these ratios is greater than a threshold τ, the detection target corresponding to the maximum ratio is associated to the t-th target track and its association status is recorded; the T current target tracks are associated in turn with the newly obtained, not-yet-associated detection targets; all associated target tracks together with all non-associated targets are then taken as the current target tracks.
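The association of claims 5 and 6 can be sketched as a greedy IoU matcher. The [x, y, h, w] top-left-corner box convention, the function names and the threshold value 0.3 are assumptions for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x, y, h, w]
    (top-left corner, height, width; the convention is assumed)."""
    ax, ay, ah, aw = a
    bx, by, bh, bw = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = ah * aw + bh * bw - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, tau=0.3):
    """One round of the claim-5 association: each existing track tries to
    extend itself with the not-yet-associated detection of highest IoU
    above tau; leftover detections start new tracks."""
    used = [False] * len(detections)   # the Boolean vector c of claim 6
    for track in tracks:
        best_k, best_v = -1, tau
        for k, det in enumerate(detections):
            if not used[k]:
                v = iou(track[-1], det)
                if v > best_v:
                    best_k, best_v = k, v
        if best_k >= 0:
            track.append(detections[best_k])
            used[best_k] = True
    # Non-associated detections become new single-target tracks.
    for k, det in enumerate(detections):
        if not used[k]:
            tracks.append([det])
    return tracks
```

Running one round on a single track and two detections, one overlapping and one far away, extends the track and spawns one new track.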
6. The method for detecting and tracking marine targets of visible light sequence images of a geostationary orbit satellite according to claim 5, wherein the method of recording the association status of a detection target is as follows: during target association, a vector c = [c_1, ..., c_K]^T containing K Boolean variables is maintained; a detection target that has not yet been associated is marked c_k = 0; a detection target that has been associated is marked c_k = 1.
7. The method for detecting and tracking marine targets of visible light sequence images of a geostationary orbit satellite according to claim 1, wherein the neural network in step 3 is a deep convolutional neural network.
8. The method for detecting and tracking marine targets of visible light sequence images of a geostationary orbit satellite according to claim 7, wherein the deep convolutional neural network employs Faster RCNN, SSD or R3Det.
CN202111666781.4A 2021-12-31 2021-12-31 Method for detecting and tracking marine target of visible light sequence image of synchronous orbit satellite Pending CN114494342A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111666781.4A CN114494342A (en) 2021-12-31 2021-12-31 Method for detecting and tracking marine target of visible light sequence image of synchronous orbit satellite

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111666781.4A CN114494342A (en) 2021-12-31 2021-12-31 Method for detecting and tracking marine target of visible light sequence image of synchronous orbit satellite

Publications (1)

Publication Number Publication Date
CN114494342A true CN114494342A (en) 2022-05-13

Family

ID=81507871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111666781.4A Pending CN114494342A (en) 2021-12-31 2021-12-31 Method for detecting and tracking marine target of visible light sequence image of synchronous orbit satellite

Country Status (1)

Country Link
CN (1) CN114494342A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019187A (en) * 2022-08-09 2022-09-06 中国科学院空天信息创新研究院 Detection method, device, equipment and medium for SAR image ship target
CN115019187B (en) * 2022-08-09 2022-11-22 中国科学院空天信息创新研究院 Detection method, device, equipment and medium for SAR image ship target

Similar Documents

Publication Publication Date Title
CN110232350B (en) Real-time water surface multi-moving-object detection and tracking method based on online learning
CN107818571B (en) Ship automatic tracking method and system based on deep learning network and average drifting
CN113034548B (en) Multi-target tracking method and system suitable for embedded terminal
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
Rakibe et al. Background subtraction algorithm based human motion detection
KR20200007084A (en) Ship detection method and system based on multi-dimensional features of scene
CN113052876B (en) Video relay tracking method and system based on deep learning
CN108197604A (en) Fast face positioning and tracing method based on embedded device
CN112669350A (en) Adaptive feature fusion intelligent substation human body target tracking method
CN111582126B (en) Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion
CN110555868A (en) method for detecting small moving target under complex ground background
Modasshir et al. Coral identification and counting with an autonomous underwater vehicle
Petraglia et al. Pipeline tracking and event classification for an automatic inspection vision system
CN113920436A (en) Remote sensing image marine vessel recognition system and method based on improved YOLOv4 algorithm
CN112927233A (en) Marine laser radar and video combined target capturing method
CN110751077A (en) Optical remote sensing picture ship detection method based on component matching and distance constraint
CN108717539A (en) A kind of small size Ship Detection
CN114494342A (en) Method for detecting and tracking marine target of visible light sequence image of synchronous orbit satellite
CN114419444A (en) Lightweight high-resolution bird group identification method based on deep learning network
Saad et al. StereoYolo+ DeepSORT: a framework to track fish from underwater stereo camera in situ
CN113255549A (en) Intelligent recognition method and system for pennisseum hunting behavior state
CN117475353A (en) Video-based abnormal smoke identification method and system
CN117423157A (en) Mine abnormal video action understanding method combining migration learning and regional invasion
Yang et al. Knowledge Distillation for Feature Extraction in Underwater VSLAM
CN116343078A (en) Target tracking method, system and equipment based on video SAR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination