CN111695524A - Remote sensing image sea surface ship detection method - Google Patents

Remote sensing image sea surface ship detection method

Info

Publication number
CN111695524A
Authority
CN
China
Prior art keywords
image
remote sensing
saliency
sea surface
surface ship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010542021.1A
Other languages
Chinese (zh)
Inventor
吴诗婳
戴大伟
李亚钊
于子桓
李彭伟
冯燕来
阚凌志
郭婉
陈娜
陆君之
赵祥智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 28 Research Institute
Original Assignee
CETC 28 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 28 Research Institute filed Critical CETC 28 Research Institute
Priority to CN202010542021.1A priority Critical patent/CN111695524A/en
Publication of CN111695524A publication Critical patent/CN111695524A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a remote sensing image sea surface ship detection method, which comprises the following steps: 1. preprocessing the remote sensing image to be detected, filtering image noise and improving the visual effect of the image; 2. performing multi-feature saliency analysis on the preprocessed image with a GBVS (graph-based visual saliency) model to obtain a fused saliency map; 3. using the saliency analysis result as the initial condition of an improved Chan-Vese model to complete sea surface ship segmentation of the remote sensing image; 4. selecting the region of interest as the minimum circumscribed rectangular region according to the TBR criterion, finally realizing sea surface ship detection. The invention effectively realizes sea surface ship detection in remote sensing images with complex backgrounds.

Description

Remote sensing image sea surface ship detection method
Technical Field
The invention relates to the field of remote sensing image sea surface ship detection, in particular to a remote sensing image sea surface ship detection method.
Background
The ship is a key target on the sea, and ship detection is taken as an important research direction in the field of remote sensing image processing, and has great value in the fields of marine traffic supervision, marine space planning and the like. The remote sensing image is used for automatically detecting the sea surface ship target, so that the defect that the traditional method depends on manual interpretation can be overcome, the efficiency is greatly improved, and the cost and the labor force are reduced. Most of the existing remote sensing image sea surface ship detection methods are based on a threshold segmentation method, the algorithm is simple and quick, but in practical application, due to interference of factors such as a remote sensing image imaging mechanism, climate, illumination and the like, the phenomena of haze shielding, uneven illumination, similarity of ship and sea surface gray levels and the like often exist in the image. The traditional sea surface ship detection method easily causes missed detection and false alarm. Therefore, how to effectively extract the sea surface ship target in the remote sensing image becomes a research hotspot in the field.
Disclosure of Invention
Aiming at the problems in the existing field of remote sensing image sea surface ship detection, the invention discloses a remote sensing image sea surface ship detection method based on saliency and an improved Chan-Vese model. In view of the large data volume, noise interference and unsatisfactory contrast of remote sensing images, the remote sensing image is first grayed, median-filtered and linearly stretched, filtering out noise and other useless information while emphasizing the region of interest; then multi-feature saliency analysis is performed on the preprocessed image with a GBVS model to generate a comprehensive saliency map; to avoid the Chan-Vese model's sensitivity to initial conditions, the saliency analysis result is used as the initial region of the improved Chan-Vese model to realize sea surface ship segmentation; finally, taking into account the numbers of pixels in the target and background regions, sea surface ship target detection is completed based on the minimum circumscribed rectangle method.
The technical scheme is as follows: the invention discloses a remote sensing image sea surface ship detection method based on saliency and an improved Chan-Vese model, which comprises the following steps:
step 1, preprocessing a remote sensing image to be detected;
step 2, simplifying the graph-based visual saliency (GBVS) model, performing saliency analysis on the preprocessed remote sensing image with the simplified GBVS model, and fusing the results to generate a comprehensive saliency map;
step 3, taking the saliency analysis result as the initial iteration condition of the Chan-Vese model to obtain the sea surface ship segmentation result of the remote sensing image;
step 4, finally realizing sea surface ship detection according to the TBR (Target-to-Background Ratio, the ratio of the number of target pixels in the target region to the number of background pixels) criterion.
The step 1 comprises the following steps:
step 1-1, graying an image: reducing the dimension of the remote sensing image to be detected by adopting image graying operation, and eliminating redundant information in the image;
step 1-2, image filtering and denoising: denoising the grayed image by adopting median filtering;
step 1-3, image linear enhancement: performing linear stretching processing on the denoised image by using the following formula, and redistributing the gray level range of the image:
g_e = 255 · (g − g_min) / (g_max − g_min)

where g_e is the image gray level after linear enhancement, g is the image gray level after denoising, and g_max and g_min are respectively the maximum and minimum gray levels of the denoised image.
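As an illustration of step 1, the following sketch strings the three preprocessing operations together, assuming an OpenCV/NumPy environment; the 3×3 median kernel and the [0, 255] output range are illustrative choices, not values fixed by the description:

```python
import cv2
import numpy as np

def preprocess(bgr_image: np.ndarray, median_ksize: int = 3) -> np.ndarray:
    """Step 1 sketch: graying, median denoising, linear gray-level stretching."""
    # Step 1-1: graying (drop the color dimension to reduce the data volume)
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Step 1-2: median filtering to suppress noise while preserving edges
    denoised = cv2.medianBlur(gray, median_ksize)
    # Step 1-3: linear stretch g_e = 255 * (g - g_min) / (g_max - g_min)
    g = denoised.astype(np.float64)
    g_min, g_max = float(g.min()), float(g.max())
    stretched = 255.0 * (g - g_min) / max(g_max - g_min, 1e-9)
    return stretched.astype(np.uint8)
```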
The step 2 comprises the following steps:
step 2-1, simplifying the graph-based visual saliency GBVS model: only the luminance and orientation sub-feature maps are extracted; the image to be detected is down-sampled, image filtering is performed with a Gaussian-pyramid low-pass filter to obtain 1 group of multi-scale luminance feature maps, and image filtering is performed with a Gabor pyramid filter bank to obtain 4 groups of multi-scale orientation feature maps with azimuth angles of 0°, 45°, 90° and 135° (the traditional GBVS model follows the algorithm of Harel J., Koch C., Perona P. Graph-Based Visual Saliency [C]. Advances in Neural Information Processing Systems, Vancouver, 2006, 19: 545-552.);
step 2-2, generating sub-saliency maps: the simplified GBVS model first constructs a Markov chain of the image to be detected, and the sub-saliency maps are generated by solving the equilibrium distribution of the Markov chain;
and 2-3, fusing the generated sub-saliency maps to generate a comprehensive saliency map.
Step 2-2 comprises: denote the image feature map (i.e., each of the multi-scale luminance and orientation feature maps obtained above) by M_F; each pixel of the feature map M_F is regarded as a node, and each node is connected to its neighboring nodes to form a directed graph G_A. The dissimilarity d((i, j), (p, q)) between node M_F(i, j) and node M_F(p, q) is defined as:

d((i, j), (p, q)) = | log( M_F(i, j) / M_F(p, q) ) |

where i, j respectively denote the abscissa and ordinate of node M_F(i, j), and p, q respectively denote the abscissa and ordinate of node M_F(p, q);
the weight w_1((i, j), (p, q)) of the edge connecting nodes (i, j) and (p, q) is set as:

w_1((i, j), (p, q)) = d((i, j), (p, q)) · F(i − p, j − q),

where F(i − p, j − q) = exp[ −((i − p)² + (j − q)²) / (2σ²) ], and σ is an adjustable parameter, generally taking a value between 0 and 1;
since the weights of a pair of opposite edges are equal, the weights of the edges leaving the same node are normalized; each node is regarded as a state and each edge weight as a transition probability, so that a Markov chain is defined on the directed graph G_A, and the equilibrium-distribution state map is taken as the saliency map M_A corresponding to the feature map M_F (in the equilibrium-distribution state map, the equilibrium state of the Markov chain represents the proportion of time spent at each node; if a node has little similarity to its surrounding nodes, much time accumulates at that node, so the dwell time indicates the saliency of a region; the detailed solution follows the standard computation of the equilibrium distribution of a Markov chain);
in the process of normalizing the saliency map, a second Markov chain G_N is constructed, and the weight w_2((i, j), (p, q)) of the edge connecting the two nodes (i, j) and (p, q) is set to:

w_2((i, j), (p, q)) = M_A(p, q) · F(i − p, j − q),

the steady-state distribution of G_N is then computed (again by solving the equilibrium distribution of the Markov chain), yielding the normalized saliency map M_N.
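The following sketch illustrates the step 2-2 computation for one small single-scale feature map: edge weights w_1 are built from the dissimilarity d and the spatial falloff F, the outgoing edges of each node are normalized into transition probabilities, and the equilibrium distribution is found by power iteration. The σ value and the solver are illustrative assumptions; the normalization chain G_N follows the same pattern with w_2 in place of w_1.

```python
import numpy as np

def gbvs_activation(feat: np.ndarray, sigma: float = 3.0, iters: int = 200) -> np.ndarray:
    """Sub-saliency map of one feature map via the equilibrium distribution of its Markov chain."""
    h, w = feat.shape
    eps = 1e-12
    f = feat.astype(np.float64).ravel() + eps
    ys, xs = np.mgrid[0:h, 0:w]
    ys, xs = ys.ravel().astype(np.float64), xs.ravel().astype(np.float64)
    # Dissimilarity d((i,j),(p,q)) = |log(M_F(i,j) / M_F(p,q))| between every pair of nodes
    d = np.abs(np.log(f)[:, None] - np.log(f)[None, :])
    # Spatial falloff F(i-p, j-q) = exp(-((i-p)^2 + (j-q)^2) / (2*sigma^2))
    dist2 = (ys[:, None] - ys[None, :]) ** 2 + (xs[:, None] - xs[None, :]) ** 2
    weights = d * np.exp(-dist2 / (2.0 * sigma ** 2))             # edge weights w_1
    trans = weights / (weights.sum(axis=1, keepdims=True) + eps)  # outgoing edges -> transition probabilities
    v = np.full(h * w, 1.0 / (h * w))
    for _ in range(iters):                                        # power iteration -> equilibrium distribution
        v = v @ trans
        v /= v.sum() + eps
    return v.reshape(h, w)                                        # sub-saliency map M_A
```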
The step 2-3 comprises the following steps:

The generated sub-saliency maps are fused according to the following formulas to obtain the luminance saliency map S_I and the orientation saliency map S_O respectively:

S_I = ⊕_{k=1..K} M_N^{I,k},
S_O = ⊕_{θ} ⊕_{k=1..K} M_N^{O,k,θ},

where M_N^{I,k} denotes the luminance sub-saliency map at scale k, M_N^{O,k,θ} denotes the orientation sub-saliency map at scale k and azimuth angle θ, K denotes the total number of scales, and ⊕ denotes cross-scale addition;
the obtained brightness saliency map S is subjected toIAnd direction saliency map SOAnd performing secondary fusion to obtain a comprehensive saliency map S of the remote sensing image to be detectedM
Figure BDA0002539225620000046
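A sketch of the fusion in steps 2-3, assuming the sub-saliency maps are first interpolated to a common reference size; normalizing and averaging S_I and S_O in the second fusion is an illustrative choice, since the exact secondary-fusion formula is given only as an equation image in the original.

```python
import cv2
import numpy as np

def fuse_saliency(luminance_maps, orientation_maps, ref_shape):
    """Cross-scale addition of sub-saliency maps, then fusion of S_I and S_O (steps 2-3 sketch)."""
    def across_scale_add(maps):
        acc = np.zeros(ref_shape, dtype=np.float64)
        for m in maps:
            # interpolate every scale to the reference size (h, w), then add
            acc += cv2.resize(m.astype(np.float64), (ref_shape[1], ref_shape[0]),
                              interpolation=cv2.INTER_LINEAR)
        return acc

    s_i = across_scale_add(luminance_maps)          # luminance saliency map S_I
    s_o = np.zeros(ref_shape, dtype=np.float64)     # orientation saliency map S_O over 0/45/90/135 degrees
    for maps in orientation_maps.values():
        s_o += across_scale_add(maps)
    # second fusion into the comprehensive saliency map S_M (normalized average, illustrative)
    s_m = 0.5 * (s_i / (s_i.max() + 1e-12) + s_o / (s_o.max() + 1e-12))
    return s_m
```

Here luminance_maps is the list of normalized luminance sub-saliency maps and orientation_maps maps each azimuth angle to its list of normalized sub-saliency maps.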
The step 3 comprises the following steps: a weighted Chan-Vese model is adopted, in which two adaptive weights are introduced into the computation of the fitting centers; the comprehensive saliency map S_M is taken as the input of the weighted Chan-Vese model, and the segmentation is computed by evolving the improved partial differential equation so as to minimize the energy functional:

E(C_1, C_2, φ) = μ·∫∫ δ(φ)|∇φ| dx dy + v·∫∫ H(φ) dx dy + λ_1·∫∫ |g(x, y) − C_1|² H(φ) dx dy + λ_2·∫∫ |g(x, y) − C_2|² (1 − H(φ)) dx dy,
where φ(x, y) denotes the level set function; the signed distance function (SDF) is selected as the level set function, which then becomes

φ(x, y) = d if (x, y) lies inside the zero level set, 0 if (x, y) lies on it, and −d if (x, y) lies outside it,

where d is the signed distance from the point (x, y) in the high-dimensional space to the zero level set;
the evolution equation of the zero level set function is:

∂φ/∂t = δ_ε(φ)·[ μ·div(∇φ/|∇φ|) − v − λ_1·(g(x, y) − C_1)² + λ_2·(g(x, y) − C_2)² ],

where t denotes the iteration time, ∇ is the differential (gradient) operator, δ_ε(·) is the regularized Dirac function (the derivative of the regularized step function given below), g(x, y) is the preprocessed remote sensing image to be detected, and the boundary contour of the comprehensive saliency map S_M obtained in steps 2-3 is taken as the initial contour of the improved Chan-Vese model;
in the formulas, μ, v, λ_1, λ_2 are constants, and C_1, C_2 are respectively the target and background fitting centers computed with the two adaptive weights; instead of the arithmetic means of the conventional Chan-Vese model, they are defined as adaptive weighted averages of the gray levels of the pixels inside and outside the evolving contour, respectively;
where H(·) denotes the ideal step (Heaviside) function appearing above; in numerical computation it is replaced by the regularized form

H_ε(z) = (1/2)·[ 1 + (2/π)·arctan(z/ε) ],

where ε is a small positive number tending to 0;
wherein the adaptive weights are adjusted adaptively through the iterations;
and obtaining an optimal contour through level set updating and boundary evolution, thereby realizing sea surface ship target segmentation.
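The following simplified sketch illustrates the step-3 evolution, initializing the level set from the salient region of S_M (thresholded at its mean, an illustrative choice) and using the parameters of the embodiment (λ_1 = λ_2 = 1, μ = 1, v = 0). It falls back to the classical arithmetic-mean fitting centers, because the adaptive-weight expressions for C_1 and C_2 are given only as equation images in the original.

```python
import numpy as np

def chan_vese_from_saliency(g, saliency, iters=200, dt=0.5, mu=1.0, v=0.0,
                            lam1=1.0, lam2=1.0, eps=1.0):
    """Level-set evolution of a Chan-Vese-type model initialized from the saliency map (step 3 sketch)."""
    g = g.astype(np.float64)
    # Initialization: positive inside the salient region, negative outside (a signed distance
    # function could equally be used, as in the description)
    phi = np.where(saliency > saliency.mean(), 1.0, -1.0)
    for _ in range(iters):
        heav = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))     # regularized step function H_eps
        delta = (eps / np.pi) / (eps ** 2 + phi ** 2)                 # regularized Dirac delta
        c1 = (g * heav).sum() / (heav.sum() + 1e-12)                  # fitting center inside the contour
        c2 = (g * (1.0 - heav)).sum() / ((1.0 - heav).sum() + 1e-12)  # fitting center outside the contour
        gy, gx = np.gradient(phi)                                     # spatial gradients of the level set
        norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
        curv = np.gradient(gy / norm)[0] + np.gradient(gx / norm)[1]  # div(grad(phi)/|grad(phi)|)
        # gradient-flow update of the level set function
        phi = phi + dt * delta * (mu * curv - v - lam1 * (g - c1) ** 2 + lam2 * (g - c2) ** 2)
    return phi > 0   # ship-target mask
```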
Step 4 comprises the following steps: according to the TBR criterion, the region of interest with the largest ratio of the number of target pixels in the sea surface ship target region to the number of background pixels in the ocean area is selected as the minimum circumscribed rectangular region of the sea surface ship target, thereby completing sea surface ship detection.
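A sketch of the step-4 selection, assuming OpenCV 4 for contour extraction; each candidate's TBR is evaluated inside its minimum circumscribed rectangle and the highest-scoring candidate is kept (tie-breaking and any TBR threshold are not specified in the description and are left out here).

```python
import cv2
import numpy as np

def detect_ship_by_tbr(mask: np.ndarray):
    """Select the candidate whose minimum circumscribed rectangle has the largest TBR (step 4 sketch)."""
    mask_u8 = mask.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask_u8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best_rect, best_tbr = None, -1.0
    for cnt in contours:
        rect = cv2.minAreaRect(cnt)                      # (center, (width, height), angle)
        box = cv2.boxPoints(rect).astype(np.int32)
        roi = np.zeros_like(mask_u8)
        cv2.fillPoly(roi, [box], 255)                    # pixels covered by the rectangle
        target = int(np.count_nonzero(mask_u8[roi > 0])) # ship (target) pixels inside the rectangle
        background = int(np.count_nonzero(roi)) - target # sea (background) pixels inside the rectangle
        tbr = target / (background + 1e-9)               # target-to-background pixel ratio
        if tbr > best_tbr:
            best_tbr, best_rect = tbr, rect
    return best_rect, best_tbr
```

best_rect encodes the center, size and orientation of the minimum circumscribed rectangle, i.e. the length, width and orientation of the detected ship.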
Beneficial effects: compared with the prior art, the remote sensing image sea surface ship detection method based on saliency and the improved Chan-Vese model has the following advantages:
(1) in view of the large data volume, noise interference and unsatisfactory readability of remote sensing images, the image is preprocessed by combining graying, median filtering and linear enhancement, providing high-quality data for subsequent sea surface ship detection;
(2) the visual saliency mechanism, inspired by the biological visual system, has a certain ability to capture targets of interest from complex backgrounds; introducing it into remote sensing image sea surface ship detection helps highlight salient ship targets in ocean scenes and further improves the detection effect. The GBVS model can effectively highlight attended positions in an image, and is particularly advantageous for remote sensing images with complex backgrounds and clear target structures. In view of this, a simplified GBVS model is adopted to describe the salient regions in the preprocessed remote sensing image, and a comprehensive saliency map is generated after fusion;
(3) the saliency analysis result is used as the input of the weighted Chan-Vese model, which overcomes the problems of unknown initial conditions and long convergence time of the conventional Chan-Vese model and improves the speed and degree of automation of the method. Meanwhile, an adaptive-weight weighted-average strategy is constructed to replace the conventional arithmetic-mean calculation of each pixel's contribution to the fitting centers, fully accounting for their differences and making the segmentation result more accurate;
(4) according to the TBR criterion, the region with the largest ratio of ship-target pixels to ocean-background pixels is selected as the minimum circumscribed rectangular region, thereby realizing sea surface ship detection in the remote sensing image.
Drawings
The foregoing and/or other advantages of the invention will become further apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a remote sensing image to be detected of the present invention;
FIG. 3 is a graying result of the remote sensing image to be detected according to the present invention;
FIG. 4 is a filtering and denoising result of the remote sensing image to be detected according to the invention;
FIG. 5 is a linear enhancement result of the remote sensing image to be detected according to the present invention;
FIG. 6 is a result of significant analysis of a remote sensing image to be detected according to the present invention;
FIG. 7 is a remote sensing image ship segmentation result of the present invention;
FIG. 8 is a ship detection result of a remote sensing image of the present invention;
FIG. 9 shows the superposition effect of the remote sensing image ship detection result and the original image.
Detailed Description
The invention provides a remote sensing image sea surface ship detection method based on saliency and an improved Chan-Vese model. The method first preprocesses the remote sensing image to be detected, filtering image noise and improving the visual effect of the image; then performs multi-feature saliency analysis on the preprocessed image with a simplified GBVS model to obtain a fused saliency map; then uses the saliency analysis result as the initial condition of the improved Chan-Vese model to complete sea surface ship segmentation of the remote sensing image; finally, the length and width of the ship target are extracted by the minimum circumscribed rectangle method, and according to the TBR (Target-to-Background Ratio) criterion the region of interest with the largest ratio of target pixels to background pixels is selected as the minimum circumscribed rectangular region, thereby realizing sea surface ship detection.
The invention is further elucidated with reference to the drawings and the detailed description.
The flow diagram for implementing the invention is shown in fig. 1, and the method comprises the following specific implementation steps:
step 1: and carrying out preprocessing operation on the remote sensing image to be detected.
(1) Image graying. In view of the large data volume of the remote sensing image and the fact that the color components contribute little to ship detection, the remote sensing image to be detected (FIG. 2) is reduced in dimension by the image graying operation, removing redundant information and reducing the running time of the subsequent detection algorithm; the grayed image is shown in FIG. 3.
(2) Image filtering and denoising. To reduce the noise introduced during sensor imaging and data transmission, as well as the interference caused by imaging conditions and climate, the grayed image is denoised with the median filter, a robust nonlinear filter that suppresses noise while preserving image detail; the denoised image is shown in FIG. 4.
(3) The image is linearly enhanced. In view of the fact that the gray level of the remote sensing image is usually concentrated in a certain gray level interval, the contrast is not satisfactory, and the subsequent detection is not facilitated. In order to solve the problem, the following formula is used to perform linear stretching processing on the denoised image, the gray level range of the image is redistributed, the readability of the image is improved, and the enhanced image is shown in fig. 5.
g_e = 255 · (g − g_min) / (g_max − g_min)

where g_e is the enhanced image gray level, g is the denoised image gray level, and g_max and g_min are respectively the maximum and minimum gray levels of the denoised image.
Step 2: the GBVS model is used to describe salient regions in the pre-processed image. The method mainly comprises the operations of multi-feature extraction, sub-saliency map generation, saliency map fusion and the like of the image.
(1) Multi-feature extraction. Since the image to be detected has been grayed in step 1 and no longer has a color dimension, the GBVS model is simplified and only the luminance and orientation sub-feature maps are extracted. To improve computational efficiency, the image to be detected is down-sampled; image filtering with a Gaussian-pyramid low-pass filter yields 1 group of multi-scale luminance feature maps, and image filtering with a Gabor pyramid filter bank yields 4 groups of multi-scale orientation feature maps at azimuth angles 0°, 45°, 90° and 135°.
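A feature-extraction sketch for the simplified GBVS model, assuming OpenCV's pyrDown for the Gaussian pyramid and a Gabor filter bank for the four orientation channels; the kernel size and Gabor parameters are illustrative values, not taken from the description.

```python
import cv2
import numpy as np

def extract_features(gray: np.ndarray, num_scales: int = 2):
    """Multi-scale luminance and orientation (0/45/90/135 degree) feature maps (step 2 sketch)."""
    luminance = []
    orientation = {theta: [] for theta in (0, 45, 90, 135)}
    level = gray.astype(np.float32)
    for _ in range(num_scales):
        level = cv2.pyrDown(level)                    # Gaussian-pyramid low-pass filtering + downsampling
        luminance.append(level.copy())                # luminance feature map at this scale
        for theta in orientation:
            # Gabor kernel: 9x9 size, sigma 2.0, wavelength 5.0, aspect ratio 0.5 (illustrative values)
            kern = cv2.getGaborKernel((9, 9), 2.0, np.deg2rad(theta), 5.0, 0.5)
            orientation[theta].append(np.abs(cv2.filter2D(level, cv2.CV_32F, kern)))
    return luminance, orientation
```

With num_scales = 2 this matches the K = 2 setting used in this embodiment.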
(2) Sub-saliency map generation. The GBVS model first constructs a Markov chain of the image to be detected and generates the sub-saliency maps by solving the equilibrium distribution of the Markov chain; the chain structure simulates the working principle of the biological visual neural network. Specifically:
suppose the image feature map is MFNow, feature map MFAny one pixel point is regarded as a node, and the node is connected with an adjacent node to form a directed graph GA. Node MF(i, j) and node MFThe degree of difference between (p, q) is defined as:
Figure BDA0002539225620000081
the edge setting weight value connecting the nodes (i, j) and (p, q) is shown as the following formula.
w1((i,j),(p,q))=d((i,j),(p,q))·F(i-p,j-q)
Wherein F (i-p, j-q) ═ exp [ - (i-p)2-(j-q)2)/2σ2]And sigma is an adjustable parameter.
Since the weights of a pair of opposite edges are equal, the weights of the edges leaving the same node can be normalized so that each node is regarded as a state and each edge weight as a transition probability; a Markov chain is thus defined on the directed graph G_A. The equilibrium distribution of the Markov chain reflects the fraction of time an infinitely long random walk spends at each node; since nodes with high dissimilarity from their neighbors are passed through with higher probability, their equilibrium-distribution values are naturally higher, so the equilibrium-distribution state map is taken as the saliency map M_A corresponding to the feature map.
In the process of normalizing the saliency map, a second Markov chain G_N is constructed, and the weight of the edge connecting the two nodes (i, j) and (p, q) is set to:

w_2((i, j), (p, q)) = M_A(p, q) · F(i − p, j − q)

The steady-state distribution of G_N is then computed, yielding the normalized saliency map M_N.
(3) Saliency-map fusion. The generated sub-saliency maps are fused according to the following strategy to obtain the luminance saliency map S_I and the orientation saliency map S_O respectively:

S_I = ⊕_{k=1..K} M_N^{I,k},
S_O = ⊕_{θ} ⊕_{k=1..K} M_N^{O,k,θ},

where M_N^{I,k} denotes the luminance sub-saliency map at scale k, M_N^{O,k,θ} denotes the orientation sub-saliency map at scale k and azimuth angle θ, K denotes the total number of scales (set to 2 in this example), and ⊕ denotes cross-scale addition (the maps are interpolated to the same size before addition).
The obtained luminance saliency map S_I and orientation saliency map S_O are then fused a second time according to the fusion strategy to obtain the comprehensive saliency map S_M of the remote sensing image to be detected, as shown in FIG. 6.
Step 3: Since the conventional Chan-Vese model computes the fitting centers as arithmetic means and does not consider the differing contributions of individual pixels to the fitting centers, which leads to inaccurate segmentation results, a weighted Chan-Vese model is adopted and two adaptive weights are introduced into the computation of the fitting centers to improve the segmentation effect. The comprehensive saliency map S_M obtained in step 2 is used as the input of the weighted Chan-Vese model; the optimal contour is obtained through level-set updating and boundary evolution by solving the improved partial differential equation

∂φ/∂t = δ_ε(φ)·[ μ·div(∇φ/|∇φ|) − v − λ_1·(g(x, y) − C_1)² + λ_2·(g(x, y) − C_2)² ],

so that sea surface ship target segmentation is realized; the segmentation result is shown in FIG. 7.
In the formula, μ, v, λ_1, λ_2 are constants, here set to λ_1 = λ_2 = 1, μ = 1, v = 0; C_1, C_2 are respectively the target and background fitting centers computed with the two adaptive weights, defined as adaptive weighted averages of the gray levels of the pixels inside and outside the evolving contour.
The adaptive weights are adjusted adaptively through the iterations.
Step 4: In order to better represent the length, width and orientation of the sea surface ship target to be detected, according to the TBR criterion the region of interest with the largest ratio of target pixels in the ship target region to background pixels in the ocean area is selected as the minimum circumscribed rectangular region of the sea surface ship target, thereby completing sea surface ship detection (as shown in FIG. 8); FIG. 9 shows the detection result superimposed on the original image.
The invention provides a remote sensing image sea surface ship detection method; there are many specific ways to implement this technical solution, and the above description is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be implemented with the prior art.

Claims (7)

1. A remote sensing image sea surface ship detection method is characterized by comprising the following steps:
step 1, preprocessing a remote sensing image to be detected;
step 2, simplifying the graph-based visual saliency (GBVS) model, performing saliency analysis on the preprocessed remote sensing image with the simplified GBVS model, and fusing the results to generate a comprehensive saliency map;
step 3, taking the saliency analysis result as the initial iteration condition of the Chan-Vese model to obtain the sea surface ship segmentation result of the remote sensing image;
and 4, finally realizing sea surface ship detection according to the TBR criterion.
2. The method of claim 1, wherein step 1 comprises:
step 1-1, graying an image: reducing the dimension of the remote sensing image to be detected by adopting image graying operation, and eliminating redundant information in the image;
step 1-2, image filtering and denoising: denoising the grayed image by adopting median filtering;
step 1-3, image linear enhancement: performing linear stretching processing on the denoised image by using the following formula, and redistributing the gray level range of the image:
g_e = 255 · (g − g_min) / (g_max − g_min)

where g_e is the image gray level after linear enhancement, g is the image gray level after denoising, and g_max and g_min are respectively the maximum and minimum gray levels of the denoised image.
3. The method of claim 2, wherein step 2 comprises:
step 2-1, simplifying a visual saliency GBVS model of the graph, only extracting a sub-image feature graph of brightness and direction, performing down-sampling on an image to be detected, performing image filtering by using a Gaussian pyramid low-pass filter to obtain 1 group of multi-scale brightness feature graphs, and performing image filtering by using a Gabor pyramid filter group to obtain 4 groups of multi-scale direction feature graphs with 0 degree, 45 degrees, 90 degrees and 135 degrees of azimuth angles;
step 2-2, generating sub-saliency maps: the simplified GBVS model first constructs a Markov chain of the image to be detected, and the sub-saliency maps are generated by solving the equilibrium distribution of the Markov chain;
and 2-3, fusing the generated sub-saliency maps to generate a comprehensive saliency map.
4. The method of claim 3, wherein step 2-2 comprises: let the feature map be M_F; each pixel of the feature map M_F is regarded as a node, and each node is connected to its neighboring nodes to form a directed graph G_A; the dissimilarity d((i, j), (p, q)) between node M_F(i, j) and node M_F(p, q) is defined as:

d((i, j), (p, q)) = | log( M_F(i, j) / M_F(p, q) ) |

where i, j respectively denote the abscissa and ordinate of node M_F(i, j), and p, q respectively denote the abscissa and ordinate of node M_F(p, q);
the weight w_1((i, j), (p, q)) of the edge connecting nodes (i, j) and (p, q) is set as:

w_1((i, j), (p, q)) = d((i, j), (p, q)) · F(i − p, j − q),

where F(i − p, j − q) = exp[ −((i − p)² + (j − q)²) / (2σ²) ] and σ is an adjustable parameter;
since the weights of a pair of opposite edges are equal, the weights of the edges leaving the same node are normalized, each node is regarded as a state and each edge weight as a transition probability, so that a Markov chain is defined on the directed graph G_A, and the equilibrium-distribution state map is taken as the saliency map M_A corresponding to the feature map M_F;
in the process of normalizing the saliency map, a second Markov chain G_N is constructed, and the weight w_2((i, j), (p, q)) of the edge connecting the two nodes (i, j) and (p, q) is set to:

w_2((i, j), (p, q)) = M_A(p, q) · F(i − p, j − q),

the steady-state distribution of G_N is then computed, yielding the normalized saliency map M_N.
5. The method of claim 4, wherein steps 2-3 comprise:
fusing the generated sub-saliency maps according to the following formulas to obtain the luminance saliency map S_I and the orientation saliency map S_O respectively:

S_I = ⊕_{k=1..K} M_N^{I,k},
S_O = ⊕_{θ} ⊕_{k=1..K} M_N^{O,k,θ},

where M_N^{I,k} denotes the luminance sub-saliency map at scale k, M_N^{O,k,θ} denotes the orientation sub-saliency map at scale k and azimuth angle θ, K denotes the total number of scales, and ⊕ denotes cross-scale addition;
the obtained brightness saliency map S is subjected toIAnd direction saliency map SOAnd performing secondary fusion to obtain a comprehensive saliency map S of the remote sensing image to be detectedM
Figure FDA0002539225610000032
6. The method of claim 5, wherein step 3 comprises: adopting a weighted Chan-Vese model in which two adaptive weights are introduced into the computation of the fitting centers, taking the comprehensive saliency map S_M as the input of the weighted Chan-Vese model, and evolving the improved partial differential equation so as to minimize the energy functional:

E(C_1, C_2, φ) = μ·∫∫ δ(φ)|∇φ| dx dy + v·∫∫ H(φ) dx dy + λ_1·∫∫ |g(x, y) − C_1|² H(φ) dx dy + λ_2·∫∫ |g(x, y) − C_2|² (1 − H(φ)) dx dy,
where φ(x, y) denotes the level set function; the signed distance function is selected as the level set function, which then becomes

φ(x, y) = d if (x, y) lies inside the zero level set, 0 if (x, y) lies on it, and −d if (x, y) lies outside it,

where d is the signed distance from the point (x, y) in the high-dimensional space to the zero level set;
the evolution equation of the zero level set function is:

∂φ/∂t = δ_ε(φ)·[ μ·div(∇φ/|∇φ|) − v − λ_1·(g(x, y) − C_1)² + λ_2·(g(x, y) − C_2)² ],

where t denotes the iteration time, ∇ is the differential (gradient) operator, δ_ε(·) is the regularized Dirac function, g(x, y) is the preprocessed remote sensing image to be detected, and the boundary contour of the comprehensive saliency map S_M obtained in steps 2-3 is taken as the initial contour of the improved Chan-Vese model;
μ, v, λ_1, λ_2 are constants, and C_1, C_2 are respectively the target and background fitting centers computed with the two adaptive weights, defined as adaptive weighted averages of the gray levels of the pixels inside and outside the evolving contour;
where H(·) denotes the ideal step (Heaviside) function, which participates in the numerical computation through the regularized form

H_ε(z) = (1/2)·[ 1 + (2/π)·arctan(z/ε) ],

where ε is a small positive number tending to 0;
the adaptive weights are adjusted adaptively through the iterations;
and obtaining an optimal contour through level set updating and boundary evolution, thereby realizing sea surface ship target segmentation.
7. The method of claim 6, wherein step 4 comprises: according to the TBR criterion, selecting the region of interest with the largest ratio of the number of target pixels in the sea surface ship target region to the number of background pixels in the ocean area as the minimum circumscribed rectangular region of the sea surface ship target, thereby completing sea surface ship detection.
CN202010542021.1A 2020-06-15 2020-06-15 Remote sensing image sea surface ship detection method Pending CN111695524A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010542021.1A CN111695524A (en) 2020-06-15 2020-06-15 Remote sensing image sea surface ship detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010542021.1A CN111695524A (en) 2020-06-15 2020-06-15 Remote sensing image sea surface ship detection method

Publications (1)

Publication Number Publication Date
CN111695524A true CN111695524A (en) 2020-09-22

Family

ID=72480938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010542021.1A Pending CN111695524A (en) 2020-06-15 2020-06-15 Remote sensing image sea surface ship detection method

Country Status (1)

Country Link
CN (1) CN111695524A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109427055A (en) * 2017-09-04 2019-03-05 Remote sensing image sea surface ship detection method based on visual attention mechanism and information entropy

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109427055A (en) * 2017-09-04 2019-03-05 Remote sensing image sea surface ship detection method based on visual attention mechanism and information entropy

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
叶秋果 et al., "Ship detection in high-resolution remote sensing imagery based on visual saliency," Hydrographic Surveying and Charting *
吴诗婳, "Research on preprocessing and analysis methods for remote sensing images," China Master's Theses Full-text Database, Information Science and Technology series *

Similar Documents

Publication Publication Date Title
CN107145874B (en) Ship target detection and identification method in complex background SAR image
CN111681197B (en) Remote sensing image unsupervised change detection method based on Siamese network structure
CN108121991B (en) Deep learning ship target detection method based on edge candidate region extraction
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN111209952A (en) Underwater target detection method based on improved SSD and transfer learning
CN108399625B (en) SAR image orientation generation method based on depth convolution generation countermeasure network
CN111753682B (en) Hoisting area dynamic monitoring method based on target detection algorithm
US9569699B2 (en) System and method for synthesizing portrait sketch from a photo
CN110569782A (en) Target detection method based on deep learning
CN110084302B (en) Crack detection method based on remote sensing image
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
CN110991547A (en) Image significance detection method based on multi-feature optimal fusion
WO2018000252A1 (en) Oceanic background modelling and restraining method and system for high-resolution remote sensing oceanic image
CN113111878B (en) Infrared weak and small target detection method under complex background
CN113052872B (en) Underwater moving object tracking method based on sonar image
CN110633727A (en) Deep neural network ship target fine-grained identification method based on selective search
CN113822352A (en) Infrared dim target detection method based on multi-feature fusion
CN114764801A (en) Weak and small ship target fusion detection method and device based on multi-vision significant features
CN111640138A (en) Target tracking method, device, equipment and storage medium
CN115439497A (en) Infrared image ship target rapid identification method based on improved HOU model
CN102509308A (en) Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection
CN113807237A (en) Training of in vivo detection model, in vivo detection method, computer device, and medium
CN113421210A (en) Surface point cloud reconstruction method based on binocular stereo vision
CN113627481A (en) Multi-model combined unmanned aerial vehicle garbage classification method for smart gardens
CN117079097A (en) Sea surface target identification method based on visual saliency

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: No.1 Lingshan South Road, Qixia District, Nanjing, Jiangsu Province, 210000

Applicant after: THE 28TH RESEARCH INSTITUTE OF CHINA ELECTRONICS TECHNOLOGY Group Corp.

Address before: No. 1 Muxuyuan East Street, Nanjing, Jiangsu, 210007

Applicant before: THE 28TH RESEARCH INSTITUTE OF CHINA ELECTRONICS TECHNOLOGY Group Corp.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200922