CN117538881B - Sonar water imaging beam forming method, system, equipment and medium - Google Patents


Info

Publication number
CN117538881B
CN117538881B (application CN202410032299.2A)
Authority
CN
China
Prior art keywords
point cloud
area
dimensional
determining
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410032299.2A
Other languages
Chinese (zh)
Other versions
CN117538881A (en)
Inventor
金丽玲
孙锋
范勇刚
王砚梅
何春良
王源
李永恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haiying Deep Sea Technology Co ltd
Original Assignee
Haiying Deep Sea Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haiying Deep Sea Technology Co ltd
Priority to CN202410032299.2A
Publication of CN117538881A
Application granted
Publication of CN117538881B
Legal status: Active

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/89Sonar systems specially adapted for specific applications for mapping or imaging
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/539Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention provides a sonar water body imaging beamforming method, system, equipment, and medium in the technical field of sonar. A sonar system transmits acoustic signals to a target area through a transmitter, receives the reflected signals through a receiver, and determines the multi-beam water body image corresponding to the reflected signals. The multi-beam water body image is divided into a mixed area and a background area, and the region of interest where the target object is located is determined through a target detection algorithm combining view conversion and image interpolation. The pixel points of the target object in the region of interest are mapped into a three-dimensional coordinate system, yielding the three-dimensional point cloud set corresponding to the target object; the point cloud set is clustered to determine its clustering center, the edge contour and point cloud connection points corresponding to the point cloud set are determined in combination with an edge detection algorithm, and the target object is reconstructed in three dimensions from the clustering center, the edge contour, and the point cloud connection points.

Description

Sonar water imaging beam forming method, system, equipment and medium
Technical Field
The invention relates to sonar technology, and in particular to a sonar water body imaging beamforming method, system, equipment, and medium.
Background
The multi-beam system is an efficient seafloor detection tool: a single transmit-receive cycle yields water depth values at many seafloor points, from which seafloor topography and geomorphology can be depicted. Besides water depth data, the multi-beam system can also obtain intensity data of the seafloor points through its actively transmitted acoustic signals, and the seafloor acoustic image drawn from the intensity data is of great significance for revealing the composition of the seafloor substrate.
The new generation of multi-beam sonar systems can also record water body data. This data is rich in information, and several different imaging modes can be formed by combining and projecting the multi-beam water body sampling points, so that targets in the water body can be observed from different angles. Compared with depth and intensity data, multi-beam water body image data is larger in volume and richer in information, and can remedy the tendency of multi-beam sonar to detect targets incompletely and miss details.
However, the water body data synchronously collected by a multi-beam sonar system is hundreds or even a thousand times the size of the water depth data, and this huge data volume makes water body data processing inconvenient. In addition, the multi-beam water body image is an acoustic imaging mode: the multiple acoustic beams formed by the equipment at the same moment influence one another to some extent while underwater acoustic signals are transmitted and received, so the factors affecting imaging are complex.
Disclosure of Invention
The embodiments of the invention provide a sonar water body imaging beamforming method, system, equipment, and medium, which can at least solve some of the problems in the prior art.
In a first aspect of an embodiment of the present invention,
The sonar water body imaging beam forming method comprises the following steps:
The sonar system transmits sound wave signals to a target area through a transmitter, receives reflected signals through a receiver, and performs signal decomposition and signal conversion on the reflected signals to determine multi-beam water body images corresponding to the reflected signals;
Dividing the multi-beam water body image into a mixed area and a background area, carrying out target object detection on the multi-beam water body image through a target detection algorithm combining view conversion and image interpolation, in combination with the local energy function of each pixel point in the target area contained in the mixed area, and determining the region of interest where the target object is located;
Mapping pixel points of a target object in the region of interest to a three-dimensional coordinate system of a three-dimensional space, determining a three-dimensional point cloud set corresponding to the target object, carrying out clustering processing on the three-dimensional point cloud set, determining a clustering center of the three-dimensional point cloud set, determining an edge contour and a point cloud connection point corresponding to the three-dimensional point cloud set by combining an edge detection algorithm, and carrying out three-dimensional reconstruction on the target object according to the clustering center, the edge contour and the point cloud connection point.
In an alternative embodiment of the present invention,
Dividing the multi-beam water body image into a mixed area and a background area, and carrying out target object detection on the multi-beam water body image by combining a local energy function of each pixel point in a target area contained in the mixed area through a target detection algorithm combining view conversion and image interpolation, wherein the target detection algorithm comprises the following steps:
Converting the multi-beam water body image into a multi-beam water body gray level image, and dividing the multi-beam water body gray level image into a mixed area and a background area by threshold segmentation through maximum inter-class variance based on gray level values of each pixel point in the multi-beam water body gray level image, wherein the mixed area comprises a target area and a sidelobe noise area;
Sampling the mixed region in sequence, determining the backscatter intensity value of each sampling point, and, according to the mean and standard deviation of the angle sequence of the mixed region and the sidelobe characteristic parameters of the mixed region, and with a noise intensity compression factor introduced, filtering out the sidelobe noise of the mixed region to obtain the target region;
randomly selecting any pixel point in the target area as a seed pixel point, determining a neighbor area of the seed pixel point, determining a local energy function of each pixel point in the target area according to the pixel intensity of each pixel point in the neighbor area and the approximate contour intensity corresponding to the seed pixel point, introducing a length term coefficient, and taking an area formed by the pixel points with the minimized local energy functions as an area where the target object is located.
In an alternative embodiment of the present invention,
According to the pixel intensity of each pixel point in the neighbor region and the approximation contour intensity corresponding to the seed pixel point, and introducing a length term coefficient, determining the local energy function of each pixel point in the target region comprises:
ε_x(C, f_1(x), f_2(x)) = Σ_{i=1}^{2} λ_i ∫_{Ω_i} K_σ(x − y) |I(y) − f_i(x)|² dy
Wherein ε_x(C, f_1(x), f_2(x)) represents the local energy function value, f_1(x) represents the pixel intensity of pixel point x, f_2(x) represents the approximated contour intensity, λ_i represents the length term coefficient, Ω_i represents the neighbor region, K_σ(x − y) is a Gaussian kernel function representing the weight of the distance |x − y|, I(y) represents the original image intensity at pixel y, and f_i(x) represents the local feature of each pixel point x in the target region.
In an alternative embodiment of the present invention,
Mapping the pixel points of the target object in the region of interest to a three-dimensional coordinate system of a three-dimensional space, determining a three-dimensional point cloud set corresponding to the target object, and clustering the three-dimensional point cloud set, wherein determining a clustering center of the three-dimensional point cloud set comprises:
determining the adjacency matrix corresponding to any two vertexes in the three-dimensional point cloud set through the initial connectivity of the two vertexes;
Determining the K-order adjacency connectivity and K-order self-connectivity corresponding to any two vertexes based on the adjacency matrix, and obtaining the clustering centers through a comparison formula between the two connectivities, wherein:
C_K(v_i, v_j) denotes the K-order adjacency connectivity of vertexes v_i and v_j, and C_K(v_i, v_i) denotes the K-order self-connectivity of vertex v_i; for each K, a clustering center can be obtained by searching for the points satisfying the comparison formula;
and distributing each vertex in the three-dimensional point cloud set to a corresponding clustering center to form a clustering cluster.
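As a sketch of the steps above, the K-order connectivities can be read off powers of the adjacency matrix; the center-selection inequality below is an assumption, since the patent's comparison formula is reproduced only as an image:

```python
import numpy as np

def k_order_connectivity(adj, k):
    """(i, j) entry of A^k counts length-k walks between vertices i and j:
    the off-diagonal entries give the K-order adjacency connectivity and
    the diagonal gives each vertex's K-order self-connectivity."""
    return np.linalg.matrix_power(adj, k)

def pick_centers(adj, k):
    """Illustrative center rule (an assumption): keep vertices whose
    K-order self-connectivity strictly exceeds their K-order adjacency
    connectivity to every other vertex."""
    ak = k_order_connectivity(adj, k)
    return [i for i in range(len(adj))
            if ak[i, i] > np.delete(ak[i], i).max()]

# A 5-vertex star graph: at order 2 the hub's self-connectivity (its
# degree, 4) dominates, so only the hub qualifies as a cluster center.
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0]])
centers = pick_centers(A, 2)
```

The remaining vertexes would then be assigned to their nearest center to form the clusters.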
In an alternative embodiment of the present invention,
Determining the edge contour and the point cloud connection point corresponding to the three-dimensional point cloud set by combining an edge detection algorithm, and performing three-dimensional reconstruction on the target object according to the clustering center, the edge contour and the point cloud connection point comprises:
Determining the coordinate relative relation between the sonar system and the target object according to initial coordinate information when the sonar system transmits sound wave signals to the target area at a plurality of moments and fixed coordinate information of the target object, and extracting an edge contour corresponding to the three-dimensional point cloud set by combining an edge detection algorithm;
For any point in the edge contour, sliding selection is carried out within the point's corresponding neighborhood through a sliding window, and the point with the maximum point cloud density in the edge contour is taken as the point cloud connection point;
Judging the positional relation between the point cloud connection point and the edge contour according to the distance between the point cloud connection point and the clustering center: if the distance is smaller than a preset distance threshold, the point cloud connection point and the edge contour are determined to be in the same region, the point cloud connection point is connected to the edge contour, and the iteration is repeated until the three-dimensional reconstruction of the target object is completed.
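A minimal sketch of the sliding-window density selection and the distance check against the clustering center; the window radius, demo points, and thresholds below are illustrative assumptions:

```python
import numpy as np

def connection_point(edge_pts, radius=1.5):
    """Slide a window of the given radius over each edge point and return
    the point whose window contains the most edge points, i.e. the local
    point-cloud density maximum."""
    pts = np.asarray(edge_pts, dtype=float)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    density = (dists <= radius).sum(axis=1)
    return pts[density.argmax()], int(density.max())

def same_region(conn_pt, center, dist_threshold):
    """The point is connected to the contour only when it lies within the
    preset distance threshold of the cluster center."""
    return np.linalg.norm(np.asarray(conn_pt) - np.asarray(center)) < dist_threshold

# Four clustered edge points and one far outlier: the densest point wins.
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
       (0.4, 0.4, 0.0), (10.0, 10.0, 10.0)]
p, dens = connection_point(pts, radius=0.9)
```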
In a second aspect of an embodiment of the present invention,
Provided is a sonar water body imaging beam forming system, comprising:
The first unit is used for transmitting sound wave signals to a target area through a transmitter by the sonar system, receiving reflected signals through a receiver, and carrying out signal decomposition and signal conversion on the reflected signals to determine multi-beam water body images corresponding to the reflected signals;
The second unit is used for dividing the multi-beam water body image into a mixed area and a background area, carrying out target object detection on the multi-beam water body image through a target detection algorithm combining view conversion and image interpolation, in combination with the local energy function of each pixel point in the target area contained in the mixed area, and determining the region of interest where the target object is located;
And the third unit is used for mapping the pixel points of the target object in the region of interest to a three-dimensional coordinate system of a three-dimensional space, determining a three-dimensional point cloud set corresponding to the target object, carrying out clustering processing on the three-dimensional point cloud set, determining a clustering center of the three-dimensional point cloud set, determining an edge contour and a point cloud connecting point corresponding to the three-dimensional point cloud set by combining an edge detection algorithm, and carrying out three-dimensional reconstruction on the target object according to the clustering center, the edge contour and the point cloud connecting point.
In a third aspect of an embodiment of the present invention,
Provided is a sonar water body imaging beam forming device, comprising:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
In this embodiment, the multi-beam water body gray-scale map is divided into a mixed region and a background region by the maximum inter-class variance, providing a basis for subsequent processing. The mixed region is then sampled, and sidelobe noise is effectively filtered out according to the mean and standard deviation of the angle sequence, the sidelobe characteristic parameters, the noise intensity compression factor, and other factors, so that the target region is extracted and the influence of image noise on subsequent processing is reduced. The region where the target object is located is determined by minimizing the local energy functions, so the target region is extracted accurately and other possible interference is eliminated. In combination, the target region in the multi-beam water body image can be reliably extracted, the background and noise are effectively removed, and high-quality input data is provided for subsequent image analysis and processing.
Drawings
FIG. 1 is a schematic flow chart of a sonar water body imaging beam forming method according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of a sonar water body imaging beam forming system according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 1 is a schematic flow chart of a sonar water body imaging beam forming method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
S101, a sonar system transmits sound wave signals to a target area through a transmitter, receives reflected signals through a receiver, and performs signal decomposition and signal conversion on the reflected signals to determine multi-beam water body images corresponding to the reflected signals;
Illustratively, the emitter of the sonar system is used to emit acoustic signals to the target area. The emitted sound waves should have sufficient frequency and energy to propagate in the body of water and reflect off of the target object. The acoustic signals reflected from the target area are received by a receiver of the sonar system. These reflected signals contain information that interacts with different target objects in the body of water. Signal decomposition of the received reflected signal involves decomposing the received signal into different frequency components for better analysis of various targets in the body of water.
Wherein each of the decomposed signals is converted into a multi-beam form. Multi-beam format refers to decomposing received signals in different directions to obtain location information of a target in a body of water, which involves beamforming the signals, i.e. weighting and combining the signals in different directions. And generating a multi-beam water body image by combining information in different directions by using the converted signals.
Optionally, the received signal is pre-processed, including filtering, denoising, and so on, to ensure that subsequent processing steps can analyze the signal more accurately. The preprocessed signal is subjected to time-frequency analysis, which may use the Fourier transform or other techniques, to help understand the frequency components in the signal and their changes over time. The time-frequency-analyzed signal is decomposed into a plurality of sub-beams, each corresponding to the propagation of sound waves in a different direction. A beamforming algorithm, such as a beamforming filter, is applied to each sub-beam; the goal of beamforming is to boost the target signal in each direction and suppress spurious signals from the other directions. All sub-beams are then stacked into one multi-beam image:
each sub-beam is weighted and time delay synthesis is performed on each sub-beam in consideration of the time delay of the target in different directions. This is to ensure that beams from different directions are properly aligned during synthesis. And performing phase calibration to ensure that the phases of all the beams are consistent, and combining the weighted sub-beam stacks after time delay synthesis and phase calibration into a multi-beam image.
Specifically, a Fourier transform may be used to decompose the reflected signal into different frequency components, and spectrum information is obtained from the resulting frequency-domain data; the spectrum shows the components of the signal at different frequencies. A convolution-integral method is selected as the beamforming algorithm: a convolution filter is designed according to the acoustic propagation characteristics of the water body and the system parameters, the designed filter is applied to the frequency information, and the signals in each direction are beamformed. The beamforming results are weighted and superimposed, taking into account the intensity distribution of targets in different directions. The generated multi-beam water body image is then optimized, for example by denoising and contrast enhancement, and displayed visually so that the user can analyze the water body structure and target distribution.
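The decomposition, weighting, delay synthesis, and summation of S101 can be sketched as a narrowband delay-and-sum beamformer for a uniform line array; the array geometry, sampling rate, and frequency-domain phase-shift steering below are assumptions standing in for the convolution-filter formulation:

```python
import numpy as np

def delay_and_sum(element_signals, fs, spacing, c, angle_deg):
    """Steer a uniform line array: phase-shift each hydrophone channel in
    the frequency domain to align arrivals from the steering angle, then
    sum the aligned channels and normalize."""
    n_elem, n_samp = element_signals.shape
    freqs = np.fft.rfftfreq(n_samp, 1.0 / fs)
    spectra = np.fft.rfft(element_signals, axis=1)
    delays = np.arange(n_elem) * spacing * np.sin(np.radians(angle_deg)) / c
    phase = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft((spectra * phase).sum(axis=0), n=n_samp) / n_elem

# Simulate a 5 kHz plane wave hitting an 8-element array from 20 degrees.
fs, c, d, f0 = 50_000.0, 1500.0, 0.015, 5000.0
t = np.arange(1024) / fs
taus = np.arange(8) * d * np.sin(np.radians(20.0)) / c
sig = np.stack([np.sin(2 * np.pi * f0 * (t - tau)) for tau in taus])

on = delay_and_sum(sig, fs, d, c, 20.0)    # steered at the source
off = delay_and_sum(sig, fs, d, c, -40.0)  # steered well away
```

Steering at the true arrival angle coherently aligns the channels, so the beam output power is highest there; repeating the sum over a fan of angles gives one row of the multi-beam water body image.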
S102, dividing the multi-beam water body image into a mixed area and a background area, and determining the region of interest where the target object is located by combining the local energy function of each pixel point in the target area contained in the mixed area through a target detection algorithm combining view conversion and image interpolation;
in an alternative embodiment of the present invention,
Dividing the multi-beam water body image into a mixed area and a background area, and carrying out target object detection on the multi-beam water body image by combining the local energy function of each pixel point in the target area contained in the mixed area through a target detection algorithm combining view conversion and image interpolation, wherein determining the region of interest where the target object is located comprises the following steps:
Converting the multi-beam water body image into a multi-beam water body gray level image, and dividing the multi-beam water body gray level image into a mixed area and a background area by threshold segmentation through maximum inter-class variance based on gray level values of each pixel point in the multi-beam water body gray level image, wherein the mixed area comprises a target area and a sidelobe noise area;
Sampling the mixed region in sequence, determining the backscatter intensity value of each sampling point, and, according to the mean and standard deviation of the angle sequence of the mixed region and the sidelobe characteristic parameters of the mixed region, and with a noise intensity compression factor introduced, filtering out the sidelobe noise of the mixed region to obtain the target region;
randomly selecting any pixel point in the target area as a seed pixel point, determining a neighbor area of the seed pixel point, determining a local energy function of each pixel point in the target area according to the pixel intensity of each pixel point in the neighbor area and the approximate contour intensity corresponding to the seed pixel point, introducing a length term coefficient, and taking an area formed by the pixel points with the minimized local energy functions as an area where the target object is located.
Illustratively, converting the multi-beam water image to a gray scale image may be accomplished by averaging three channels of the color image or employing some particular graying algorithm; and calculating a gray level histogram of the multi-beam water gray level map, wherein the gray level histogram represents the distribution condition of different gray levels in the image. Determining a threshold value by using a maximum inter-class variance method (Otsu's method) or other threshold selection algorithm, and dividing the multi-beam water gray scale map into a mixed region and a background region, wherein the maximum inter-class variance method aims at selecting the threshold value so as to maximize the variance between the two divided classes;
And according to the selected threshold value, carrying out threshold segmentation on the multi-beam water gray level map to obtain a binary image. In the binary image, the pixel value is greater than the threshold value and belongs to the mixed region, and the pixel value is less than or equal to the threshold value and belongs to the background region.
Gray values are extracted for each pixel point in the multi-beam water body image to form the multi-beam water body gray-scale map. Based on the differences between the gray values in the gray-scale map, the gray range is divided and several candidate thresholds are set; the gray histogram of the whole image is calculated, the inter-class variance is computed for each candidate threshold in combination with the histogram, and the threshold with the maximum inter-class variance is selected to segment the multi-beam water body gray-scale map into the mixed region and the background region, wherein the mixed region comprises the target region and the sidelobe noise region;
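The maximum inter-class variance (Otsu) segmentation described above can be sketched in a few lines; the synthetic bimodal image is only for illustration:

```python
import numpy as np

def otsu_threshold(gray):
    """Split a grayscale image into mixed region and background by
    choosing the threshold that maximizes the between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0       # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal demo image: dark background (40) and bright mixed region (200).
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
mixed = img > t  # pixels above the threshold belong to the mixed region
```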
Illustratively, the backscatter intensity value measures the reflected intensity of an object or region against sound waves and is typically used to describe the brightness or reflective properties of the object. Angle-sequence mean and standard deviation of the mixed region: the angle sequence is the set of angle values of each sampling point in the mixed region relative to the direction of the sonar system; the mean and standard deviation reflect the average angular position and angular spread of the targets or objects in the mixed region. Sidelobe characteristic parameters of the mixed region: these include the sidelobe intensity, sidelobe width, and so on of the reflected signals in the mixed region, and describe the characteristics of stray signals or sidelobe noise there. Noise intensity compression factor (Noise Intensity Compression Factor): used to compress the noise intensity in the mixed region so as to improve the recognition of the target signal; this factor adjusts the effect of noise on target detection.
Sampling the mixed region yields a series of sampling points, and the backscatter intensity value of each sampling point is calculated according to the sonar equation and the scattering characteristics of the target, as shown in the following formula:
BSI_i = 10 · log10(P_i / P_0)
Where BSI_i represents the backscatter intensity value of the i-th sampling point, P_i represents the received signal power of the i-th sampling point, and P_0 represents the reference power.
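A sketch of the backscatter intensity computation; the decibel form 10·log10(P_i/P_0) and the 1e-12 W reference power used below are assumptions consistent with the symbols defined above:

```python
import math

def backscatter_intensity(p_received, p_ref=1e-12):
    """Backscatter intensity in decibels relative to a reference power:
    BSI_i = 10 * log10(P_i / P_0). The dB form and the reference value
    are assumptions; the patent reproduces its formula only as an image."""
    return 10.0 * math.log10(p_received / p_ref)

# A received power 1000x the reference gives +30 dB.
bsi = backscatter_intensity(1e-9)
```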
Wherein,
μ = (1/N) · Σ_{i=1}^{N} θ_i,  σ = √( (1/N) · Σ_{i=1}^{N} (θ_i − μ)² )
Where μ represents the mean of the angle sequence, σ represents the standard deviation of the angle sequence, N represents the number of sampling points, and θ_i represents the angle value of the i-th sampling point.
In an alternative embodiment, the backscatter intensity value of each sampling point is determined, and the sidelobe noise of the mixed region is determined according to the mean and standard deviation of the angle sequence of the mixed region and the sidelobe characteristic parameters of the mixed region, with a noise intensity compression factor C introduced to scale the sidelobe noise of each sampling point.
According to the sonar signal data of the mixed region, a Fourier transform is performed to obtain the power spectral density; secondary peaks in the power spectral density diagram are searched to locate the sidelobes, the sidelobe features are calculated through a peak detection algorithm to obtain the sidelobe characteristic parameters of the mixed region, and the sidelobe noise of the mixed region is suppressed in combination with the previously introduced noise intensity compression factor to obtain the target region.
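The sidelobe suppression step can be sketched as follows; the gating rule (keep samples within k standard deviations of the angle-sequence mean and compress the rest by the factor c) is an illustrative assumption, since the patent's formula is reproduced only as an image:

```python
import numpy as np

def filter_sidelobe(angles_deg, bsi, c=0.5, k=2.0):
    """Gate the mixed region on the angle sequence: samples within k
    standard deviations of the angle mean are kept as main-lobe returns,
    and the intensity of the rest is compressed by the factor c."""
    angles = np.asarray(angles_deg, dtype=float)
    out = np.asarray(bsi, dtype=float).copy()
    mu, sigma = angles.mean(), angles.std()
    mainlobe = np.abs(angles - mu) <= k * sigma
    out[~mainlobe] *= c  # compress suspected sidelobe intensity
    return out, mainlobe

# Nine returns near 0 degrees and one 100-degree outlier: the outlier is
# flagged as sidelobe and its intensity is halved.
out, mainlobe = filter_sidelobe([0.0] * 9 + [100.0], [10.0] * 10)
```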
A pixel point in the target region is randomly selected as a seed pixel point, and a neighbor region of the seed pixel point is defined; the neighbor region may be a circular or square region of fixed radius centered on the seed pixel point. For each pixel point in the neighbor region, the original gray value of the pixel point is taken as its pixel intensity. A local energy function of each pixel point in the target region is then obtained by calculation from the pixel intensity and the approximated contour intensity, combined with a length term coefficient introduced in advance. The pixel points that minimize the local energy function are found, and these pixel points form the region where the target object is located; each newly added pixel point is taken as a new seed pixel point, and the process is repeated until the target region no longer expands.
Illustratively, the Pixel Intensity denotes the gray value or color value of each pixel point and represents brightness or color information in the image. The approximated contour intensity (Contour Approximation Strength) denotes the intensity or features of the approximated contour corresponding to the seed pixel point, which may be the gradient of the contour, edge information, etc., and is used to quantify the shape and features of the contour. The length term coefficient (Length Term Coefficient) is used to balance the smoothness and accuracy of the target region: it controls the smoothness of the target region boundary, where larger coefficients promote smoother boundaries and smaller coefficients focus more on matching pixel intensities and contour intensities. The local energy function (Local Energy Function) measures the suitability of each pixel point in the target region and typically consists of a combination of the pixel intensity, the approximated contour intensity, and the length term coefficient.
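The seed-based growing loop described above can be sketched as follows. The intensity models f1 and f2, the length coefficient lam, and the acceptance threshold tau are hypothetical stand-ins for the quantities defined above, and a simple absolute-difference energy replaces the full local energy functional:

```python
def grow_region(img, seed, f1, f2, lam=0.1, tau=0.5):
    """Greedy region growing: absorb 4-connected neighbours whose simplified
    local energy (intensity mismatch + contour mismatch + length penalty)
    falls below the threshold tau."""
    h, w = len(img), len(img[0])
    region = {seed}
    frontier = [seed]
    while frontier:
        x, y = frontier.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < h and 0 <= ny < w and (nx, ny) not in region:
                energy = abs(img[nx][ny] - f1) + abs(img[nx][ny] - f2) + lam
                if energy < tau:
                    region.add((nx, ny))
                    frontier.append((nx, ny))
    return region
```

Seeded inside a bright 3×3 block on a dark background, the loop absorbs exactly the bright pixels and stops when no neighbour passes the energy test.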
In an alternative embodiment, determining the approximated contour intensity corresponding to the seed pixel point may include performing Canny edge detection on the seed pixel point to capture edge information. The Canny edge detection algorithm generally comprises the following steps: smoothing with a Gaussian filter; calculating the gradient at the seed pixel point to find the gradient magnitude and direction; performing non-maximum suppression on the gradient magnitude to preserve local maxima on the edge; and applying double thresholding for edge binarization.
The result obtained by Canny edge detection is subjected to contour searching; a common contour searching tool is the findContours function in OpenCV. For each found contour, the contour curve is approximated to simpler line segments or a polygon using the Douglas-Peucker algorithm or another contour approximation method, and the intensity of the approximated contour is calculated. This intensity may be the average gray value of points on the contour, the average magnitude of gradient values, etc.; an appropriate metric is selected according to the specific application. For example, the approximated contour intensity of each contour may be calculated as the average gradient magnitude of points on the contour.
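The Douglas-Peucker approximation mentioned above can be sketched in pure Python as follows (OpenCV exposes the same operation as approxPolyDP; this standalone version keeps the example self-contained):

```python
import math

def rdp(points, eps):
    """Douglas-Peucker polyline simplification: keep the farthest point from
    the chord if it exceeds eps, recurse on both halves, else keep endpoints."""
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1) or 1.0
        d = num / den  # perpendicular distance to the chord
        if d > dmax:
            dmax, idx = d, i
    if dmax > eps:
        left = rdp(points[:idx + 1], eps)
        right = rdp(points[idx:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]
```

A nearly straight noisy contour collapses to its two endpoints, while a genuine corner survives the simplification.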
In this embodiment, the multi-beam water body gray-scale map is divided into the mixed region and the background region by the maximum inter-class variance, providing a basis for subsequent processing. The mixed region is sampled, and the sidelobe noise in it is effectively filtered out according to the mean and standard deviation of the angle sequence, the sidelobe characteristic parameters, the noise intensity compression factor, and other factors, so that the target region is extracted and the influence of image noise on subsequent processing is reduced. The region where the target object is located is then determined by minimizing the local energy functions, so that the target region is accurately extracted and other possible interference is eliminated. In combination, the target region in the multi-beam water body image can be reliably extracted, the background and noise are effectively removed, and high-quality input data is provided for subsequent image analysis and processing.
In an alternative embodiment of the present invention,
According to the pixel intensity of each pixel point in the neighbor region and the approximation contour intensity corresponding to the seed pixel point, and introducing a length term coefficient, determining the local energy function of each pixel point in the target region comprises:
ε_x(C, f_1(x), f_2(x)) = Σ_{i=1}^{2} λ_i ∫_{Ω_i} K_σ(x − y) |I(y) − f_i(x)|² dy

where ε_x(C, f_1(x), f_2(x)) represents the local energy function value, f_1(x) represents the pixel intensity of pixel point x, f_2(x) represents the approximated contour intensity, λ_i represents the length term coefficient, Ω_i represents the neighbor region, K_σ(x − y) is a Gaussian kernel function representing the weight of the distance |x − y|, I(y) represents the original image intensity at pixel y, and f_i(x) represents the local feature of each pixel point x in the target region.
In this function, the pixel intensity of pixel point x is taken into account, so the brightness characteristics related to the target are preserved and the target region is more prominent in intensity. Introducing the approximated contour intensity allows the edge information of the target region to be better captured, accurately depicting the shape of the target. By adjusting the length term coefficient, the system can control the sensitivity of the local energy function to the region boundary, balancing shape fidelity against smoothness. Computing the energy over the neighbor region incorporates the information of surrounding pixels and thus better reflects the context of the target region. In conclusion, this function can accurately extract the local features of the target region on the basis of brightness, edge, shape, and other information, and thereby finally determine the target region.
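The kernel-weighted term K_σ(x − y) · I(y) in the energy above can be illustrated by computing a Gaussian-weighted local mean around a pixel. This is a sketch only; the window radius and σ are illustrative parameters, not values from the patent:

```python
import math

def local_fit(img, x, y, sigma=1.0, radius=2):
    """Gaussian-weighted local mean of image intensity around (x, y):
    an illustration of the K_sigma(x - y) * I(y) weighting in the energy."""
    num = den = 0.0
    h, w = len(img), len(img[0])
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < h and 0 <= ny < w:
                wgt = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
                num += wgt * img[nx][ny]
                den += wgt
    return num / den
```

On a constant image the weighted mean reproduces the constant exactly, which is a quick sanity check that the weights are normalized.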
S103, mapping pixel points of a target object in the region of interest to a three-dimensional coordinate system of a three-dimensional space, determining a three-dimensional point cloud set corresponding to the target object, carrying out clustering processing on the three-dimensional point cloud set, determining a clustering center of the three-dimensional point cloud set, determining an edge contour and a point cloud connection point corresponding to the three-dimensional point cloud set by combining an edge detection algorithm, and carrying out three-dimensional reconstruction on the target object according to the clustering center, the edge contour and the point cloud connection point.
In an alternative embodiment of the present invention,
Mapping the pixel points of the target object in the region of interest to a three-dimensional coordinate system of a three-dimensional space, determining a three-dimensional point cloud set corresponding to the target object, and clustering the three-dimensional point cloud set, wherein determining a clustering center of the three-dimensional point cloud set comprises:
determining an adjacent matrix corresponding to any two vertexes in the three-dimensional point cloud set through the initial connectivity of the any two vertexes;
Determining K-order adjacency connectivity and K-order self connectivity corresponding to any two vertexes based on the adjacency matrix, and obtaining a clustering center through a comparison formula, wherein the comparison formula is as follows:
(A^K)_{ii} > (A^K)_{ij} for all j ≠ i

where (A^K)_{ij} is the K-th order adjacency connectivity of vertices v_i and v_j, and (A^K)_{ii} is the K-th order self-connectivity of vertex v_i; for each K, a clustering center can be obtained by searching for the points satisfying the formula;
and distributing each vertex in the three-dimensional point cloud set to a corresponding clustering center to form a clustering cluster.
Illustratively,
A point is randomly selected as the initial cluster center and then the similarity of each point to this center is calculated. Points having a similarity to the center greater than a certain threshold are assigned to the same class of clusters. For each point allocated to the cluster, calculating the similarity between the point and the center of the cluster, and taking the point with the highest similarity between the point and the center as a new center. This step is repeated until the new cluster center is no longer changed or changes little. The similarity is determined by calculating the distance between different clusters, and if the distance between two clusters is smaller than a certain threshold value, the clusters are combined into the same cluster.
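The iterative procedure above can be sketched for a single cluster as follows. The Gaussian similarity with σ = 1 and the deterministic seed are assumptions for the sketch; re-centering picks the member most similar, on average, to the other members:

```python
import math

def cluster_points(points, sim_threshold):
    """Single-cluster sketch: seed a centre, absorb points whose similarity
    to the centre exceeds the threshold, re-centre on the most connected
    member, and repeat until the centre stops changing."""
    def sim(p, q):  # Gaussian similarity, sigma assumed to be 1
        return math.exp(-sum((a - b) ** 2 for a, b in zip(p, q)) / 2)
    center = points[0]  # deterministic seed for the sketch
    while True:
        members = [p for p in points if sim(p, center) > sim_threshold]
        new_center = max(members, key=lambda p: sum(sim(p, q) for q in members))
        if new_center == center:
            return center, members
        center = new_center
```

Three points near the origin plus one distant outlier converge to the middle point as centre, with the outlier excluded from the cluster.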
The processed sonar data still exists in the form of a point cloud set, but the number of target point clouds detected by the sonar may be affected by differences between targets in volume, shape, reflection characteristics, distance to the sonar, and the like. Meanwhile, if the distance between multiple targets is small, the point clouds may appear as one large cluster, thereby affecting the detection effect.
Wherein, the initial connectivity of any two vertices in the point cloud set can be expressed as:
W_{ij} = exp( −‖v_i − v_j‖² / (2σ²) )

where W_{ij} is the connectivity of vertex v_i and vertex v_j, v_i and v_j are two different vertices, and σ² is the variance of the Gaussian kernel;
the clustering center is obtained through a comparison formula, and the comparison formula specifically comprises:
(A^K)_{ii} > (A^K)_{ij} for all j ≠ i

where (A^K)_{ij} is the K-th order adjacency connectivity of vertices v_i and v_j, and (A^K)_{ii} is the K-th order self-connectivity of vertex v_i; for each K, a clustering center can be obtained by searching for the points satisfying the formula;
the assigning each vertex in the point cloud set to a corresponding cluster center specifically comprises:
c*(v_j) = argmax_{c_k} S(v_j, c_k)

where v_j denotes the other vertices in the point cloud set, i.e., the vertices to be assigned, S(·) is the evaluating function used to measure the correlation between a vertex to be assigned and a clustering center, and argmax is the mathematical operator that returns the argument maximizing the function value, here used to select the clustering center most relevant to the vertex.
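The K-order connectivity test can be sketched with a pure-Python matrix power over the Gaussian adjacency matrix (the example weights below are illustrative; in the two-vertex cluster both symmetric vertices satisfy the center condition, a degenerate tie):

```python
def matpow(A, K):
    """K-th power of a square matrix (pure Python, for small point sets)."""
    n = len(A)
    R = [row[:] for row in A]
    for _ in range(K - 1):
        R = [[sum(R[i][k] * A[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return R

def cluster_centers(A, K):
    """Vertices whose K-order self-connectivity exceeds their K-order
    adjacency connectivity to every other vertex."""
    P = matpow(A, K)
    n = len(A)
    return [i for i in range(n)
            if all(P[i][i] > P[i][j] for j in range(n) if j != i)]
```

With one well-connected hub (vertex 0) in a three-vertex cluster and a symmetric two-vertex cluster, the hub and both symmetric vertices pass the test while the peripheral vertices 1 and 2 do not.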
In an alternative embodiment of the present invention,
Determining the edge contour and the point cloud connection point corresponding to the three-dimensional point cloud set by combining an edge detection algorithm, and performing three-dimensional reconstruction on the target object according to the clustering center, the edge contour and the point cloud connection point comprises:
Determining the coordinate relative relation between the sonar system and the target object according to initial coordinate information when the sonar system transmits sound wave signals to the target area at a plurality of moments and fixed coordinate information of the target object, and extracting an edge contour corresponding to the three-dimensional point cloud set by combining an edge detection algorithm;
Sliding selection is carried out on any point in the edge contour in a corresponding neighborhood of the point through a sliding window, and the point with the maximum point cloud density in the edge contour is used as a point cloud connecting point;
Judging the position relation between the point cloud connection point and the edge contour according to the distance between the point cloud connection point and the clustering center, if the distance between the point cloud connection point and the clustering center is smaller than a preset distance threshold, determining that the point cloud connection point and the edge contour are in the same area, connecting the point cloud connection point and the edge contour, and repeating iteration until three-dimensional reconstruction of the target object is completed.
Illustratively, sonar data is obtained from a sonar system at a plurality of times, including transmitting acoustic signals and receiving reflected signals, and for each time, initial coordinate information of the sonar system is recorded. Determining the coordinate relative relation between the sonar system and the target object by utilizing the initial coordinate information and the fixed coordinate information of the target object; edge detection is carried out on a target object in sonar data by using an edge detection algorithm, and edge contours of a three-dimensional point cloud set are extracted, wherein the edge detection can be carried out by using gradient operators, canny operators and the like; for each point in the edge contour, a sliding window is used in its corresponding neighborhood, and the point with the greatest point cloud density is selected as the point cloud connection point, which helps to determine the key connection point on the edge contour.
For each point cloud connection point, calculating the distance between the point cloud connection point and the clustering center, and if the distance is smaller than a preset distance threshold value, identifying that the point cloud connection point and the clustering center are in the same area, and connecting the point cloud connection point and the clustering center; repeating the steps, and iterating until the three-dimensional reconstruction of the target object is completed. In each iteration, the coordinate relationship may need to be updated, edge detection may need to be performed again, a new point cloud connection point may be selected, and distance judgment and connection may be performed. And after all the steps are completed, obtaining a connected three-dimensional point cloud set, namely a three-dimensional reconstruction result of the target object.
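The density-based selection of a point cloud connection point can be sketched as follows (the fixed-radius spherical window is an illustrative stand-in for the sliding window in the text):

```python
def connection_point(contour_points, cloud, radius=1.0):
    """Pick the contour point whose neighbourhood (a window of the given
    radius) contains the most cloud points, i.e. maximum local density."""
    def density(p):
        return sum(1 for q in cloud
                   if sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2)
    return max(contour_points, key=density)
```

Given a cloud clustered near the origin, the contour point at the origin wins over a distant one.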
Fig. 2 is a schematic structural diagram of a sonar water body imaging beam forming system according to an embodiment of the present invention, as shown in fig. 2, the system includes:
The first unit is used for transmitting sound wave signals to a target area through a transmitter by the sonar system, receiving reflected signals through a receiver, and carrying out signal decomposition and signal conversion on the reflected signals to determine multi-beam water body images corresponding to the reflected signals;
The second unit is used for dividing the multi-beam water body image into a mixed area and a background area, carrying out target object detection on the multi-beam water body image by combining a local energy function of each pixel point in a target area contained in the mixed area through a target detection algorithm combining view conversion and image interpolation, and determining an interested area where the target object is located;
And the third unit is used for mapping the pixel points of the target object in the region of interest to a three-dimensional coordinate system of a three-dimensional space, determining a three-dimensional point cloud set corresponding to the target object, carrying out clustering processing on the three-dimensional point cloud set, determining a clustering center of the three-dimensional point cloud set, determining an edge contour and a point cloud connecting point corresponding to the three-dimensional point cloud set by combining an edge detection algorithm, and carrying out three-dimensional reconstruction on the target object according to the clustering center, the edge contour and the point cloud connecting point.
The present invention may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present invention.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (6)

1. The sonar water body imaging beam forming method is characterized by comprising the following steps of:
The sonar system transmits sound wave signals to a target area through a transmitter, receives reflected signals through a receiver, and performs signal decomposition and signal conversion on the reflected signals to determine multi-beam water body images corresponding to the reflected signals;
Dividing the multi-beam water body image into a mixed area and a background area, carrying out target object detection on the multi-beam water body image by combining a local energy function of each pixel point in a target area contained in the mixed area through a target detection algorithm combining view conversion and image interpolation, and determining an interested area where the target object is located;
Mapping pixel points of a target object in the region of interest to a three-dimensional coordinate system of a three-dimensional space, determining a three-dimensional point cloud set corresponding to the target object, carrying out clustering processing on the three-dimensional point cloud set, determining a clustering center of the three-dimensional point cloud set, determining an edge contour and a point cloud connection point corresponding to the three-dimensional point cloud set by combining an edge detection algorithm, and carrying out three-dimensional reconstruction on the target object according to the clustering center, the edge contour and the point cloud connection point;
Determining the edge contour and the point cloud connection point corresponding to the three-dimensional point cloud set by combining an edge detection algorithm, and performing three-dimensional reconstruction on the target object according to the clustering center, the edge contour and the point cloud connection point comprises:
Determining the coordinate relative relation between the sonar system and the target object according to initial coordinate information when the sonar system transmits sound wave signals to the target area at a plurality of moments and fixed coordinate information of the target object, and extracting an edge contour corresponding to the three-dimensional point cloud set by combining an edge detection algorithm;
Sliding selection is carried out on any point in the edge contour in a corresponding neighborhood of the point through a sliding window, and the point with the maximum point cloud density in the edge contour is used as a point cloud connecting point;
for each point cloud connection point, calculating the distance between the point cloud connection point and the clustering center, and if the distance is smaller than a preset distance threshold value, determining that the point cloud connection point and the clustering center are in the same area, and connecting the point cloud connection points and the clustering center;
Repeatedly judging whether the point cloud connection points are in the same area with the clustering center, iterating continuously, updating the coordinate relation, carrying out edge detection again in each iteration, selecting new point cloud connection points, and carrying out distance judgment and connection; and after all the steps are completed, a connected three-dimensional point cloud set is obtained, and three-dimensional reconstruction of the target object is completed.
2. The method of claim 1, wherein dividing the multi-beam water image into a mixed region and a background region, and performing target object detection on the multi-beam water image by a target detection algorithm combining view conversion and image interpolation in combination with a local energy function based on each pixel point in a target region included in the mixed region comprises:
Converting the multi-beam water body image into a multi-beam water body gray level image, and dividing the multi-beam water body gray level image into a mixed area and a background area by threshold segmentation through maximum inter-class variance based on gray level values of each pixel point in the multi-beam water body gray level image, wherein the mixed area comprises a target area and a sidelobe noise area;
Sampling the mixed region in sequence, determining a back scattering intensity value of each sampling point, and filtering out side lobe noise of the mixed region to obtain a target region according to the average value and standard deviation of an angle sequence of the mixed region and side lobe characteristic parameters of the mixed region, introducing a noise intensity compression factor;
randomly selecting any pixel point in the target area as a seed pixel point, determining a neighbor area of the seed pixel point, determining a local energy function of each pixel point in the target area according to the pixel intensity of each pixel point in the neighbor area and the approximate contour intensity corresponding to the seed pixel point, introducing a length term coefficient, and taking an area formed by the pixel points with the minimized local energy functions as an area where the target object is located.
3. The method of claim 1, wherein mapping pixels of a target object in the region of interest to a three-dimensional coordinate system of a three-dimensional space, determining a three-dimensional point cloud set corresponding to the target object, and clustering the three-dimensional point cloud set, determining a cluster center of the three-dimensional point cloud set comprises:
determining an adjacent matrix corresponding to any two vertexes in the three-dimensional point cloud set through the initial connectivity of the any two vertexes;
Determining K-order adjacency connectivity and K-order self connectivity corresponding to any two vertexes based on the adjacency matrix, and obtaining a clustering center through a comparison formula, wherein the comparison formula is as follows:
(A^K)_{ii} > (A^K)_{ij} for all j ≠ i, where (A^K)_{ij} is the K-th order adjacency connectivity of vertices v_i and v_j, and (A^K)_{ii} is the K-th order self-connectivity of vertex v_i; for each K, a clustering center can be obtained by searching for the points satisfying the formula;
and distributing each vertex in the three-dimensional point cloud set to a corresponding clustering center to form a clustering cluster.
4. A sonar water body imaging beam forming system for implementing a sonar water body imaging beam forming method of any of the preceding claims 1-3, comprising:
The first unit is used for transmitting sound wave signals to a target area through a transmitter by the sonar system, receiving reflected signals through a receiver, and carrying out signal decomposition and signal conversion on the reflected signals to determine multi-beam water body images corresponding to the reflected signals;
The second unit is used for dividing the multi-beam water body image into a mixed area and a background area, carrying out target object detection on the multi-beam water body image by combining a local energy function of each pixel point in a target area contained in the mixed area through a target detection algorithm combining view conversion and image interpolation, and determining an interested area where the target object is located;
And the third unit is used for mapping the pixel points of the target object in the region of interest to a three-dimensional coordinate system of a three-dimensional space, determining a three-dimensional point cloud set corresponding to the target object, carrying out clustering processing on the three-dimensional point cloud set, determining a clustering center of the three-dimensional point cloud set, determining an edge contour and a point cloud connecting point corresponding to the three-dimensional point cloud set by combining an edge detection algorithm, and carrying out three-dimensional reconstruction on the target object according to the clustering center, the edge contour and the point cloud connecting point.
5. A sonar water body imaging beam forming device, comprising:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any of claims 1 to 3.
6. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 3.
CN202410032299.2A 2024-01-10 2024-01-10 Sonar water imaging beam forming method, system, equipment and medium Active CN117538881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410032299.2A CN117538881B (en) 2024-01-10 2024-01-10 Sonar water imaging beam forming method, system, equipment and medium


Publications (2)

Publication Number Publication Date
CN117538881A CN117538881A (en) 2024-02-09
CN117538881B true CN117538881B (en) 2024-05-07

Family

ID=89790421


Country Status (1)

Country Link
CN (1) CN117538881B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971406A (en) * 2014-05-09 2014-08-06 青岛大学 Underwater target three-dimensional reconstruction method based on line structured light
CN105139445A (en) * 2015-08-03 2015-12-09 百度在线网络技术(北京)有限公司 Scenario reconstruction method and apparatus
CN110053743A (en) * 2019-04-27 2019-07-26 扆亮海 A kind of remote-controlled robot for accurately measuring under water
CN111144318A (en) * 2019-12-27 2020-05-12 苏州联视泰电子信息技术有限公司 Point cloud data noise reduction method for underwater sonar system
CN112529841A (en) * 2020-11-16 2021-03-19 中国海洋大学 Method and system for processing seabed gas plume in multi-beam water column data and application
CN112539886A (en) * 2020-11-16 2021-03-23 中国海洋大学 Submarine gas plume extraction method based on image processing mode multi-beam sonar water column data and application
CN112837331A (en) * 2021-03-08 2021-05-25 电子科技大学 Fuzzy three-dimensional SAR image target extraction method based on self-adaptive morphological reconstruction
CN113567968A (en) * 2021-05-25 2021-10-29 自然资源部第一海洋研究所 Underwater target real-time segmentation method based on shallow water multi-beam water depth data and application
US11182924B1 (en) * 2019-03-22 2021-11-23 Bertec Corporation System for estimating a three dimensional pose of one or more persons in a scene
CN115761550A (en) * 2022-12-20 2023-03-07 江苏优思微智能科技有限公司 Water surface target detection method based on laser radar point cloud and camera image fusion
CN116520335A (en) * 2023-06-29 2023-08-01 海底鹰深海科技股份有限公司 Multi-receiving array element synthetic aperture sonar wave number domain imaging method
CN116953674A (en) * 2023-09-21 2023-10-27 海底鹰深海科技股份有限公司 Rapid target detection algorithm in sonar imaging
CN117075092A (en) * 2023-09-05 2023-11-17 海底鹰深海科技股份有限公司 Underwater sonar side-scan image small target detection method based on forest algorithm
CN117214904A (en) * 2023-09-06 2023-12-12 北京林业大学 Intelligent fish identification monitoring method and system based on multi-sensor data
CN117372827A (en) * 2023-10-17 2024-01-09 海底鹰深海科技股份有限公司 Sonar image statistics enhancement algorithm based on boundary constraint


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Patel, R.C. Segmentation of 3D acoustic images for object recognition purposes. OCEANS'98 Conference Proceedings, 1998, full text. *
Pipeline gas leakage detection using multi-beam water-column imaging data; Zhang Zhigang; Guo Jun; Yang Jiabin; Pu Ding; ***; Applied Science and Technology; 2018-03-16 (No. 06); full text *
Three-dimensional reconstruction of targets in multi-beam water-column images based on sequential contour lines; Chen Jianbing; Journal of Ocean Technology; 2020-12-31; full text *
A survey of fuzzy-clustering-based brain magnetic resonance image segmentation algorithms; Sun Quansen; Ji Zexuan; Journal of Data Acquisition and Processing; 2016-01-15 (No. 01); full text *
Application of multi-beam sonar water-column images to target detection in mid- and bottom-layer waters; Liu Hongxia; China Doctoral Dissertations Full-text Database, Basic Sciences; 2021-06-15; Chapters 2-4 *
Multi-beam water-column target extraction and three-dimensional reconstruction; Li Donghui; China Master's Theses Full-text Database, Basic Sciences; 2021-06-15; Chapters 2-4 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant