CN114155428A - Underwater sonar side-scan image small target detection method based on Yolo-v3 algorithm - Google Patents

Underwater sonar side-scan image small target detection method based on Yolo-v3 algorithm

Info

Publication number
CN114155428A
Authority
CN
China
Prior art keywords
scan image
yolo
underwater
target
underwater sonar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111417536.XA
Other languages
Chinese (zh)
Inventor
韩志
王艳美
余思泉
唐延东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN202111417536.XA
Publication of CN114155428A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention relates to a method for detecting small targets in underwater sonar side-scan images based on the YoLo-v3 algorithm. Small targets in existing underwater sonar side-scan images are labeled, the strengths of the YoLo-v3 algorithm in target detection tasks are exploited, and the network structure is adapted to the requirements of target detection in underwater sonar side-scan images, so that target detection in such images is achieved. Experimental results verify the effectiveness of the method for target detection in underwater sonar side-scan images.

Description

Underwater sonar side-scan image small target detection method based on Yolo-v3 algorithm
Technical Field
The invention relates to a target detection method, in particular to an underwater sonar side-scan image small target detection method based on a YoLo-v3 algorithm.
Background
In recent years, underwater robots such as Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs) have been commonly used for underwater object detection. For close-range target recognition, a vision sensor is typically employed to acquire high-quality images. In underwater observation, however, the visibility of captured optical images is poor because of scattering by suspended particles in highly turbid water. It is therefore urgent to design a cooperative underwater target detection system for wide-ranging underwater detection tasks. Such a system detects underwater targets using information-rich sonar images together with optical images, and has been widely applied to ocean monitoring in recent years. However, manually analyzing the large amount of underwater sonar image data generated every day is a cumbersome and time-consuming task, so an automatic target detection and recognition system is of great value in reducing time-consuming and expensive manual effort.
Disclosure of Invention
To address the shortcomings of the prior art, the invention aims to provide a method for detecting small targets in underwater sonar side-scan images based on the YoLo-v3 algorithm.
The technical scheme adopted by the invention is as follows:
the underwater sonar side-scan image small target detection method based on the YoLo-v3 algorithm comprises the following steps:
S1, collecting in advance underwater sonar side-scan images that contain targets, labeling the targets to be detected in the sonar images, and establishing a set of sonar side-scan images with target labels;
S2, training on the underwater sonar side-scan images based on the YoLo-v3 method, and establishing a target detection network that combines the complementary strengths of optical and acoustic sensing for accurate underwater acoustic target detection;
S3, acquiring in real time an underwater sonar side-scan image containing a target, feeding it into the YoLo-v3 network structure for recognition and detection, and obtaining the prediction box and the coordinates of the center of the underwater target (a rough code sketch of these three steps follows).
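The following Python sketch only illustrates how steps S1 to S3 fit together. The detector object with its train and detect methods, the "one .txt label file per .png image" layout, and the box format are assumptions made for illustration, not part of the patented method.

```python
# Illustrative sketch of steps S1-S3; the detector interface and file layout are assumed.
from pathlib import Path

def load_labelled_sonar_set(root):
    """S1: collect side-scan sonar images and their rectangular target labels."""
    samples = []
    for img_path in sorted(Path(root).glob("*.png")):
        samples.append((img_path, img_path.with_suffix(".txt")))  # image + label pair
    return samples

def run_pipeline(detector, train_root, live_image):
    train_set = load_labelled_sonar_set(train_root)   # S1: labeled image set
    detector.train(train_set)                         # S2: fit the YoLo-v3 style network
    boxes = detector.detect(live_image)               # S3: boxes for a real-time image
    # assume each box is (x_min, y_min, x_max, y_max, score); report the centers
    centers = [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2, _ in boxes]
    return boxes, centers
```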
The labeling is a coarse (fuzzy) labeling of the targets in the underwater sonar side-scan image with rectangular boxes.
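A common way to store such rectangular labels (an assumption here; the patent does not fix a file format) is one normalized box per line in the YOLO text convention, as in the sketch below.

```python
def to_yolo_label(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a rectangular (fuzzy) box to the YOLO text format:
    'class x_center y_center width height', all normalized to [0, 1]."""
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# e.g. a 60x40-pixel target box in a 416x416 sonar image:
# print(to_yolo_label(0, 100, 200, 160, 240, 416, 416))
```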
The training on underwater sonar side-scan images based on the YoLo-v3 method is as follows: the parameters of the Darknet-53 network inside the YoLo-v3 network are trained with the fuzzy-labeled underwater sonar side-scan image data set, and the Darknet-53 parameters are adjusted by back-propagating the loss function.
The combined loss function is E = E1 + E2, where E1 is the cross-entropy loss between the ground-truth detection box and the predicted target box and E2 is the coordinate loss; the update iterations of the network parameters stop when the combined loss E meets the threshold requirement.
The training of the Darknet-53 parameters in the YoLo-v3 algorithm with the fuzzy-labeled underwater sonar side-scan image data set is as follows: the underwater sonar side-scan image is divided into an S×S grid, B bounding boxes are predicted in each grid cell, and the position center of an object is detected by computing a score for each bounding box according to

$C_{i,j} = P_{i,j}(\mathrm{object}) \cdot \mathrm{IoU}_{\mathrm{pred}}^{\mathrm{truth}}$

where $C_{i,j}$ is the score of the j-th bounding box in the i-th grid cell, $P_{i,j}(\mathrm{object})$ is the probability that the detected object lies in the j-th bounding box of the i-th grid cell, and $\mathrm{IoU}_{\mathrm{pred}}^{\mathrm{truth}}$ is the intersection-over-union between the prediction box and the ground-truth box of the object, i.e. the ratio of their intersection to their union.
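For illustration, a minimal Python sketch of the score $C_{i,j}$ defined above, with boxes assumed to be given as (x_min, y_min, x_max, y_max):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def box_score(p_object, pred_box, truth_box):
    """Score C_ij = P_ij(object) * IoU(pred, truth) for one bounding box."""
    return p_object * iou(pred_box, truth_box)
```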
After the underwater sonar side-scan image is segmented with the S×S grid, S×S grid cells are obtained together with the ground-truth box associated with each grid cell.
The cross entropy between the ground-truth detection box and the predicted target box is computed and used as a loss function that is back-propagated to update the network parameters. Written in the standard weighted binary cross-entropy form, this loss is

$E_1 = -\sum_{i=1}^{S^2} \sum_{j=1}^{B} W_{i,j} \left[ \hat{C}_{i,j} \log C_{i,j} + (1 - \hat{C}_{i,j}) \log\left(1 - C_{i,j}\right) \right]$

where $E_1$ is the first loss function used for parameter updating, $W_{i,j}$ is a weight, and $\hat{C}_{i,j}$ is the score of the j-th ground-truth box in the i-th grid cell.
The network parameters are also updated through a coordinate loss function, given by

$E_2 = \sum_{i=1}^{S^2} \sum_{j=1}^{B} \sum_{t \in \{x,\, y,\, w,\, h\}} \left( \sigma(t_{i,j}) - \hat{t}_{i,j} \right)^2$

where $E_2$ is the coordinate loss, $\sigma(\cdot)$ is the function applied to the four coordinates $t_x$, $t_y$, $t_w$, $t_h$ of the j-th prediction box in the i-th grid cell, and $\hat{t}_{i,j}$ are the coordinates of the corresponding j-th ground-truth detection box in the i-th grid cell.
The coordinates of the target center are obtained as follows: when the update iterations stop, the center of the j-th bounding box in the i-th grid cell is output as the center of the current underwater acoustic target, and the prediction box of the underwater target is output as the four coordinates of that bounding box.
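As an illustration of how the output box and its center can be obtained from the predicted coordinates, the sketch below uses the standard YoLo-v3 decoding with grid offsets and anchor sizes; the patent itself only states that the box center and its four coordinates are output, so the decoding details here are assumptions.

```python
import math

def decode_box(tx, ty, tw, th, cell_x, cell_y, anchor_w, anchor_h, stride):
    """Standard YoLo-v3 style decoding (assumed here): sigmoid plus grid offset
    for the center, exponential times anchor size for width and height."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    cx = (cell_x + sigmoid(tx)) * stride      # center x in image pixels
    cy = (cell_y + sigmoid(ty)) * stride      # center y in image pixels
    w = anchor_w * math.exp(tw)
    h = anchor_h * math.exp(th)
    return (cx, cy), (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```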
The invention has the following beneficial effects and advantages:
(1) The method establishes a large-scale database of real side-scan sonar images. The database contains 7000 samples captured in a real environment.
(2) The invention provides an underwater target detection system based on side-scan sonar images and the YoLo-v3 network. The system combines the complementary advantages of optical and acoustic underwater target detection and can detect underwater acoustic targets more accurately.
(3) A large number of experiments were carried out with the method in a real underwater environment, and its performance is stable and effective.
Drawings
FIG. 1 is a flow chart of the YoLo-v3 algorithm for underwater target detection;
FIG. 2 is a diagram of the YoLo-v3 algorithm network architecture;
FIG. 3 is an example of the underwater sonar image target detection data set, in which (a) is a cylindrical target, (b) a tubular target, (c) a quadrangular prism target, and (d) a quadrangular pyramid target;
FIG. 4 is a visualization of detection results, in which (a) is the localization result for a cylindrical target, (b) for a tubular target, (c) for a quadrangular prism target, and (d) for a quadrangular pyramid target.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied by figures are described in detail below. In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, it covers all modifications within the spirit and scope of the invention as defined by the appended claims.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The method requires a data set of underwater sonar side-scan images to be established. Sonar side-scan images are collected in a real environment; because the fields of view of a side-scan sonar are usually located on both sides of the carrier, the data set correspondingly contains multiple pairs of left and right sonar images. Establishing this data set provides a reliable basis and support for subsequent related research.
FIG. 1 shows a flow chart of the method of the present invention. The invention provides an underwater target detection system based on side-scan sonar images and the YoLo-v3 network; the system combines the complementary advantages of optical and acoustic underwater target detection and can detect underwater acoustic targets more accurately. The target detection method for underwater sonar side-scan images based on the YoLo-v3 algorithm comprises the following steps: collecting underwater sonar side-scan images that contain targets, labeling the targets to be detected in the sonar images, and establishing a set of sonar side-scan images with target labels; then establishing a target detection system based on the YoLo-v3 method combined with the underwater sonar side-scan images, which realizes the complementary advantages of optical and acoustic underwater acoustic target detection and detects underwater acoustic targets more accurately.
1. The YoLo-v3 algorithm is adopted for underwater target detection. The YoLo-v3 algorithm is well suited to engineering research because of its flexibility: it uses Darknet-53 as the backbone network, which improves detection accuracy, and the backbone can be replaced by tiny-Darknet to obtain a lighter network model. The parameters of the Darknet-53 network in the YoLo-v3 algorithm are trained with the fuzzy-labeled underwater sonar side-scan image data set. The YoLo-v3 algorithm is therefore adopted for the underwater target detection task.
Two important loss functions are adopted in the underwater target detection task.
The parameters of the Darknet-53 network are adjusted by back-propagating the combined loss function E = E1 + E2, where E1 is the cross-entropy loss between the ground-truth detection box and the predicted target box and E2 is the coordinate loss; the update iterations of the network parameters stop when the combined loss E meets the threshold requirement.
The first is the target score. The YoLo-v3 algorithm divides the image into an S × S grid, where each grid cell may correspond to the center of a predicted target. B bounding boxes are predicted in each grid cell, and the position of the object is detected from the score computed for each bounding box:

$C_{i,j} = P_{i,j}(\mathrm{object}) \cdot \mathrm{IoU}_{\mathrm{pred}}^{\mathrm{truth}}$

where $C_{i,j}$ is the score of the j-th bounding box in the i-th grid cell, $P_{i,j}(\mathrm{object})$ is the probability that the detected object lies in the j-th bounding box of the i-th grid cell, and $\mathrm{IoU}_{\mathrm{pred}}^{\mathrm{truth}}$ is the intersection-over-union between the prediction box and the ground-truth box of the object. The cross entropy between the ground-truth detection box and the predicted target box is then computed and used as the loss function for updating the parameters; in the standard weighted binary cross-entropy form it reads

$E_1 = -\sum_{i=1}^{S^2} \sum_{j=1}^{B} W_{i,j} \left[ \hat{C}_{i,j} \log C_{i,j} + (1 - \hat{C}_{i,j}) \log\left(1 - C_{i,j}\right) \right]$

where $E_1$ is the first loss function used for parameter updating, $W_{i,j}$ is a weight, and $\hat{C}_{i,j}$ is the score of the corresponding ground-truth box.
The other important loss function is the coordinate loss, which can be written as

$E_2 = \sum_{i=1}^{S^2} \sum_{j=1}^{B} \sum_{t \in \{x,\, y,\, w,\, h\}} \left( \sigma(t_{i,j}) - \hat{t}_{i,j} \right)^2$

where $E_2$ is the coordinate loss, $\sigma(\cdot)$ is the function applied to the four predicted coordinates $t_x$, $t_y$, $t_w$, $t_h$, whose squared difference from the coordinates $\hat{t}_{i,j}$ of the corresponding ground-truth detection box is accumulated.
The multi-scale nature of YoLo-v3 is another important reason for choosing it to detect underwater targets, since the size of an underwater target varies with the observation depth. The YoLo-v3 method predicts bounding boxes at three scales, corresponding to three different receptive fields; when the input image size is 224 × 224, the specific correspondence of the three scales is shown in Table 1.
TABLE 1. Bounding boxes of YOLO-v3 (the table itself is given as a figure in the original document).
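Because Table 1 is provided only as a figure, the sketch below merely illustrates how three detection scales with different receptive fields are usually handled; the strides (8, 16, 32) are illustrative assumptions, not the values of Table 1.

```python
# Illustrative only: three detection scales as in YoLo-v3, with assumed strides.
SCALES = {
    "small_receptive_field":  {"stride": 8},    # finest grid, smallest targets
    "medium_receptive_field": {"stride": 16},
    "large_receptive_field":  {"stride": 32},   # coarsest grid, largest targets
}

def grid_size(input_size, stride):
    """Grid resolution S at one scale for a square input image."""
    return input_size // stride

# For a 224x224 input the three grids would be 28x28, 14x14 and 7x7.
if __name__ == "__main__":
    for name, cfg in SCALES.items():
        print(name, grid_size(224, cfg["stride"]))
```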
The core algorithm of the cooperative underwater target detection system detects underwater targets from side-scan sonar images. Considering the characteristics of side-scan sonar images, such as high resolution, high noise and salient targets, a method for detecting underwater targets with the YoLo-v3 network is proposed; the flow chart of YoLo-v3 for underwater target detection is shown in FIG. 1, and its network structure in FIG. 2.
As shown in FIG. 2, Conv is a convolutional layer and Conv_S is a convolutional layer with stride S. Res_N is a stack of N repeated residual components; for example, Res_2 indicates two residual components after the convolution operation. DBL is a convolution block that includes batch normalization and a Leaky ReLU activation function, DBL_S is a convolution operation with stride S, and DBL_m denotes m repeated DBL blocks. Up denotes upsampling of a feature map, and Concat denotes concatenation (fusion) of different feature maps.
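A compact PyTorch sketch of the building blocks named in FIG. 2 (DBL as convolution + batch normalization + Leaky ReLU, Res_N as N repeated residual components); this is a generic reconstruction of those blocks, not the patent's exact layer configuration.

```python
import torch
import torch.nn as nn

class DBL(nn.Module):
    """Conv + BatchNorm + LeakyReLU block (the 'DBL' unit in FIG. 2)."""
    def __init__(self, in_ch, out_ch, kernel=3, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel, stride, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1, inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class Residual(nn.Module):
    """One residual component: 1x1 DBL then 3x3 DBL with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(DBL(ch, ch // 2, kernel=1), DBL(ch // 2, ch))
    def forward(self, x):
        return x + self.body(x)

def res_n(ch, n):
    """Res_N: N repeated residual components."""
    return nn.Sequential(*[Residual(ch) for _ in range(n)])
```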
The YoLo-v3 network used here consists of 24 convolutional layers and 2 fully connected layers. The input to the network is a side-scan sonar image with a resolution of 416×416×3. The output dimension is 4S × S × [B × (4+1) + C], where S is the number of image blocks, B is the number of rectangular candidate boxes corresponding to each image block, and C is the total number of detected classes. The loss function of the network is the same as that described above.
2. All side-scan sonar images used in the experiments were acquired in a real environment. Since the fields of view of a side-scan sonar are usually located on both sides of the carrier, the data set correspondingly contains multiple pairs of left and right sonar images. There are five types of objects in the database; some examples are shown in FIG. 3.
3. To verify the effectiveness of the proposed underwater target detection system, the method follows the YoLo series of algorithms and adopts the mean average precision (mAP) as the evaluation criterion. The mAP is calculated as

$\mathrm{mAP} = \frac{1}{N} \sum_{n=1}^{N} \mathrm{AP}_n$

where N is the number of detected classes; in this task N = 4, so the mAP is the average of the APs of the four targets. The AP is computed from precision and recall at different thresholds; depending on the task, the confidence thresholds are set to 0.5, 0.55 and 0.6. Precision is calculated as P = TP/(TP + FP), the proportion of true positives among all images identified as positive, and recall as R = TP/(TP + FN), the proportion of true positives among all images of the class.
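A minimal Python sketch of these metrics; the per-class AP values passed to the mAP helper are assumed to be computed elsewhere from the precision-recall pairs at the stated thresholds.

```python
def precision(tp, fp):
    """P = TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

def recall(tp, fn):
    """R = TP / (TP + FN)."""
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

def mean_average_precision(per_class_ap):
    """mAP = (1/N) * sum of the per-class APs (N = 4 classes in this task)."""
    return sum(per_class_ap) / len(per_class_ap)

# usage: mean_average_precision([ap_cylinder, ap_tube, ap_prism, ap_pyramid])
```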
4. To verify the effectiveness of the proposed underwater object detection method based on the YoLo-v3 network, we tested it on underwater data collected in a real environment. In this experiment, 6000 training samples and 1000 test samples were used; visualizations of some of the test results are shown in FIG. 4.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. Underwater sonar side-scan image small target detection method based on YoLo-v3 algorithm is characterized by comprising the following steps:
S1, collecting in advance underwater sonar side-scan images that contain targets, labeling the targets to be detected in the sonar images, and establishing a set of sonar side-scan images with target labels;
S2, training on the underwater sonar side-scan images based on the YoLo-v3 method, and establishing a target detection network that combines the complementary strengths of optical and acoustic sensing for accurate underwater acoustic target detection;
S3, acquiring in real time an underwater sonar side-scan image containing a target, feeding it into the YoLo-v3 network structure for recognition and detection, and obtaining the prediction box and the coordinates of the center of the underwater target.
2. The method for detecting the small target in the underwater sonar side-scan image based on the YoLo-v3 algorithm according to claim 1, wherein the labeling is fuzzy labeling of the target in the underwater sonar side-scan image by a rectangular frame.
3. The method for detecting small targets in underwater sonar side-scan images based on the YoLo-v3 algorithm according to claim 1, wherein the training based on the YoLo-v3 method combined with underwater sonar side-scan images comprises: training the parameters of the Darknet-53 network in the YoLo-v3 network with the fuzzy-labeled underwater sonar side-scan image data set, and adjusting the parameters of the Darknet-53 network by back-propagating the loss function.
4. The underwater sonar side-scan image small-target detection method based on the YoLo-v3 algorithm according to claim 1, wherein the combined loss function is E = E1 + E2, where E1 is the cross-entropy loss between the ground-truth detection box and the predicted target box and E2 is the coordinate loss; and the update iterations of the network parameters stop when the combined loss function E meets the threshold requirement.
5. The method for detecting small targets in underwater sonar side-scan images based on the YoLo-v3 algorithm according to claim 3 or 4, wherein training the parameters of the Darknet-53 network in the YoLo-v3 algorithm with the fuzzy-labeled underwater sonar side-scan image data set comprises: dividing the underwater sonar side-scan image into an S×S grid, predicting B bounding boxes in each grid cell, and detecting the position center of an object by computing the score of each bounding box according to

$C_{i,j} = P_{i,j}(\mathrm{object}) \cdot \mathrm{IoU}_{\mathrm{pred}}^{\mathrm{truth}}$

where $C_{i,j}$ is the score of the j-th bounding box in the i-th grid cell, $P_{i,j}(\mathrm{object})$ is the probability that the detected object lies in the j-th bounding box of the i-th grid cell, and $\mathrm{IoU}_{\mathrm{pred}}^{\mathrm{truth}}$ is the intersection-over-union between the prediction box and the ground-truth box of the object, i.e. the ratio of their intersection to their union.
6. The method for detecting small targets in underwater sonar side-scan images based on the YoLo-v3 algorithm according to claim 5, wherein the underwater sonar side-scan image is segmented with the S×S grid to obtain S×S grid cells and the ground-truth box of each grid cell.
7. The method for detecting small targets in underwater sonar side-scan images based on the YoLo-v3 algorithm according to claim 5, wherein the cross entropy between the ground-truth detection box and the predicted target box is computed and used as a loss function that is back-propagated to update the network parameters, the loss function being

$E_1 = -\sum_{i=1}^{S^2} \sum_{j=1}^{B} W_{i,j} \left[ \hat{C}_{i,j} \log C_{i,j} + (1 - \hat{C}_{i,j}) \log\left(1 - C_{i,j}\right) \right]$

where $E_1$ is the first loss function used for parameter updating, $W_{i,j}$ is a weight, and $\hat{C}_{i,j}$ is the score of the j-th ground-truth box in the i-th grid cell.
8. The method for detecting small targets in underwater sonar side-scan images based on the YoLo-v3 algorithm according to claim 3 or 4, wherein the network parameters are updated through a coordinate loss function given by

$E_2 = \sum_{i=1}^{S^2} \sum_{j=1}^{B} \sum_{t \in \{x,\, y,\, w,\, h\}} \left( \sigma(t_{i,j}) - \hat{t}_{i,j} \right)^2$

where $E_2$ is the coordinate loss, $\sigma(\cdot)$ is the function applied to the four coordinates $t_x$, $t_y$, $t_w$, $t_h$ of the j-th prediction box in the i-th grid cell, and $\hat{t}_{i,j}$ are the coordinates of the corresponding j-th ground-truth detection box in the i-th grid cell.
9. The underwater sonar side-scan image small target detection method based on the YoLo-v3 algorithm according to claim 1, wherein the coordinates of the target center are obtained as follows: when the update iterations stop, the center of the j-th bounding box in the i-th grid cell is output as the center of the current underwater acoustic target, and the prediction box of the underwater target is output as the four coordinates of that bounding box.
CN202111417536.XA 2021-11-26 2021-11-26 Underwater sonar side-scan image small target detection method based on Yolo-v3 algorithm Pending CN114155428A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111417536.XA CN114155428A (en) 2021-11-26 2021-11-26 Underwater sonar side-scan image small target detection method based on Yolo-v3 algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111417536.XA CN114155428A (en) 2021-11-26 2021-11-26 Underwater sonar side-scan image small target detection method based on Yolo-v3 algorithm

Publications (1)

Publication Number Publication Date
CN114155428A true CN114155428A (en) 2022-03-08

Family

ID=80458004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111417536.XA Pending CN114155428A (en) 2021-11-26 2021-11-26 Underwater sonar side-scan image small target detection method based on Yolo-v3 algorithm

Country Status (1)

Country Link
CN (1) CN114155428A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117075092A (en) * 2023-09-05 2023-11-17 海底鹰深海科技股份有限公司 Underwater sonar side-scan image small target detection method based on forest algorithm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020177432A1 (en) * 2019-03-07 2020-09-10 中国科学院自动化研究所 Multi-tag object detection method and system based on target detection network, and apparatuses
WO2020206861A1 (en) * 2019-04-08 2020-10-15 江西理工大学 Yolo v3-based detection method for key object at transportation junction
CN112288008A (en) * 2020-10-29 2021-01-29 四川九洲电器集团有限责任公司 Mosaic multispectral image disguised target detection method based on deep learning
CN113486764A (en) * 2021-06-30 2021-10-08 中南大学 Pothole detection method based on improved YOLOv3
CN113591717A (en) * 2021-07-31 2021-11-02 浙江工业大学 Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020177432A1 (en) * 2019-03-07 2020-09-10 中国科学院自动化研究所 Multi-tag object detection method and system based on target detection network, and apparatuses
WO2020206861A1 (en) * 2019-04-08 2020-10-15 江西理工大学 Yolo v3-based detection method for key object at transportation junction
CN112288008A (en) * 2020-10-29 2021-01-29 四川九洲电器集团有限责任公司 Mosaic multispectral image disguised target detection method based on deep learning
CN113486764A (en) * 2021-06-30 2021-10-08 中南大学 Pothole detection method based on improved YOLOv3
CN113591717A (en) * 2021-07-31 2021-11-02 浙江工业大学 Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
于淼: "基于深度学习的侧扫声呐图像目标检测方法" [Target detection method for side-scan sonar images based on deep learning], 中国优秀硕士学位论文全文数据库 工程科技II辑 [China Master's Theses Full-text Database, Engineering Science and Technology II], 15 May 2021 (2021-05-15), pages 17-28 *
武玉伟 et al.: 《深度学习基础与应用》 [Fundamentals and Applications of Deep Learning], 30 November 2020, pages 245-249 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117075092A (en) * 2023-09-05 2023-11-17 海底鹰深海科技股份有限公司 Underwater sonar side-scan image small target detection method based on forest algorithm

Similar Documents

Publication Publication Date Title
CN111222574B (en) Ship and civil ship target detection and classification method based on multi-model decision-level fusion
CN113436169B (en) Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation
CN113469177A (en) Drainage pipeline defect detection method and system based on deep learning
CN112130132A (en) Underground pipeline detection method and system based on ground penetrating radar and deep learning
CN114266977B (en) Multi-AUV underwater target identification method based on super-resolution selectable network
CN112052817A (en) Improved YOLOv3 model side-scan sonar sunken ship target automatic identification method based on transfer learning
CN111598098A (en) Water gauge water line detection and effectiveness identification method based on full convolution neural network
CN116485709A (en) Bridge concrete crack detection method based on YOLOv5 improved algorithm
CN111860106A (en) Unsupervised bridge crack identification method
CN111783616B (en) Nondestructive testing method based on data-driven self-learning
CN112465057A (en) Target detection and identification method based on deep convolutional neural network
CN113379737A (en) Intelligent pipeline defect detection method based on image processing and deep learning and application
CN114241332A (en) Deep learning-based solid waste field identification method and device and storage medium
Wang et al. Underwater Object Detection based on YOLO-v3 network
CN112580542A (en) Steel bar counting method based on target detection
CN114155428A (en) Underwater sonar side-scan image small target detection method based on Yolo-v3 algorithm
CN114926400A (en) Fan blade defect detection method based on improved YOLOv5
CN110533650A (en) A kind of AUV submarine pipeline detecting and tracking method of view-based access control model
CN114066795A (en) DF-SAS high-low frequency sonar image fine registration fusion method
CN114067103A (en) Intelligent pipeline third party damage identification method based on YOLOv3
Chen et al. Deep learning based underground sewer defect classification using a modified RegNet
CN114882375A (en) Intelligent identification method and device for tailing pond
CN113724233A (en) Transformer equipment appearance image defect detection method based on fusion data generation and transfer learning technology
Han et al. Damage detection of quayside crane structure based on improved faster R-CNN
CN112485329A (en) Method, device and system for detecting sewage draining outlet based on combination of thermal imaging and ultrasound

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination