CN112417980A - Single-stage underwater biological target detection method based on feature enhancement and refinement - Google Patents

Single-stage underwater biological target detection method based on feature enhancement and refinement

Info

Publication number
CN112417980A
CN112417980A (application number CN202011169614.4A)
Authority
CN
China
Prior art keywords
convolution
feature
training
data
underwater
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011169614.4A
Other languages
Chinese (zh)
Inventor
Baojie Fan (范保杰)
Wei Chen (陈炜)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202011169614.4A priority Critical patent/CN112417980A/en
Publication of CN112417980A publication Critical patent/CN112417980A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a single-stage underwater biological target detection method based on feature enhancement and refinement, which comprises the following steps: acquiring training data, collecting underwater biological training feature samples and constructing an underwater target detection data set; data preprocessing, namely manually labeling the data set and dividing the data into a training set and a test set; constructing a neural network comprising a feature extraction backbone network, a feature enhancement module and a feature refinement module; training the neural network by inputting the training set data into the constructed network to obtain a weight model for underwater biological target detection; and predicting the result by building a detector from the trained weight model and performing regression and classification on the test set data to obtain the detection result. The feature enhancement module improves the multi-scale contextual feature representation of the network, addressing the blurring and large scale variation of underwater organisms. The feature refinement module refines the features so that they are aligned with the anchor boxes, which alleviates the sample imbalance problem.

Description

Single-stage underwater biological target detection method based on feature enhancement and refinement
Technical Field
The invention relates to a single-stage target detection algorithm, in particular to a single-stage underwater biological target detection method based on feature enhancement and refinement.
Background
Underwater robots can perform important tasks such as seabed exploration, hull inspection and ocean survey, and related research and development has attracted wide attention. The most direct commercial value of underwater robots lies in the marine industry. Marine products such as sea cucumbers and sea urchins are currently in high demand, but manual fishing places high demands on divers, and long-term underwater operation is harmful to their health. Replacing manual fishing with underwater robots therefore has practical significance, and catching underwater creatures with a robot requires support from target detection techniques. At present, underwater target detection mainly relies on three typical sensors: sonar, laser and camera. Sonar sensors are sensitive to geometric information and can provide information about an underwater scene even in low-visibility environments; however, sonar data only reflects the distance differences between scanning points, so such sensors miss other factors such as visual features. Underwater laser scanners accurately model the propagation of light in water and can indeed provide high resolution and accuracy in the obtained 3D images, but they are very expensive and are disturbed by absorption in the water medium and by noise. In contrast, cameras provide richer visual information with high spatial and temporal resolution, and salient objects can be identified by color, texture and contour features, so vision-based underwater target detection is highly cost-effective. At the algorithm level, general-purpose target detection is well developed and can in principle be deployed directly in underwater environments. However, the underwater environment is complex and changeable and contains many interfering factors such as suspended matter, so images captured by underwater cameras often suffer from low contrast, distorted texture and uneven illumination, and general detection algorithms with weak robustness perform poorly when applied directly underwater. In addition, large-scale underwater training data sets collected in real environments are still lacking. These factors restrict the development of underwater robot technology.
Disclosure of Invention
The invention aims to improve a general target detection algorithm and provides a single-stage underwater biological target detection algorithm based on feature enhancement and refinement.
The technical scheme is as follows: the invention provides a single-stage underwater biological target detection method based on feature enhancement and refinement, which comprises the following steps of:
step (1): acquiring training data, acquiring an underwater biological training characteristic sample through an underwater camera, and constructing a required underwater target detection data set;
step (2): data preprocessing, namely, manually labeling the collected underwater data set, dividing the collected underwater data set into 4 classes, and dividing the collected data into a training set and a test set according to the proportion of 7: 3;
step (3): constructing a neural network, wherein the neural network comprises a feature extraction backbone network, a feature enhancement module and a feature refinement module; the feature extraction network is formed by compositely connecting feature extraction backbones, the feature enhancement module is formed by combining convolution branches, and the feature refinement module is integrated on the prediction branches and uses deformable convolution kernels;
step (4): training the neural network, inputting the training set data into the constructed neural network for training to obtain a weight model for detecting the underwater biological target;
step (5): predicting the result, namely constructing a detector from the trained weight model and performing regression and classification on the test set data to obtain the detection result of the test set.
The algorithm comprises the following steps:
step (1): acquiring training data, and acquiring an underwater biological training characteristic sample through an underwater camera;
step (2): data preprocessing, namely dividing the acquired sample data into a training set and a test set according to the ratio of 7: 3, and carrying out manual labeling;
step (3): constructing a neural network comprising a feature extraction backbone, a feature enhancement module and a feature refinement module;
step (4): training the neural network, inputting the training samples into the constructed neural network for training to obtain an underwater biological target detection model;
step (5): predicting the result, namely building a detector from the trained model and predicting results on the divided test set data.
further, in step (2), when the data set is labeled, the labeled data includes 4 classes: sea urchin, sea cucumber, starfish and shell, and generates a labeling file containing the file name, the target category and the coordinate (x) of the upper left corner of the target true value framemin,ymax) And the coordinates (x) of the lower right corner of the target true value framemax,ymin)。
Further, in step (3), the feature extraction backbone is formed by compositely connecting the basic feature extraction backbones VGG16 and ResNet50.
Further, the feature enhancement module in step (3) includes 3 convolution branches. The first convolution branch consists of three consecutive convolution layers with kernel sizes of 1, 5 and 3, where the layer with kernel size 3 uses a dilated (atrous) convolution with dilation rate 5. The second convolution branch consists of three consecutive convolution layers with kernel sizes of 1, 3 and 3, where the last layer with kernel size 3 uses a dilated convolution with dilation rate 3. The third convolution branch comprises a convolution with kernel size 1 and a dilated convolution with kernel size 3 and dilation rate 1. The results of the three branches are fused, the number of channels is adjusted, dropout is applied with the weight set to 0.1, the result is combined with the original features, and finally the enhanced features are output through a ReLU activation function. The feature enhancement process can be expressed as:
X_out = f( ⊕( br_{1,5,3}(X_in), br_{1,3,3}(X_in), br_{1,3}(X_in) ) + k · X_in )
where X_in denotes the input features, br_{1,5,3}, br_{1,3,3} and br_{1,3} denote the three convolution branches, ⊕ denotes feature fusion with channel adjustment, k is the dropout weight (set to 0.1), f denotes the ReLU activation function, and X_out denotes the enhanced features.
Further, the feature refinement module in step (3) is integrated on 6 prediction branches, each branch containing 6 consecutive convolution layers with deformable convolution kernels. The feature refinement module performs two passes of coarse-then-fine regression and classification on the features: the first pass is a binary classification that distinguishes target foreground from background and coarsely regresses the target position; the second pass is a multi-class classification in which the box offsets are learned through deformable convolution and the target position is finally refined.
Further, when the neural network is trained in step (4), the learning rate follows a Warm Up strategy: during the first six training epochs the learning rate is dynamically selected between 10^-6 and 4×10^-3, after which it approaches 0.002; the SGD optimizer is used in the training phase, and training lasts 250 epochs in total.
Further, in step (5), when the detector performs regression and classification on the test set data, non-maximum suppression (NMS) with an IoU threshold of 0.5 is used for post-processing to suppress redundant overlapping boxes.
In the above technical scheme:
First, a neural network is constructed. The acquired underwater biological images suffer from problems such as blurring, uneven illumination, color shift and large scale variation. A feature extraction network with a composite connection structure and a feature enhancement module are therefore proposed to improve the quality of the extracted features and strengthen the contextual relevance of the model. In addition, a feature refinement module comprising two rounds of regression and classification is proposed; it effectively alleviates the sample imbalance problem of single-stage detectors and further refines the anchor-box positions so that the model predicts more accurate results. Second, the weights obtained from training are used to build an underwater target detector for prediction on the test set. Finally, experimental results on the general target detection dataset PASCAL VOC and on the underwater dataset demonstrate the rationality and effectiveness of the method.
Beneficial effects: the algorithm is an end-to-end underwater target detection algorithm. The feature enhancement module improves the multi-scale contextual feature representation of the network, addressing the blurring and large scale variation of underwater organisms. In addition, the feature refinement module refines the features so that they are aligned with the anchor boxes, which alleviates the sample imbalance problem to a certain extent. The proposed method is not only suitable for general target detection tasks but also applies well to the underwater target detection task.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of a feature extraction network architecture of the present invention;
FIG. 3 is a block diagram of a feature enhancement module of the present invention;
fig. 4 is an overall framework diagram of the present invention.
Detailed Description
The single-stage underwater biological target detection method based on feature enhancement and refinement of the embodiment has a specific flow as shown in fig. 1, and comprises the following steps:
step (1): and acquiring a training characteristic data set, and acquiring picture data of underwater organisms offshore by using an underwater camera.
Step (2): data preprocessing. The collected underwater biological data set is manually labeled with the data labeling tool LabelImg in the VOC annotation format, with each picture corresponding to one label file. The labeled data contains four classes: sea urchin, sea cucumber, starfish and seashell. The generated label file contains the file name, the target category, the coordinates (x_min, y_max) of the upper-left corner of the target ground-truth box and the coordinates (x_max, y_min) of the lower-right corner of the target ground-truth box. After labeling is completed, the data are divided into a training set and a test set in a ratio of about 7:3.
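For reference, the following is a minimal Python sketch of how such VOC-style label files can be read and the data split 7:3. The XML element names (filename, object/name, bndbox/xmin, ...) are the LabelImg defaults, and the class strings and directory layout are illustrative assumptions rather than details taken from the patent.

```python
# Sketch: read LabelImg/VOC-style annotation files and split the data 7:3.
# Element names below are LabelImg defaults; class strings and paths are assumed.
import glob
import random
import xml.etree.ElementTree as ET

CLASSES = ["sea urchin", "sea cucumber", "starfish", "seashell"]  # assumed tag strings

def parse_annotation(xml_path):
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.find("name").text          # must match an entry in CLASSES
        bb = obj.find("bndbox")
        boxes.append({
            "label": CLASSES.index(name),
            "xmin": int(bb.find("xmin").text),
            "ymin": int(bb.find("ymin").text),
            "xmax": int(bb.find("xmax").text),
            "ymax": int(bb.find("ymax").text),
        })
    return root.find("filename").text, boxes

xml_files = sorted(glob.glob("annotations/*.xml"))  # assumed annotation directory
random.seed(0)
random.shuffle(xml_files)
split = int(0.7 * len(xml_files))                   # ~7:3 train/test split
train_files, test_files = xml_files[:split], xml_files[split:]
```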
Step (3): a neural network is constructed. The whole network has three parts: a feature extraction backbone network, a feature enhancement module and a feature refinement module. The specific methods are as follows:
Feature extraction backbone network: the feature extraction network used in the invention is formed by compositely connecting VGG16 and ResNet50. Specifically, as shown in FIG. 2, an input picture of size 300 × 300 is fed into both the main feature extraction network VGG16 and the co-feature extraction network ResNet50. In VGG16, feature output layers with output sizes of 150 × 150, 75 × 75, 38 × 38 and 19 × 19 are selected; these sizes exactly match the sizes of 4 output layers of ResNet50. Composite connections are established between output layers of the same size, i.e., feature fusion is performed, so that the fused features have stronger contextual representation ability. After fusion, the channel dimension is adjusted by a 1 × 1 convolution layer and processed by a BN layer, which accelerates model convergence and prevents overfitting. The composite connection process can be expressed as the following equation:
F_α = γ( ⊕( F_1, F_2 ) ),   F_1, F_2 ∈ Size{150 × 150, 75 × 75, 38 × 38, 19 × 19}
where F_1 and F_2 denote the feature maps selected from the main feature extraction network and the co-feature extraction network, ⊕ denotes the feature fusion process, γ denotes the 1 × 1 convolution layer used for channel sizing, and F_α denotes the feature result after composite connection.
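For illustration, a minimal PyTorch sketch of one composite connection is given below, assuming the paired VGG16/ResNet50 feature maps have already been extracted at a matching spatial size; fusing by channel concatenation and the example channel counts are assumptions, since the patent only specifies feature fusion followed by a 1 × 1 convolution and BN.

```python
# Sketch: one composite connection between same-size VGG16 and ResNet50 feature maps.
# Fusion by channel concatenation is an assumption; gamma is the 1x1 conv for channel sizing.
import torch
import torch.nn as nn

class CompositeConnection(nn.Module):
    def __init__(self, c_vgg, c_res, c_out):
        super().__init__()
        self.adjust = nn.Sequential(
            nn.Conv2d(c_vgg + c_res, c_out, kernel_size=1),  # gamma: 1x1 channel adjustment
            nn.BatchNorm2d(c_out),                           # BN to speed convergence
        )

    def forward(self, f_vgg, f_res):
        # f_vgg and f_res share the same spatial size (e.g. 38 x 38)
        return self.adjust(torch.cat([f_vgg, f_res], dim=1))

f1 = torch.randn(2, 512, 38, 38)    # VGG16 feature map (channel count assumed)
f2 = torch.randn(2, 1024, 38, 38)   # ResNet50 feature map (channel count assumed)
f_alpha = CompositeConnection(512, 1024, 256)(f1, f2)
```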
Feature enhancement module: as shown in FIG. 3, the feature enhancement module comprises three convolution branches. The first branch consists of three consecutive convolution layers with kernel sizes of 1, 5 and 3, where the layer with kernel size 3 uses a dilated (atrous) convolution with dilation rate 5. The second branch consists of three consecutive convolution layers with kernel sizes of 1, 3 and 3, where the last layer with kernel size 3 uses a dilated convolution with dilation rate 3. The third branch comprises a convolution with kernel size 1 and a dilated convolution with kernel size 3 and dilation rate 1. The results of the three branches are fused and the number of channels is adjusted; dropout is applied with the weight set to 0.1 and the result is combined with the original features. Finally, the enhanced features are output through the ReLU activation function. The feature enhancement process can be expressed as the following equation:
X_out = f( ⊕( br_{1,5,3}(X_in), br_{1,3,3}(X_in), br_{1,3}(X_in) ) + k · X_in )
where X_in denotes the input features, br_{1,5,3}, br_{1,3,3} and br_{1,3} denote the three convolution branches, ⊕ denotes feature fusion with channel adjustment, k is the dropout weight (set to 0.1), f denotes the ReLU activation function, and X_out denotes the enhanced features.
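The following PyTorch sketch illustrates the three-branch structure under the stated kernel sizes and dilation rates; the bottleneck width, the use of concatenation plus a 1 × 1 convolution as the fusion operator ⊕, and the interpretation of the weight k = 0.1 as a scaled residual are assumptions made for illustration.

```python
# Sketch: three-branch feature enhancement with dilated convolutions.
# Bottleneck width, concat+1x1 fusion and the scaled residual (k = 0.1) are assumptions.
import torch
import torch.nn as nn

class FeatureEnhancement(nn.Module):
    def __init__(self, c, k=0.1):
        super().__init__()
        mid = c // 4                                          # assumed bottleneck width
        self.br1 = nn.Sequential(                             # kernels 1, 5, 3; dilation 5 on the 3x3
            nn.Conv2d(c, mid, 1), nn.Conv2d(mid, mid, 5, padding=2),
            nn.Conv2d(mid, mid, 3, padding=5, dilation=5))
        self.br2 = nn.Sequential(                             # kernels 1, 3, 3; dilation 3 on the last 3x3
            nn.Conv2d(c, mid, 1), nn.Conv2d(mid, mid, 3, padding=1),
            nn.Conv2d(mid, mid, 3, padding=3, dilation=3))
        self.br3 = nn.Sequential(                             # kernel 1, then 3x3 with dilation 1
            nn.Conv2d(c, mid, 1), nn.Conv2d(mid, mid, 3, padding=1))
        self.fuse = nn.Conv2d(3 * mid, c, 1)                  # fusion + channel adjustment
        self.k = k                                            # weight 0.1 from the text
        self.act = nn.ReLU(inplace=True)

    def forward(self, x_in):
        fused = self.fuse(torch.cat(
            [self.br1(x_in), self.br2(x_in), self.br3(x_in)], dim=1))
        return self.act(fused + self.k * x_in)                # combine with the original features

x_out = FeatureEnhancement(256)(torch.randn(2, 256, 38, 38))
```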
Feature refinement module: the feature refinement module is integrated on 6 prediction branches, each containing 6 consecutive convolution layers with deformable convolution kernels. Specifically, the module consists of two processes: a feature preprocessing process and a refinement process. In the preprocessing process, the network performs the first classification and regression on the features: it separates target foreground from background information through binary classification, and applies a first regression to the parts belonging to the foreground to roughly locate the target position. The refinement process builds on the result of the preprocessing process: the original features are first max-pooled over the channel domain and processed by a Sigmoid activation function, and the result is multiplied element-wise with the original features at the pixel level and then fused. This process can be expressed as the following equation:
F_inref = ⊕( S( χ_c(F_0) ) ⊗ F_0, F_0 )
where F_0 denotes the original features, χ denotes max pooling and χ ∈ c indicates that it is taken over the channel domain, S denotes the Sigmoid activation function, ⊗ denotes element-wise multiplication, ⊕ denotes feature fusion, and F_inref denotes the input features of the refinement process.
The first regression produces four parameter values, Δx, Δy, Δh and Δw: the first two represent the offset of the anchor-box center point and the last two represent the offset of the anchor-box size. In the second round of classification and regression, the network performs multi-class classification instead of binary classification, thereby realizing the classification task for underwater targets. The spatial offsets Δx and Δy are selected and, together with the refinement input feature F_inref, fed into the 6-layer deformable convolution network for refinement, finally producing the refined position result.
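A minimal PyTorch sketch of the refinement-input construction and a single deformable refinement layer is shown below, using torchvision's DeformConv2d; treating the fusion ⊕ as addition and broadcasting the coarse (Δx, Δy) offsets to all nine sampling points of a 3 × 3 kernel are illustrative assumptions, and the full module would stack six such layers.

```python
# Sketch: refinement-input construction and one deformable refinement layer.
# Fusion as addition and repeating (dx, dy) over the 9 kernel sampling points are assumptions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

def refinement_input(f0):
    attn = torch.sigmoid(f0.max(dim=1, keepdim=True).values)  # max pool over channel domain, then Sigmoid
    return f0 * attn + f0                                      # element-wise product, fused with F0

class RefineLayer(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.dconv = DeformConv2d(c, c, kernel_size=3, padding=1)

    def forward(self, f_inref, dxdy):
        # dxdy: (N, 2, H, W) spatial offsets from the first regression,
        # repeated here for each of the 9 sampling points of the 3x3 kernel
        offset = dxdy.repeat(1, 9, 1, 1)                       # -> (N, 18, H, W)
        return self.dconv(f_inref, offset)

f0 = torch.randn(2, 256, 38, 38)
dxdy = torch.randn(2, 2, 38, 38)
refined = RefineLayer(256)(refinement_input(f0), dxdy)
```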
Step (4): training the neural network. The learning rate follows a Warm Up strategy: during the first six epochs it is dynamically selected between 10^-6 and 4×10^-3, and afterwards it gradually approaches 0.002. The SGD optimizer is used in the training phase. The training loss is the sum of the classification loss and the regression loss; the classification loss uses cross-entropy loss or Focal Loss, and the regression loss uses Smooth L1 loss. A batch size of 32 is used per iteration, and training lasts 250 epochs in total.
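The training schedule can be sketched as follows, assuming a linear ramp during warm-up and common SGD hyperparameters (momentum 0.9, weight decay 5e-4), neither of which is specified in the patent.

```python
# Sketch: Warm Up learning-rate schedule and SGD setup for 250 epochs.
# The linear ramp and the momentum/weight-decay values are assumptions.
import torch

def learning_rate(epoch, warmup_epochs=6, lr_min=1e-6, lr_max=4e-3, lr_base=2e-3):
    if epoch < warmup_epochs:
        return lr_min + (lr_max - lr_min) * epoch / warmup_epochs  # warm-up phase
    return lr_base                                                 # settles near 0.002

model = torch.nn.Conv2d(3, 16, 3)  # stand-in for the detection network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6,
                            momentum=0.9, weight_decay=5e-4)

for epoch in range(250):                                           # 250 epochs in total
    for g in optimizer.param_groups:
        g["lr"] = learning_rate(epoch)
    # per-batch loop (batch size 32):
    #   loss = cls_loss (cross-entropy or Focal Loss) + reg_loss (Smooth L1)
    #   loss.backward(); optimizer.step(); optimizer.zero_grad()
```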
Step (5): predicting the result. A detector (capable of performing the detection function) is built from the neural network of step (4), and the weight model obtained from training is loaded to predict on the test set data. When the detector predicts on the test set, non-maximum suppression (NMS) with an IoU threshold of 0.5 is used for post-processing to suppress redundant overlapping boxes.
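A minimal sketch of the NMS post-processing with torchvision is shown below; overlapping boxes whose IoU with a higher-scoring box exceeds 0.5 are suppressed.

```python
# Sketch: NMS post-processing with an IoU threshold of 0.5.
import torch
from torchvision.ops import nms

boxes = torch.tensor([[10., 10., 60., 60.],
                      [12., 12., 62., 62.],
                      [100., 100., 150., 150.]])
scores = torch.tensor([0.90, 0.80, 0.75])
keep = nms(boxes, scores, iou_threshold=0.5)   # indices of the boxes kept
final_boxes, final_scores = boxes[keep], scores[keep]
```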
In order to evaluate the algorithm, the rationality of the algorithm was verified on a generic target detection data set PASCAL VOC, while on an underwater biological data set, a precision comparison was made with an excellent single-stage algorithm.
The experimental results are as follows:
Table 1 reports the mAP values of the various algorithms on the PASCAL VOC data set.
TABLE 1
[Table 1 is reproduced as an image in the original publication.]
Table 2 reports the mAP values of the present algorithm compared with the strong single-stage algorithm RFBNet and its variants on the underwater biological data set.
TABLE 2
[Table 2 is reproduced as an image in the original publication.]

Claims (7)

1. A single-stage underwater biological target detection method based on feature enhancement and refinement is characterized by comprising the following steps:
step (1): acquiring training data, acquiring an underwater biological training characteristic sample through an underwater camera, and constructing a required underwater target detection data set;
step (2): data preprocessing, namely, manually labeling the collected underwater data set, dividing the collected underwater data set into 4 classes, and dividing the collected data into a training set and a test set according to the proportion of 7: 3;
step (3): constructing a neural network, wherein the neural network comprises a feature extraction backbone network, a feature enhancement module and a feature refinement module; the feature extraction network is formed by compositely connecting feature extraction backbones, the feature enhancement module is formed by combining convolution branches, and the feature refinement module is integrated on the prediction branches and uses deformable convolution kernels;
step (4): training the neural network, inputting the training set data into the constructed neural network for training to obtain a weight model for detecting the underwater biological target;
step (5): predicting the result, namely constructing a detector from the trained weight model and performing regression and classification on the test set data to obtain the detection result of the test set.
2. The single-stage underwater biological target detection method based on feature enhancement and refinement as claimed in claim 1, wherein: in step (2), when the data set is labeled, the labeled data includes 4 classes: sea urchin, sea cucumber, starfish and shell, and a labeling file is generated that contains the file name, the target category, the coordinates (x_min, y_max) of the upper-left corner of the target ground-truth box and the coordinates (x_max, y_min) of the lower-right corner of the target ground-truth box.
3. The single-stage underwater biological target detection method based on feature enhancement and refinement as claimed in claim 1, wherein: in step (3), the feature extraction backbone is formed by compositely connecting the basic feature extraction backbones VGG16 and ResNet50.
4. The single-stage underwater biological target detection method based on feature enhancement and refinement as claimed in claim 1, wherein: the feature enhancement module in step (3) comprises 3 convolution branches; the first convolution branch consists of three consecutive convolution layers with kernel sizes of 1, 5 and 3, where the layer with kernel size 3 uses a dilated (atrous) convolution with dilation rate 5; the second convolution branch consists of three consecutive convolution layers with kernel sizes of 1, 3 and 3, where the last layer with kernel size 3 uses a dilated convolution with dilation rate 3; the third convolution branch comprises a convolution with kernel size 1 and a dilated convolution with kernel size 3 and dilation rate 1; the results of the three branches are fused, the number of channels is adjusted, dropout is applied with the weight set to 0.1, the result is combined with the original features, and finally the enhanced features are output through a ReLU activation function; the feature enhancement process can be expressed as:
X_out = f( ⊕( br_{1,5,3}(X_in), br_{1,3,3}(X_in), br_{1,3}(X_in) ) + k · X_in )
where X_in denotes the input features, br_{1,5,3}, br_{1,3,3} and br_{1,3} denote the three convolution branches, ⊕ denotes feature fusion with channel adjustment, k is the dropout weight (set to 0.1), f denotes the ReLU activation function, and X_out denotes the enhanced features.
5. The single-stage underwater biological target detection method based on feature enhancement and refinement as claimed in claim 1, wherein: in step (3), the feature refinement module is integrated on 6 prediction branches, each branch comprising 6 consecutive convolution layers with deformable convolution kernels; the feature refinement module performs two passes of coarse-then-fine regression and classification on the features: the first pass is a binary classification that distinguishes target foreground from background and coarsely regresses the target position, and the second pass is a multi-class classification in which the box offsets are learned through deformable convolution and the target position is finally refined.
6. The single-stage underwater biological target detection method based on feature enhancement and refinement as claimed in claim 1, wherein: when the neural network is trained in step (4), the learning rate follows a Warm Up strategy: during the first six training epochs the learning rate is dynamically selected between 10^-6 and 4×10^-3, after which it approaches 0.002; the SGD optimizer is used in the training phase, and training lasts 250 epochs in total.
7. The single-stage underwater biological target detection method based on feature enhancement and refinement as claimed in claim 1, wherein: in step (5), when the detector performs regression and classification on the test set data, non-maximum suppression (NMS) with an IoU threshold of 0.5 is used for post-processing to suppress redundant overlapping boxes.
CN202011169614.4A 2020-10-27 2020-10-27 Single-stage underwater biological target detection method based on feature enhancement and refinement Pending CN112417980A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011169614.4A CN112417980A (en) 2020-10-27 2020-10-27 Single-stage underwater biological target detection method based on feature enhancement and refinement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011169614.4A CN112417980A (en) 2020-10-27 2020-10-27 Single-stage underwater biological target detection method based on feature enhancement and refinement

Publications (1)

Publication Number Publication Date
CN112417980A true CN112417980A (en) 2021-02-26

Family

ID=74841978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011169614.4A Pending CN112417980A (en) 2020-10-27 2020-10-27 Single-stage underwater biological target detection method based on feature enhancement and refinement

Country Status (1)

Country Link
CN (1) CN112417980A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420770A (en) * 2021-06-21 2021-09-21 梅卡曼德(北京)机器人科技有限公司 Image data processing method, image data processing device, electronic equipment and storage medium
CN114092793A (en) * 2021-11-12 2022-02-25 杭州电子科技大学 End-to-end biological target detection method suitable for complex underwater environment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503112A (en) * 2019-08-27 2019-11-26 电子科技大学 A kind of small target deteection of Enhanced feature study and recognition methods
CN111209952A (en) * 2020-01-03 2020-05-29 西安工业大学 Underwater target detection method based on improved SSD and transfer learning
CN111310718A (en) * 2020-03-09 2020-06-19 成都川大科鸿新技术研究所 High-accuracy detection and comparison method for face-shielding image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503112A (en) * 2019-08-27 2019-11-26 电子科技大学 A kind of small target deteection of Enhanced feature study and recognition methods
CN111209952A (en) * 2020-01-03 2020-05-29 西安工业大学 Underwater target detection method based on improved SSD and transfer learning
CN111310718A (en) * 2020-03-09 2020-06-19 成都川大科鸿新技术研究所 High-accuracy detection and comparison method for face-shielding image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BAOJIE FAN et al.: "Dual Refinement Underwater Object Detection Network", Computer Vision – ECCV 2020 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420770A (en) * 2021-06-21 2021-09-21 梅卡曼德(北京)机器人科技有限公司 Image data processing method, image data processing device, electronic equipment and storage medium
CN113420770B (en) * 2021-06-21 2024-06-21 梅卡曼德(北京)机器人科技有限公司 Image data processing method, device, electronic equipment and storage medium
CN114092793A (en) * 2021-11-12 2022-02-25 杭州电子科技大学 End-to-end biological target detection method suitable for complex underwater environment
CN114092793B (en) * 2021-11-12 2024-05-17 杭州电子科技大学 End-to-end biological target detection method suitable for complex underwater environment

Similar Documents

Publication Publication Date Title
CN111310862B (en) Image enhancement-based deep neural network license plate positioning method in complex environment
CN111325794B (en) Visual simultaneous localization and map construction method based on depth convolution self-encoder
CN111626993A (en) Image automatic detection counting method and system based on embedded FEFnet network
CN108510458B (en) Side-scan sonar image synthesis method based on deep learning method and non-parametric sampling
CN111310622A (en) Fish swarm target identification method for intelligent operation of underwater robot
CN114067197B (en) Pipeline defect identification and positioning method based on target detection and binocular vision
CN109060838B (en) Product surface scratch detection method based on machine vision
CN112417980A (en) Single-stage underwater biological target detection method based on feature enhancement and refinement
CN112634202A (en) Method, device and system for detecting behavior of polyculture fish shoal based on YOLOv3-Lite
CN112215861A (en) Football detection method and device, computer readable storage medium and robot
CN115393734A (en) SAR image ship contour extraction method based on fast R-CNN and CV model combined method
CN115423978A (en) Image laser data fusion method based on deep learning and used for building reconstruction
CN115775236A (en) Surface tiny defect visual detection method and system based on multi-scale feature fusion
CN114548253A (en) Digital twin model construction system based on image recognition and dynamic matching
CN113591592A (en) Overwater target identification method and device, terminal equipment and storage medium
Ge et al. Real-time object detection algorithm for Underwater Robots
CN116168328A (en) Thyroid nodule ultrasonic inspection system and method
CN113537397B (en) Target detection and image definition joint learning method based on multi-scale feature fusion
CN113763261B (en) Real-time detection method for far small target under sea fog weather condition
CN113901944B (en) Marine organism target detection method based on improved YOLO algorithm
CN115311544A (en) Underwater fish target detection method and device
CN115294433A (en) Object six-dimensional pose estimation method and system suitable for severe environment
CN113920087A (en) Micro component defect detection system and method based on deep learning
CN112925932A (en) High-definition underwater laser image processing system
CN112598738A (en) Figure positioning method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210226