CN116994151B - Marine ship target identification method based on SAR image and YOLOv5s network - Google Patents

Marine ship target identification method based on SAR image and YOLOv5s network

Info

Publication number
CN116994151B
CN116994151B (application CN202310655171.7A)
Authority
CN
China
Prior art keywords
yolov
training
network
examples
positive examples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310655171.7A
Other languages
Chinese (zh)
Other versions
CN116994151A (en)
Inventor
尚文利
潘梓沛
揭海
曹忠
张梦
李淑琦
常志伟
时昊天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University
Priority to CN202310655171.7A
Publication of CN116994151A
Application granted
Publication of CN116994151B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements using pattern recognition or machine learning
    • G06V10/764 - Arrangements using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements using pattern recognition or machine learning
    • G06V10/82 - Arrangements using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a marine ship target recognition method based on SAR images and the YOLOv5s network. The method divides a ship dataset in a given proportion, then vertically flips, horizontally flips, rotates 90 degrees clockwise, and rotates 180 degrees clockwise the images and labels in the training set to obtain more SAR images for training; it then adds a CBAM attention mechanism at the ninth layer of the YOLOv5s backbone module, changes the target-box regression loss function to EIOU, and trains the model. This improves the generalization capability and recognition precision of the model and alleviates the low recognition precision and severe missed detections of existing deep-learning-based methods.

Description

Marine ship target identification method based on SAR image and YOLOv5s network
Technical Field
The invention relates to the technical field of image processing, and in particular to a marine ship target identification method based on SAR images and the YOLOv5s network.
Background
With the development of deep learning, SAR image ship recognition based on deep learning has gradually become a mainstream method in the field of SAR image ship recognition. Deep learning-based SAR image ship recognition methods generally involve multiple steps including image preprocessing, feature extraction, target classification, and the like.
SAR image preprocessing includes denoising, enhancement, filtering, segmentation, and the like. For feature extraction, currently mainstream networks include Residual Networks (ResNet), Long Short-Term Memory networks (LSTM), Recurrent Neural Networks (RNN), and Convolutional Neural Networks (CNN). Among these, CNN is the most widely used deep learning method: it can automatically learn image features and thereby achieve target recognition and classification. For target classification, recognition and classification are mainly achieved by classifiers; common classifiers include support vector machines, multi-layer perceptrons, logistic regression, and convolutional neural networks. The convolutional neural network has become the main method for deep-learning-based ship SAR image target recognition, as it can automatically learn image features and classification rules and offers high accuracy and robustness.
Through the joint efforts of many researchers, SAR image ship target recognition based on deep learning has achieved high robustness and recognition rates. However, its recognition accuracy still does not fully meet practical requirements, missed detections remain common, and considerable room for improvement exists. As deep learning technology continues to develop, more and more deep-learning-based methods are being applied to SAR image ship target recognition, enabling a higher level of automatic monitoring and recognition.
Disclosure of Invention
The invention aims to provide a marine ship target identification method based on SAR images that adds a CBAM attention mechanism to the YOLOv5s network and replaces the original target-box regression loss function with the EIOU loss function, thereby alleviating the low recognition precision and severe missed detections of existing deep-learning-based methods.
The technical scheme of the invention is realized as follows: the marine ship target identification method based on the SAR image and the YOLOv5s network comprises the following steps:
S1, dividing the data set;
S2, enhancing the training set data;
S3, adding a CBAM attention mechanism to the YOLOv5s network;
S4, modifying the target-box regression loss function to EIOU;
S5, training the model;
S6, testing the model.
Preferably, in S1, the label format of the ship-image dataset is converted into TXT text files in YOLO format; the dataset is then randomly divided into a training set and a test set at a ratio of 4:1.
Preferably, in S2, the images and labels in the training set are vertically flipped, horizontally flipped, rotated 90 degrees clockwise, and rotated 180 degrees clockwise, respectively, to obtain more SAR images for training.
Preferably, in S3, the CBAM attention mechanism is inserted at the ninth layer of the YOLOv5s network backbone module.
Preferably, in S4, the target-box regression loss function CIOU in the YOLOv5 algorithm is replaced by the EIOU loss function, in which: Intersection denotes the area of the intersection of the predicted box and the ground-truth box, and Union denotes the area of their union; the IOU value is obtained by dividing the intersection area by the union area and ranges between 0 and 1; d denotes the Euclidean distance between the center points of the two boxes; r denotes half the diagonal length of the smallest box enclosing the two boxes, i.e., the radius of their minimum enclosing region; alpha is an adjustable parameter and v denotes the shape difference between the two boxes; w_gt and h_gt denote the width and height of the ground-truth box, and w and h denote the width and height of the predicted box.
Preferably, in S5, the initial learning rate lr0 is set to 0.01 and the cosine-annealing hyperparameter lrf is set to 0.01; the IOU threshold between labels and anchors is set to 0.2; the input image size is set to 640, the batch size to 64, and the number of epochs to 150, with training starting from the YOLOv5s pre-trained model. The training data are the training set after the data enhancement of S2.
Preferably, in S6, Precision, Recall, and mean average precision (mAP) are used as the evaluation indexes of the model.
More preferably, precision denotes the proportion of correctly classified positive examples among all samples classified as positive, i.e., the ratio of the number of positive examples correctly classified by the classifier to the number of samples the classifier labels as positive: Precision = TP / (TP + FP).
Recall denotes the proportion of correctly classified positive examples among all positive examples, i.e., the ratio of the number of positive examples correctly classified by the classifier to the total number of positive examples: Recall = TP / (TP + FN).
The mean average precision is an index measuring the overall performance of an information-retrieval system and is the mean of the average precisions over all queries: mAP = (1/Q) * ΣAP(q), summed over q = 1, ..., Q.
Here TP denotes true positives, i.e., the number of correctly classified positive examples; TN denotes true negatives, i.e., the number of correctly classified negative examples; FP denotes false positives, i.e., the number of negative examples misclassified as positive; FN denotes false negatives, i.e., the number of positive examples misclassified as negative; AP is the area under the precision-recall curve; Q is the total number of queries; and AP(q) is the average precision of the q-th query.
Compared with the prior art, the invention has the following advantages:
(1) This patent incorporates the CBAM attention mechanism into the YOLOv5s network. The CBAM attention mechanism adaptively learns the weights among channels, reduces interference from redundant information, and improves the generalization capability of the model; by weighting the different channels of the feature map, it strengthens key information and increases the model's attention to important features, further improving performance and accuracy. Experiments show that adding the CBAM attention mechanism to YOLOv5s improves the recognition accuracy of the model considerably.
(2) The EIOU loss function is used as the target-box regression loss function. Compared with the original CIOU loss of YOLOv5s, EIOU introduces more distance and angle information when measuring the similarity between bounding boxes, so it quantifies the difference between the predicted and ground-truth boxes more comprehensively and accurately, particularly for rotated targets, and allows the performance of the target detection algorithm to be evaluated more accurately. Experiments show that EIOU is better suited than CIOU to SAR-image-based ship identification, and that EIOU improves model performance relative to CIOU.
Drawings
The invention is further described below with reference to the accompanying drawings. The embodiments do not constitute any limitation of the invention, and other drawings can be obtained by one of ordinary skill in the art from the following drawings without inventive effort.
FIG. 1 is a flow chart of an implementation of the marine vessel target identification method based on SAR images and the YOLOv5s network;
FIG. 2 is a diagram of the original YOLOv5s model structure in the present invention;
FIG. 3 is a diagram of the YOLOv5s model after adding the CBAM attention mechanism in the present invention;
FIG. 4 is a schematic of partial test results of a preferred embodiment of the present invention;
FIG. 5 is a schematic of partial test results of a preferred embodiment of the present invention.
Detailed Description
The marine ship target identification method based on SAR images and the YOLOv5s network is described in further detail below in connection with specific embodiments, which are given for comparison and explanation purposes only; the invention is not limited to these embodiments.
As shown in FIG. 1, the marine ship target identification method based on the SAR image and the YOLOv5s network specifically comprises the following steps:
S1, dividing the data set;
S2, enhancing the training set data;
S3, adding a CBAM attention mechanism to the YOLOv5s network;
S4, modifying the target-box regression loss function to EIOU;
S5, training the model;
S6, testing the model.
Preferably, in S1, the label format of the ship-image dataset is converted into TXT text files in YOLO format; the dataset is then randomly divided into a training set and a test set at a ratio of 4:1.
In this embodiment, the experimental dataset is the public SAR ship dataset SSDD. The dataset consists of 1160 SAR images containing 2456 ship targets, is annotated by researchers, covers multiple polarization modes and resolutions, and includes both open-sea and harbor ship scenes, so it can effectively verify the robustness of the model. The experiments use a COCO-format training pipeline; since the SSDD labels are XML files in VOC format and are not compatible with the YOLO label format, the label format of the dataset was converted into TXT text files in YOLO format before the experiments of this patent. The SSDD data were then randomly divided at a 4:1 ratio into a training set of 784 images and a test set of 197 images, as sketched below.
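For reference, the following is a minimal sketch of this conversion and split, assuming a directory layout with SSDD/Annotations holding the VOC XML files and a single "ship" class; the paths and helper names are illustrative, not from the patent.

    import glob
    import os
    import random
    import xml.etree.ElementTree as ET

    def voc_to_yolo(xml_path, txt_path, class_names=("ship",)):
        # Parse one VOC XML annotation and write a YOLO-format TXT file:
        # one "class x_center y_center width height" line per object,
        # with all coordinates normalized to [0, 1].
        root = ET.parse(xml_path).getroot()
        img_w = float(root.find("size/width").text)
        img_h = float(root.find("size/height").text)
        lines = []
        for obj in root.iter("object"):
            cls = class_names.index(obj.find("name").text)
            box = obj.find("bndbox")
            xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
            xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
            cx, cy = (xmin + xmax) / 2 / img_w, (ymin + ymax) / 2 / img_h
            bw, bh = (xmax - xmin) / img_w, (ymax - ymin) / img_h
            lines.append(f"{cls} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
        with open(txt_path, "w") as f:
            f.write("\n".join(lines))

    os.makedirs("SSDD/labels", exist_ok=True)
    xml_files = sorted(glob.glob("SSDD/Annotations/*.xml"))
    for path in xml_files:
        name = os.path.splitext(os.path.basename(path))[0]
        voc_to_yolo(path, f"SSDD/labels/{name}.txt")

    # Random 4:1 split into training and test sets.
    random.seed(0)
    random.shuffle(xml_files)
    n_train = len(xml_files) * 4 // 5
    train_files, test_files = xml_files[:n_train], xml_files[n_train:]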
Preferably, in S2, the images and labels in the training set are vertically flipped, horizontally flipped, rotated 90 degrees clockwise, and rotated 180 degrees clockwise, respectively, to obtain more SAR images for training.
Because the training set contains relatively few ship instances, it is expanded by data augmentation: the images and labels in the training set are vertically flipped, horizontally flipped, rotated 90 degrees clockwise, and rotated 180 degrees clockwise, respectively, yielding 3920 SAR training images containing about 8000 ship instances. Training on the expanded set reduces overfitting and improves the robustness and generalization capability of the model.
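A minimal sketch of the four transforms is given below, operating on an image and its YOLO-normalized boxes (class, x, y, w, h); it assumes OpenCV and is illustrative rather than the patent's exact pipeline. Note that a 90-degree rotation swaps each box's width and height.

    import cv2  # OpenCV, assumed available for the image transforms

    def flip_vertical(img, boxes):
        # Mirror top-to-bottom: y_center -> 1 - y_center.
        return cv2.flip(img, 0), [(c, x, 1 - y, w, h) for c, x, y, w, h in boxes]

    def flip_horizontal(img, boxes):
        # Mirror left-to-right: x_center -> 1 - x_center.
        return cv2.flip(img, 1), [(c, 1 - x, y, w, h) for c, x, y, w, h in boxes]

    def rotate_90_cw(img, boxes):
        # 90 degrees clockwise: (x, y) -> (1 - y, x); box width and height swap.
        return (cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE),
                [(c, 1 - y, x, h, w) for c, x, y, w, h in boxes])

    def rotate_180(img, boxes):
        # 180 degrees: (x, y) -> (1 - x, 1 - y); width and height unchanged.
        return (cv2.rotate(img, cv2.ROTATE_180),
                [(c, 1 - x, 1 - y, w, h) for c, x, y, w, h in boxes])

    # Applying all four transforms to each of the 784 training images, and
    # keeping the originals, yields the 3920-image expanded training set.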
Preferably, in S3, the CBAM attention mechanism is inserted at the ninth layer of the YOLOv5s network backbone module.
A YOLOv s network incorporating CBAM attentive mechanisms was used as a model-trained deep-learning neural network, with CBAM attentive mechanisms joining at the ninth layer of the YOLOv s network backup module. The YOLOv s network structure is shown in fig. 2, and the backhaul module after adding CBAM attention mechanism is shown in fig. 3.
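For reference, a minimal PyTorch sketch of the standard CBAM module (channel attention followed by spatial attention) is shown below; the exact wiring into the ninth backbone layer follows FIG. 3 and is not reproduced here.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            # Shared MLP applied to both the average-pooled and max-pooled descriptors.
            self.mlp = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1, bias=False),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1, bias=False),
            )

        def forward(self, x):
            avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
            mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
            return torch.sigmoid(avg + mx)

    class SpatialAttention(nn.Module):
        def __init__(self, kernel_size=7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

        def forward(self, x):
            # Concatenate channel-wise mean and max maps, then convolve to one mask.
            avg = torch.mean(x, dim=1, keepdim=True)
            mx, _ = torch.max(x, dim=1, keepdim=True)
            return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

    class CBAM(nn.Module):
        # Convolutional Block Attention Module: channel attention, then spatial attention.
        def __init__(self, channels, reduction=16, kernel_size=7):
            super().__init__()
            self.ca = ChannelAttention(channels, reduction)
            self.sa = SpatialAttention(kernel_size)

        def forward(self, x):
            x = x * self.ca(x)
            return x * self.sa(x)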
Preferably, in S4, the target-box regression loss function CIOU in the YOLOv5 algorithm is replaced by the EIOU loss function, in which: Intersection denotes the area of the intersection of the predicted box and the ground-truth box, and Union denotes the area of their union; the IOU value is obtained by dividing the intersection area by the union area and ranges between 0 and 1; d denotes the Euclidean distance between the center points of the two boxes; r denotes half the diagonal length of the smallest box enclosing the two boxes, i.e., the radius of their minimum enclosing region; alpha is an adjustable parameter and v denotes the shape difference between the two boxes; w_gt and h_gt denote the width and height of the ground-truth box, and w and h denote the width and height of the predicted box.
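The formula images are not reproduced in this text. As a reference consistent with the variable definitions above, the following is a minimal PyTorch sketch of the EIOU loss in its standard published form (an IOU term plus center-distance, width, and height penalties normalized by the smallest enclosing box); it is a sketch, not the patent's verbatim formula.

    import torch

    def eiou_loss(pred, target, eps=1e-7):
        # EIOU loss for axis-aligned boxes in (x1, y1, x2, y2) format.
        # Intersection and union areas.
        ix1 = torch.max(pred[..., 0], target[..., 0])
        iy1 = torch.max(pred[..., 1], target[..., 1])
        ix2 = torch.min(pred[..., 2], target[..., 2])
        iy2 = torch.min(pred[..., 3], target[..., 3])
        inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
        w, h = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
        w_gt, h_gt = target[..., 2] - target[..., 0], target[..., 3] - target[..., 1]
        union = w * h + w_gt * h_gt - inter + eps
        iou = inter / union

        # Width, height, and squared diagonal of the smallest enclosing box.
        cw = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
        ch = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])
        c2 = cw ** 2 + ch ** 2 + eps

        # Squared Euclidean distance between the two box centers.
        dx = (pred[..., 0] + pred[..., 2] - target[..., 0] - target[..., 2]) / 2
        dy = (pred[..., 1] + pred[..., 3] - target[..., 1] - target[..., 3]) / 2
        rho2 = dx ** 2 + dy ** 2

        return (1 - iou + rho2 / c2
                + (w - w_gt) ** 2 / (cw ** 2 + eps)
                + (h - h_gt) ** 2 / (ch ** 2 + eps))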
In this embodiment, the framework used for the experiments is PyTorch 1.13.1 and the programming language is Python; model training and testing are performed on the PyCharm platform in an environment with an NVIDIA GeForce RTX 3090 GPU and CUDA 11.7.
Preferably, in S5, the initial learning rate lr0 is set to 0.01 and the cosine-annealing hyperparameter lrf is set to 0.01; the IOU threshold between labels and anchors is set to 0.2; the input image size is set to 640, the batch size to 64, and the number of epochs to 150, with training starting from the YOLOv5s pre-trained model. The training data are the training set after the data enhancement of S2.
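As an illustration of these settings, a hyperparameter excerpt and launch command are sketched below, assuming the Ultralytics YOLOv5 repository layout and a dataset config named ssdd.yaml; the file names are assumptions, and the keys follow YOLOv5's hyp files (iou_t appears in earlier hyp.scratch.yaml versions).

    # hyp.ssdd.yaml (excerpt) - hyperparameters named in S5
    lr0: 0.01    # initial learning rate
    lrf: 0.01    # final learning-rate factor for cosine annealing
    iou_t: 0.20  # IoU threshold between labels and anchors

    # Launch command (shell), starting from YOLOv5s pre-trained weights:
    # python train.py --weights yolov5s.pt --data ssdd.yaml --hyp hyp.ssdd.yaml \
    #                 --img 640 --batch-size 64 --epochs 150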
Preferably, in S6, Precision, Recall, and mean average precision (mAP) are used as the evaluation indexes of the model.
More preferably, precision denotes the proportion of correctly classified positive examples among all samples classified as positive, i.e., the ratio of the number of positive examples correctly classified by the classifier to the number of samples the classifier labels as positive: Precision = TP / (TP + FP).
Recall denotes the proportion of correctly classified positive examples among all positive examples, i.e., the ratio of the number of positive examples correctly classified by the classifier to the total number of positive examples: Recall = TP / (TP + FN).
The mean average precision is an index measuring the overall performance of an information-retrieval system and is the mean of the average precisions over all queries: mAP = (1/Q) * ΣAP(q), summed over q = 1, ..., Q.
Here TP denotes true positives, i.e., the number of correctly classified positive examples; TN denotes true negatives, i.e., the number of correctly classified negative examples; FP denotes false positives, i.e., the number of negative examples misclassified as positive; FN denotes false negatives, i.e., the number of positive examples misclassified as negative; AP is the area under the precision-recall curve; Q is the total number of queries; and AP(q) is the average precision of the q-th query.
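For reference, the three indexes reduce to the following elementary computations (a minimal sketch; AP itself, the area under the precision-recall curve, is computed by the detection framework):

    def precision(tp, fp):
        # Fraction of samples classified as positive that are truly positive.
        return tp / (tp + fp) if (tp + fp) > 0 else 0.0

    def recall(tp, fn):
        # Fraction of all positive examples that are correctly classified.
        return tp / (tp + fn) if (tp + fn) > 0 else 0.0

    def mean_average_precision(aps):
        # mAP: mean of the per-query average precisions AP(q), q = 1..Q.
        return sum(aps) / len(aps) if aps else 0.0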
The test data are the test set divided in step S1; the test results are shown in Table 1 below. Partial test results are shown schematically in FIG. 4 and FIG. 5.
Table 1 Test results

Method                  Precision   Recall   mAP(0.5)
YOLOv5s                 0.850       0.930    0.937
Method of this patent   0.885       0.930    0.952
It can be seen that the model trained with the CBAM attention mechanism added to the YOLOv5s network and with the target-box regression loss function changed to EIOU exceeds the original network model on the test set in Precision, Recall, and mAP. The method proposed in this patent can therefore improve the performance of the model to a certain extent.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit its scope. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solution of the present invention without departing from its spirit and scope.

Claims (4)

1. The marine ship target identification method based on the SAR image and the YOLOv5s network is characterized by comprising the following steps:
S1, dividing the data set;
S2, enhancing the training set data;
S3, adding a CBAM attention mechanism to the YOLOv5s network;
S4, modifying the target-box regression loss function to EIOU;
S5, training the model;
S6, testing the model;
In S4, the target-box regression loss function CIOU in the YOLOv5 algorithm is replaced with the EIOU loss function, in which: Intersection denotes the area of the intersection of the predicted box and the ground-truth box, and Union denotes the area of their union; the IOU value is obtained by dividing the intersection area by the union area and ranges between 0 and 1; d denotes the Euclidean distance between the center points of the two boxes; r denotes half the diagonal length of the smallest box enclosing the two boxes, i.e., the radius of their minimum enclosing region; alpha is an adjustable parameter and v denotes the shape difference between the two boxes; w_gt and h_gt denote the width and height of the ground-truth box, and w and h denote the width and height of the predicted box;
In S5, the initial learning rate lr0 is set to 0.01 and the cosine-annealing hyperparameter lrf is set to 0.01; the IOU threshold between labels and anchors is set to 0.2; the input image size is set to 640, the batch size to 64, and the number of epochs to 150, with training starting from the YOLOv5s pre-trained model; the training data are the training set after the data enhancement of S2;
In the step S6, precision, recall and average Precision mean mAP are used as evaluation indexes of the model;
Precision denotes the proportion of correctly classified positive examples among all samples classified as positive, i.e., the ratio of the number of positive examples correctly classified by the classifier to the number of samples the classifier labels as positive: Precision = TP / (TP + FP);
Recall denotes the proportion of correctly classified positive examples among all positive examples, i.e., the ratio of the number of positive examples correctly classified by the classifier to the total number of positive examples: Recall = TP / (TP + FN);
the mean average precision is an index measuring the overall performance of an information-retrieval system and is the mean of the average precisions over all queries: mAP = (1/Q) * ΣAP(q), summed over q = 1, ..., Q;
where TP denotes true positives, i.e., the number of correctly classified positive examples; TN denotes true negatives, i.e., the number of correctly classified negative examples; FP denotes false positives, i.e., the number of negative examples misclassified as positive; FN denotes false negatives, i.e., the number of positive examples misclassified as negative; AP is the area under the precision-recall curve; Q is the total number of queries; and AP(q) is the average precision of the q-th query.
2. The marine vessel target identification method based on SAR images and the YOLOv5s network according to claim 1, wherein in S1, the label format of the ship-image dataset is converted into TXT text files in YOLO format; the dataset is then randomly divided into a training set and a test set at a ratio of 4:1.
3. The method for identifying the marine vessel target based on the SAR image and the YOLOv5s network according to claim 1, wherein in S2, the images and labels in the training set are vertically flipped, horizontally flipped, rotated 90 degrees clockwise, and rotated 180 degrees clockwise, respectively, to obtain more SAR images for training.
4. The method for identifying a marine vessel target based on a SAR image and the YOLOv5s network according to claim 1, wherein in S3, the CBAM attention mechanism is added at the ninth layer of the YOLOv5s network backbone module.
CN202310655171.7A 2023-06-02 2023-06-02 Marine ship target identification method based on SAR image and YOLOv5s network Active CN116994151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310655171.7A CN116994151B (en) 2023-06-02 2023-06-02 Marine ship target identification method based on SAR image and YOLOv5s network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310655171.7A CN116994151B (en) 2023-06-02 2023-06-02 Marine ship target identification method based on SAR image and YOLOv5s network

Publications (2)

Publication Number Publication Date
CN116994151A CN116994151A (en) 2023-11-03
CN116994151B (en) 2024-06-04

Family

ID=88525514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310655171.7A Active CN116994151B (en) Marine ship target identification method based on SAR image and YOLOv5s network

Country Status (1)

Country Link
CN (1) CN116994151B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576553B (en) * 2024-01-15 2024-04-02 中国海洋大学 Dual-polarized SAR image ocean ice vortex identification method and device and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418212A (en) * 2020-08-28 2021-02-26 西安电子科技大学 Improved YOLOv3 algorithm based on EIoU
CN113033303A (en) * 2021-02-09 2021-06-25 北京工业大学 Method for realizing SAR image rotating ship detection based on RCIoU loss
EP3874457A1 (en) * 2018-12-05 2021-09-08 Siemens Healthcare GmbH Three-dimensional shape reconstruction from a topogram in medical imaging
WO2022000855A1 (en) * 2020-06-29 2022-01-06 魔门塔(苏州)科技有限公司 Target detection method and device
CN114187491A (en) * 2022-02-17 2022-03-15 中国科学院微电子研究所 Method and device for detecting shielding object
CN114330529A (en) * 2021-12-24 2022-04-12 重庆邮电大学 Real-time pedestrian shielding detection method based on improved YOLOv4
CN114882423A (en) * 2022-06-09 2022-08-09 南京工业大学 Truck warehousing goods identification method based on improved Yolov5m model and Deepsort
CN115331183A (en) * 2022-08-25 2022-11-11 江苏大学 Improved YOLOv5s infrared target detection method
CN115497075A (en) * 2022-09-28 2022-12-20 西安交通大学 Traffic target detection method based on improved convolutional neural network and related device
CN115546650A (en) * 2022-10-29 2022-12-30 西安电子科技大学 Method for detecting ships in remote sensing image based on YOLO-V network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295678B (en) * 2016-07-27 2020-03-06 北京旷视科技有限公司 Neural network training and constructing method and device and target detection method and device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3874457A1 (en) * 2018-12-05 2021-09-08 Siemens Healthcare GmbH Three-dimensional shape reconstruction from a topogram in medical imaging
WO2022000855A1 (en) * 2020-06-29 2022-01-06 魔门塔(苏州)科技有限公司 Target detection method and device
CN112418212A (en) * 2020-08-28 2021-02-26 西安电子科技大学 Improved YOLOv3 algorithm based on EIoU
CN113033303A (en) * 2021-02-09 2021-06-25 北京工业大学 Method for realizing SAR image rotating ship detection based on RCIoU loss
CN114330529A (en) * 2021-12-24 2022-04-12 重庆邮电大学 Real-time pedestrian shielding detection method based on improved YOLOv4
CN114187491A (en) * 2022-02-17 2022-03-15 中国科学院微电子研究所 Method and device for detecting shielding object
CN114882423A (en) * 2022-06-09 2022-08-09 南京工业大学 Truck warehousing goods identification method based on improved Yolov5m model and Deepsort
CN115331183A (en) * 2022-08-25 2022-11-11 江苏大学 Improved YOLOv5s infrared target detection method
CN115497075A (en) * 2022-09-28 2022-12-20 西安交通大学 Traffic target detection method based on improved convolutional neural network and related device
CN115546650A (en) * 2022-10-29 2022-12-30 西安电子科技大学 Method for detecting ships in remote sensing image based on YOLO-V network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ship target detection in complex scenes based on enhanced YOLOv3; Nie Xin; Liu Wen; Wu Wei; Journal of Computer Applications (No. 09); pp. 2561-2570 *

Also Published As

Publication number Publication date
CN116994151A (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN109447034B (en) Traffic sign detection method in automatic driving based on YOLOv3 network
US11334982B2 (en) Method for defect classification, method for training defect classifier, and apparatus thereof
CN112446370B (en) Method for identifying text information of nameplate of power equipment
CN116994151B (en) Marine ship target identification method based on SAR image and YOLOv5s network
Huang et al. Ship target detection based on improved YOLO network
CN115829991A (en) Steel surface defect detection method based on improved YOLOv5s
CN110008899B (en) Method for extracting and classifying candidate targets of visible light remote sensing image
Moysset et al. Learning text-line localization with shared and local regression neural networks
CN112070151B (en) Target classification and identification method for MSTAR data image
CN115937655A (en) Target detection model of multi-order feature interaction, and construction method, device and application thereof
CN113159215A (en) Small target detection and identification method based on fast Rcnn
CN110991374B (en) Fingerprint singular point detection method based on RCNN
CN112926486A (en) Improved RFBnet target detection algorithm for ship small target
Fan et al. A novel sonar target detection and classification algorithm
Ghahremani et al. Towards parameter-optimized vessel re-identification based on IORnet
Sun et al. NSD‐SSD: a novel real‐time ship detector based on convolutional neural network in surveillance video
Sun et al. Image target detection algorithm compression and pruning based on neural network
CN113313128A (en) SAR image target detection method based on improved YOLOv3 network
CN111144469B (en) End-to-end multi-sequence text recognition method based on multi-dimensional associated time sequence classification neural network
CN114898290A (en) Real-time detection method and system for marine ship
Sureka et al. Word recognition techniques for Kannada handwritten documents
US20200134357A1 (en) Neural-network-based optical character recognition using specialized confidence functions
CN112819086B (en) Image classification method for calculating global optimal solution of single hidden layer ReLU neural network by dividing network space
CN113139077B (en) Method, device, terminal and storage medium for identifying ship identity
Mei et al. Target recognition and grabbing positioning method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant