CN116665088A - Ship identification and detection method, device, equipment and medium - Google Patents


Info

Publication number
CN116665088A
CN116665088A (application CN202310499439.2A)
Authority
CN
China
Prior art keywords
yolov7
network model
improved
ship
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310499439.2A
Other languages
Chinese (zh)
Inventor
李修来
刘笑嶂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan University
Original Assignee
Hainan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan University
Priority to CN202310499439.2A
Publication of CN116665088A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a ship identification and detection method, device, equipment and medium. The method comprises the following steps: inputting a video frame to be identified into an improved YOLOv7 network model, resizing the video frame, and extracting features of different scales from the resized frame; fusing the features of different scales through a path-aggregation feature pyramid network in the improved YOLOv7 network model to obtain fused features; and performing prediction analysis on the fused features through a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result. The application can effectively identify ships of irregular shapes and different sizes, improving detection precision and robustness.

Description

Ship identification and detection method, device, equipment and medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method, an apparatus, a device, and a medium for identifying and detecting a ship.
Background
The most commonly used ship identification techniques mainly include AIS (Automatic Identification System), GPS, LiDAR (light detection and ranging), and ECDIS (Electronic Chart Display and Information System). Ship identification methods based on such communication and navigation equipment have their advantages, but they are severely limited in traffic-dense waters such as ports, because no visual image of the ship can be obtained in those waters.
At present, research on deep-learning-based ship identification methods has made certain progress: ships in maritime traffic videos and images are identified through automatic extraction of ship image features and continuous learning and training. Two approaches are mainly adopted: one introduces a deep learning algorithm into ship identification, and the other makes full use of the image information through an image preprocessing stage. In recent years, deep-learning-based target detection methods such as YOLO have achieved remarkable success in ship detection and identification, but the YOLOv7 model still has certain limitations in adapting to different ship types and sizes. Thus, there is a need for an improved YOLOv7 model that achieves better performance in ship detection and identification.
Disclosure of Invention
In order to solve the above technical problems, the application provides a ship identification and detection method, device, equipment and medium, which can effectively identify ships of irregular shapes and different sizes and improve detection precision and robustness.
In order to achieve the above purpose, the technical scheme of the application is as follows:
a method of identifying and detecting a vessel, comprising the steps of:
inputting the video frames to be identified into an improved YOLOv7 network model, adjusting the size of the video frames to be identified, and extracting different size characteristics of the video frames to be identified after the size is adjusted;
fusing the features with different sizes based on a feature pyramid network of path aggregation in the improved YOLOv7 network model to obtain fusion features;
and carrying out prediction analysis on the fusion characteristics through a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result.
Preferably, the training process of the improved YOLOv7 network model comprises the following steps:
acquiring a dataset of ship images with irregular shapes and different sizes;
randomly sampling the dataset and dividing the sampled data into a training set and a test set;
training the initial YOLOv7 network model on the training set to obtain prediction results, comparing the prediction results with the ground truth, and, if the comparison does not meet a preset requirement, adjusting the parameters and continuing training until the requirement is met or a preset number of iterations is reached, thereby obtaining the improved YOLOv7 network model.
Preferably, the method further comprises the following steps: testing the improved YOLOv7 network model using the test set, and evaluating the model by precision, recall and mean average precision (mAP).
Preferably, before training the initial YOLOv7 network model on the training set, the method further comprises the following step: performing image enhancement processing on the dataset.
Preferably, the image enhancement includes random rotation, random scaling, and random cropping.
Preferably, the sizes of the anchor boxes in the YOLOv7 network model are determined by a K-means clustering method.
Based on the above, the application also discloses a ship identification and detection device, which comprises an input module, an extraction module, a fusion module and a prediction module, wherein:
the input module is used for adjusting the size of an input image;
the extraction module is used for extracting features of different scales from the resized image;
the fusion module is used for fusing the features of different scales through a path-aggregation feature pyramid network to obtain fused features;
the prediction module is used for performing prediction analysis on the fused features through a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result.
Based on the above, the present application also discloses a computer device, comprising: a memory for storing a computer program; and a processor for implementing the method described above when executing the computer program.
Based on the foregoing, the present application also discloses a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method described above.
Based on the above technical scheme, the beneficial effects of the application are as follows: a video frame to be identified is input into an improved YOLOv7 network model, the video frame is resized, and features of different scales are extracted from the resized frame; the features of different scales are fused through a path-aggregation feature pyramid network in the improved YOLOv7 network model to obtain fused features; and the fused features are analyzed by a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result, so that detection precision and robustness are improved to a certain extent.
Drawings
FIG. 1 is a flow chart of a ship identification and detection method in one embodiment;
FIG. 2 is a schematic diagram of the YOLOv7 network architecture;
FIG. 3 is a graph of recall during the improved YOLOv7 training process;
FIG. 4 is a graph of accuracy during the improved YOLOv7 training process;
FIG. 5 is a graph of mAP during the improved YOLOv7 training process;
FIG. 6 is a comparison of the precision-recall (P-R) curves of SSD, YOLOv7, and the improved YOLOv7;
FIG. 7 is a graph of the class accuracy and mAP values of SSD, YOLOv7, and the improved YOLOv7;
FIG. 8 is a schematic structural view of a ship identification and detection device in one embodiment;
FIG. 9 is a schematic diagram of a computer device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
In one embodiment, as shown in fig. 1, there is provided a method for identifying and detecting a ship, comprising the steps of:
inputting a video frame to be identified into an improved YOLOv7 network model, resizing the video frame to be identified, and extracting features of different scales from the resized video frame;
fusing the features of different scales through a path-aggregation feature pyramid network in the improved YOLOv7 network model to obtain fused features;
and performing prediction analysis on the fused features through a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result.
Specifically, the YOLOv7 network model consists of four modules, namely an input module, a backbone, a Head module, and a prediction module, as shown in fig. 2. Because input images generally come in different, non-fixed sizes, the input image must be resized to a fixed size before target detection is performed. The input module therefore scales the image to meet the size requirements of the backbone, which consists of several BConv convolution layers, E-ELAN convolution layers and MPConv convolution layers. The E-ELAN layer keeps the original ELAN design architecture and, by guiding the computation blocks of different feature groups, learns more diverse features to improve the learning capacity of the network without destroying the original gradient path. The MPConv layer adds a Maxpool layer on top of the BConv layer, forming two branches: the upper branch halves the image height and width via Maxpool and halves the image channels via BConv; the lower branch halves the image channels with a first BConv layer and halves the image height and width with a second BConv layer. Finally, a Cat operation fuses the features extracted by the two branches, improving the feature extraction capability of the network. The Head module consists of a Path Aggregation Feature Pyramid Network (PAFPN) structure; by introducing a bottom-up path, information is transmitted more easily from the bottom to the top, achieving efficient fusion of features at different levels and improving the accuracy and robustness of ship detection. The prediction module uses a REP (RepVGG block) structure to adjust the number of channels of the three different-scale features output by the PAFPN, and finally applies a 1 x 1 convolution for confidence, class, and anchor box prediction.
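To make the two-branch MPConv downsampling described above concrete, the following NumPy sketch reproduces its shape arithmetic. This is an illustrative toy, not the actual YOLOv7 implementation: the layer sizes, the random weights, and the use of plain convolutions in place of full BConv blocks (which also include normalization and activation) are all assumptions made for the example.

```python
import numpy as np

def maxpool2x2(x):
    # x: (C, H, W) -> (C, H//2, W//2), assuming even H and W
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def conv1x1(x, w):
    # pointwise convolution; w: (C_out, C_in) mixes channels only
    return np.tensordot(w, x, axes=([1], [0]))

def conv3x3_s2(x, w):
    # direct 3x3 convolution with stride 2 and padding 1
    C_in, H, W = x.shape
    out = np.zeros((w.shape[0], H // 2, W // 2))
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    for i in range(H // 2):
        for j in range(W // 2):
            patch = xp[:, 2 * i:2 * i + 3, 2 * j:2 * j + 3]
            out[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
    return out

def mpconv(x, w_up, w_low1, w_low2):
    # upper branch: Maxpool halves H and W, then a 1x1 conv halves the channels
    up = conv1x1(maxpool2x2(x), w_up)
    # lower branch: a 1x1 conv halves the channels, a stride-2 conv halves H and W
    low = conv3x3_s2(conv1x1(x, w_low1), w_low2)
    # Cat: concatenating the two branches restores the original channel count
    return np.concatenate([up, low], axis=0)

rng = np.random.default_rng(0)
C, H, W = 8, 16, 16
x = rng.standard_normal((C, H, W))
w_up = rng.standard_normal((C // 2, C))
w_low1 = rng.standard_normal((C // 2, C))
w_low2 = rng.standard_normal((C // 2, C // 2, 3, 3))
y = mpconv(x, w_up, w_low1, w_low2)
print(y.shape)  # (8, 8, 8): channels preserved, spatial dimensions halved
```

The output keeps the input's channel count while halving height and width, matching the downsampling behaviour the description attributes to MPConv.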
In the traditional YOLOv7 network model, the anchor boxes are fixed and cannot accommodate targets of different sizes and shapes. To better adapt to the changes in ship size and the irregular shapes encountered in the ship detection task, the sizes of the anchor boxes for ship detection are determined by K-means clustering over the size distribution of ships in the dataset, thereby improving detection precision.
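The anchor-size selection step can be sketched as a small K-means routine over ground-truth box dimensions. This is a hedged illustration: the box sizes below are synthetic stand-ins (not real SeaShips statistics), and plain Euclidean distance is used here, whereas YOLO-family work often clusters with a 1 - IoU distance instead.

```python
import numpy as np

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """Cluster (width, height) pairs to choose k anchor-box sizes.
    boxes: (N, 2) array of ground-truth box dimensions in pixels."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # assign every box to its nearest center
        d = np.linalg.norm(boxes[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned boxes
        for j in range(k):
            if np.any(labels == j):
                centers[j] = boxes[labels == j].mean(axis=0)
    return centers[np.argsort(centers.prod(axis=1))]  # sorted by box area

# synthetic (width, height) samples standing in for small, medium
# and large ships in a dataset
rng = np.random.default_rng(1)
boxes = np.vstack([
    rng.normal([30, 20], 4, size=(100, 2)),
    rng.normal([120, 60], 10, size=(100, 2)),
    rng.normal([300, 150], 20, size=(100, 2)),
])
anchors = kmeans_anchors(boxes, k=3)
print(anchors.round(1))  # three anchor sizes, sorted by area
```

In practice one would run this on the training-set labels and use a k matching the number of anchors per detection scale.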
In a ship identification and detection method of one embodiment, a training process for the improved YOLOv7 network model is provided, comprising the following steps: acquiring a dataset of ship images with irregular shapes and different sizes; randomly sampling the dataset and dividing the sampled data into a training set and a test set; training the initial YOLOv7 network model on the training set to obtain prediction results, comparing the prediction results with the ground truth, and, if the comparison does not meet a preset requirement, adjusting the parameters and continuing training until the requirement is met or a preset number of iterations is reached, thereby obtaining the improved YOLOv7 network model; then testing the improved YOLOv7 network model using the test set, and evaluating the model by precision, recall and mean average precision (mAP).
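The stopping rule above (keep adjusting parameters until a preset requirement is met or a preset iteration count is reached) can be sketched as a generic loop. The `step_fn`/`eval_fn` callables and the toy score counter below are invented placeholders for a real training step and validation pass, not part of the patent's method.

```python
def train_until(step_fn, eval_fn, target, max_iters):
    """Run training steps until eval_fn() meets the preset requirement
    `target`, or until `max_iters` iterations have been performed."""
    score = float("-inf")
    for it in range(1, max_iters + 1):
        step_fn()          # one parameter-adjusting training step
        score = eval_fn()  # compare predictions against ground truth
        if score >= target:  # preset requirement met: stop early
            break
    return it, score

# toy stand-in: the "model" improves by one point per step
state = {"score": 0}
step_fn = lambda: state.update(score=state["score"] + 1)
eval_fn = lambda: state["score"]

iters, final = train_until(step_fn, eval_fn, target=90, max_iters=200)
print(iters, final)  # -> 90 90 (stops as soon as the requirement is met)
```

With an unreachable target the loop instead terminates at the iteration cap, which mirrors the "or the preset iteration number is reached" branch.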
Specifically, the ability of the improved YOLOv7 network model in ship detection was trained and evaluated on the SeaShips dataset, which contains 7000 images of 1920×1080 pixels. Each image in the SeaShips dataset is annotated with an exact ship label and bounding box; the dataset was created from images captured by an on-site video surveillance system deployed around Hengqin Island in Zhuhai, China. The selected images cover diverse conditions, including different ship types, hull sections, proportions, viewing angles, illumination, and different levels of occlusion in various complex environments.
The SeaShips dataset is randomly sampled, and a training set and a test set are determined from the sampled data. To address complex illumination conditions, image noise, and other problems in the marine environment, image enhancement is applied to the dataset before the initial YOLOv7 network model is trained on the training set; the enhancement includes random rotation, random scaling, random cropping, and similar methods that increase the diversity and robustness of the data. Through such data expansion, the generalization capability of the model can be improved without adding annotated data.
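The random split and image-enhancement steps above can be sketched as a minimal NumPy-only pipeline. The 20% test fraction, the 64-pixel crop size, and the use of 90-degree rotations in place of arbitrary-angle rotation are choices made for this example only; a real detection pipeline would also transform the bounding-box annotations alongside the pixels.

```python
import numpy as np

def random_split(items, test_frac=0.2, seed=0):
    """Shuffle the dataset and split it into training and test sets."""
    idx = np.random.default_rng(seed).permutation(len(items))
    n_test = int(len(items) * test_frac)
    return [items[i] for i in idx[n_test:]], [items[i] for i in idx[:n_test]]

def augment(img, rng, crop=64):
    """Random scaling, rotation and cropping of an (H, W, C) image."""
    # random scaling via nearest-neighbour resampling
    s = rng.uniform(0.8, 1.2)
    h, w = img.shape[:2]
    rows = (np.arange(int(h * s)) / s).astype(int).clip(0, h - 1)
    cols = (np.arange(int(w * s)) / s).astype(int).clip(0, w - 1)
    img = img[rows][:, cols]
    # random rotation (multiples of 90 degrees in this sketch)
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    # random crop to a fixed output size
    h, w = img.shape[:2]
    y0 = int(rng.integers(0, h - crop + 1))
    x0 = int(rng.integers(0, w - crop + 1))
    return img[y0:y0 + crop, x0:x0 + crop]

train, test = random_split(list(range(7000)), test_frac=0.2)
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(96, 96, 3), dtype=np.uint8)
out = augment(img, rng)
print(len(train), len(test), out.shape)  # 5600 1400 (64, 64, 3)
```

Applying `augment` with fresh randomness at every epoch is what provides the data expansion without new annotations.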
Precision, Recall, and mAP (mean average precision over all classes) were selected as evaluation indexes. All evaluation metrics take values in the range [0, 1]; the closer a metric is to 1, the better the detection accuracy and the better the model.
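For reference, Precision, Recall, and the per-class average precision that mAP averages can be computed from a ranked detection list as below. This is a self-contained sketch using the all-points precision-recall area; the tiny tp/fp vector is fabricated for the example.

```python
import numpy as np

def precision_recall(tp, fp, n_gt):
    """Cumulative precision and recall over detections sorted by
    descending confidence; tp/fp are 0/1 indicator arrays."""
    tp_c, fp_c = np.cumsum(tp), np.cumsum(fp)
    recall = tp_c / n_gt
    precision = tp_c / (tp_c + fp_c)
    return precision, recall

def average_precision(precision, recall):
    """Area under the precision-recall curve (all-points interpolation)."""
    p = np.concatenate([[0.0], precision, [0.0]])
    r = np.concatenate([[0.0], recall, [1.0]])
    p = np.maximum.accumulate(p[::-1])[::-1]  # enforce monotone precision
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))

# toy example: 5 detections ranked by confidence against 4 ground-truth ships
tp = np.array([1, 1, 0, 1, 0])
fp = 1 - tp
prec, rec = precision_recall(tp, fp, n_gt=4)
ap = average_precision(prec, rec)
print(rec[-1], ap)  # 0.75 0.6875
```

mAP is then simply the mean of these per-class AP values over all ship classes, which is why the area enclosed by the P-R curve is used as the comparison criterion below.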
During the training phase of the model, the change in Recall can be observed in fig. 3, the change in Precision in fig. 4, and the change in mAP in fig. 5. The amplitude of the Precision and Recall changes gradually decreased over 100 training epochs. However, Precision and Recall fluctuate considerably during training due to certain annotation errors and uneven data distribution in the experimental dataset. After about 50 iterations, the variation in mAP gradually decreases. After 200 iterations of the improved YOLOv7 algorithm, the final mAP of the model stabilized at 90.15%.
FIG. 6 shows the precision-recall (P-R) curves of SSD, YOLOv7, and the improved YOLOv7 in the field of ship identification. The area enclosed by the P-R curve of the improved YOLOv7 model is clearly larger than that of the other two models. Since the area enclosed by the P-R curve and the coordinate axes equals the mAP value, and a higher mAP indicates better detection performance, the results show that the improved YOLOv7 outperforms SSD and YOLOv7 in ship identification and detection. FIG. 7 shows the per-class accuracy and mAP values of the three models on six different classes of vessels. The improved YOLOv7 algorithm reaches a detection accuracy of 90.15%, and for small fishing vessels its mAP is particularly high at 91.89%, clearly surpassing SSD and YOLOv7. These results highlight the advantages of the improved YOLOv7 model in identifying and detecting inland waterway vessels, and show that it can meet the detection requirements of intelligent ship navigation.
In one embodiment, as shown in fig. 8, a ship identification and detection apparatus 800 is provided, comprising an input module 810, an extraction module 820, a fusion module 830, and a prediction module 840, wherein:
the input module 810 is used for adjusting the size of an input image;
the extraction module 820 is used for extracting features of different scales from the resized image;
the fusion module 830 is used for fusing the features of different scales through a path-aggregation feature pyramid network to obtain fused features;
the prediction module 840 is used for performing prediction analysis on the fused features through a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure is shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The network interface of the computer device is used for communicating with an external terminal through a network connection. The display screen of the computer device can be a liquid crystal display or an electronic ink display, and the input device can be a touch layer covering the display screen, keys, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by persons skilled in the art that the architecture shown in fig. 9 is merely a block diagram of part of the architecture relevant to the present application and does not limit the computer devices to which the present application is applicable; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently. Specifically, the processor, when executing the computer program, performs the following steps:
inputting a video frame to be identified into an improved YOLOv7 network model, resizing the video frame to be identified, and extracting features of different scales from the resized video frame;
fusing the features of different scales through a path-aggregation feature pyramid network in the improved YOLOv7 network model to obtain fused features;
and performing prediction analysis on the fused features through a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, performs the following steps:
inputting a video frame to be identified into an improved YOLOv7 network model, resizing the video frame to be identified, and extracting features of different scales from the resized video frame;
fusing the features of different scales through a path-aggregation feature pyramid network in the improved YOLOv7 network model to obtain fused features;
and performing prediction analysis on the fused features through a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as the combinations are not contradictory, they should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the application; their description is detailed but is not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, all of which fall within its scope of protection. Accordingly, the scope of protection of the present application is determined by the appended claims.

Claims (9)

1. A ship identification and detection method, characterized by comprising the following steps:
inputting a video frame to be identified into an improved YOLOv7 network model, resizing the video frame to be identified, and extracting features of different scales from the resized video frame;
fusing the features of different scales through a path-aggregation feature pyramid network in the improved YOLOv7 network model to obtain fused features;
and performing prediction analysis on the fused features through a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result.
2. The ship identification and detection method according to claim 1, wherein the training process of the improved YOLOv7 network model comprises the following steps:
acquiring a dataset of ship images with irregular shapes and different sizes;
randomly sampling the dataset and dividing the sampled data into a training set and a test set;
training the initial YOLOv7 network model on the training set to obtain prediction results, comparing the prediction results with the ground truth, and, if the comparison does not meet a preset requirement, adjusting the parameters and continuing training until the requirement is met or a preset number of iterations is reached, thereby obtaining the improved YOLOv7 network model.
3. The ship identification and detection method according to claim 2, further comprising the following steps:
testing the improved YOLOv7 network model using the test set, and evaluating the model by precision, recall and mean average precision (mAP).
4. The ship identification and detection method according to claim 2, further comprising, before training the initial YOLOv7 network model on the training set, the following step:
performing image enhancement processing on the dataset.
5. The ship identification and detection method according to claim 4, wherein the image enhancement comprises random rotation, random scaling and random cropping.
6. The ship identification and detection method according to claim 2, wherein the sizes of the anchor boxes in the YOLOv7 network model are determined by a K-means clustering method.
7. A ship identification and detection device, characterized by comprising an input module, an extraction module, a fusion module and a prediction module, wherein:
the input module is used for adjusting the size of an input image;
the extraction module is used for extracting features of different scales from the resized image;
the fusion module is used for fusing the features of different scales through a path-aggregation feature pyramid network to obtain fused features;
the prediction module is used for performing prediction analysis on the fused features through a prediction network in the improved YOLOv7 network model to obtain a ship category prediction result.
8. A computer device, comprising: a memory for storing a computer program; a processor for implementing the method according to any one of claims 1 to 6 when executing the computer program.
9. A readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the method according to any of claims 1 to 6.
CN202310499439.2A 2023-05-06 2023-05-06 Ship identification and detection method, device, equipment and medium Pending CN116665088A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310499439.2A CN116665088A (en) 2023-05-06 2023-05-06 Ship identification and detection method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN116665088A true CN116665088A (en) 2023-08-29

Family

ID=87710277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310499439.2A Pending CN116665088A (en) 2023-05-06 2023-05-06 Ship identification and detection method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116665088A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652321A (en) * 2020-06-10 2020-09-11 江苏科技大学 Offshore ship detection method based on improved YOLOV3 algorithm
CN114241377A (en) * 2021-12-16 2022-03-25 海南大学 Ship target detection method, device, equipment and medium based on improved YOLOX
WO2022120901A1 (en) * 2020-12-09 2022-06-16 中国科学院深圳先进技术研究院 Image detection model training method based on feature pyramid, medium, and device
CN114782798A (en) * 2022-04-19 2022-07-22 杭州电子科技大学 Underwater target detection method based on attention fusion
CN115798133A (en) * 2022-10-20 2023-03-14 中国兵器装备集团自动化研究所有限公司 Flame alarm method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN110310264B (en) DCNN-based large-scale target detection method and device
CN110852285B (en) Object detection method and device, computer equipment and storage medium
CN111191533B (en) Pedestrian re-recognition processing method, device, computer equipment and storage medium
CN114241377A (en) Ship target detection method, device, equipment and medium based on improved YOLOX
CN113569667A (en) Inland ship target identification method and system based on lightweight neural network model
Wang et al. Ship detection based on fused features and rebuilt YOLOv3 networks in optical remote-sensing images
CN112132131B (en) Measuring cylinder liquid level identification method and device
CN116665095B (en) Method and system for detecting motion ship, storage medium and electronic equipment
CN111091095A (en) Method for detecting ship target in remote sensing image
CN113706481A (en) Sperm quality detection method, sperm quality detection device, computer equipment and storage medium
CN113705375A (en) Visual perception device and method for ship navigation environment
CN111582182A (en) Ship name identification method, system, computer equipment and storage medium
CN114037907A (en) Detection method and device for power transmission line, computer equipment and storage medium
CN110991374B (en) Fingerprint singular point detection method based on RCNN
CN113487610A (en) Herpes image recognition method and device, computer equipment and storage medium
CN112037225A (en) Marine ship image segmentation method based on convolutional nerves
Fan et al. A novel sonar target detection and classification algorithm
CN116597343A (en) Expressway weather identification method and device based on ensemble learning algorithm
CN111553183A (en) Ship detection model training method, ship detection method and ship detection device
CN106469293A (en) The method and system of quick detection target
CN116778176B (en) SAR image ship trail detection method based on frequency domain attention
CN113610178A (en) Inland ship target detection method and device based on video monitoring image
CN111368599A (en) Remote sensing image sea surface ship detection method and device, readable storage medium and equipment
CN117456346A (en) Underwater synthetic aperture sonar image target detection method and system
CN116665088A (en) Ship identification and detection method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination