CN111144208A - Automatic detection and identification method for marine vessel target and target detector - Google Patents


Info

Publication number
CN111144208A
CN111144208A (application CN201911156854.8A)
Authority
CN
China
Prior art keywords
network
image
training
adopting
marine vessel
Prior art date
Legal status (an assumption, not a legal conclusion)
Pending
Application number
CN201911156854.8A
Other languages
Chinese (zh)
Inventor
刘柳
吕腾
刘新新
文龙贻彬
Current Assignee
Aerospace Times (Qingdao) marine equipment technology development Co.,Ltd.
Original Assignee
Beijing Aerospace Wanda Hi Tech Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Aerospace Wanda Hi Tech Ltd filed Critical Beijing Aerospace Wanda Hi Tech Ltd
Priority to CN201911156854.8A priority Critical patent/CN111144208A/en
Publication of CN111144208A publication Critical patent/CN111144208A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V 20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F 18/29 — Graphical models, e.g. Bayesian networks
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06V 20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames


Abstract

The invention relates to an automatic detection and identification method for marine vessel targets. The method comprises the following steps: (1) collecting image samples containing marine vessel targets with a visible-light camera, and building from the samples a marine vessel target image library comprising a training set and a test set; (2) constructing a deep neural network based on the Faster R-CNN algorithm, and setting the corresponding parameters; (3) training the neural network off-line on the training set to obtain a marine vessel target detector; (4) inputting test-set images, and performing detection and identification with the marine vessel target detector. The invention builds the feature extraction network and the proposal-box generation network with shared convolutional layers, which maximizes the expression of image features and realizes pixel-level feature extraction and learning.

Description

Automatic detection and identification method for marine vessel target and target detector
Technical Field
The invention relates to an automatic detection and identification method for marine vessel targets, and belongs to the field of target detection and identification.
Background
In recent years, ocean-development activities such as ocean energy exploration, seabed environment survey, and maritime transportation have grown steadily in many countries, and the demand for marine equipment such as deep-submergence vehicles, ocean survey instruments, and maritime transport craft keeps increasing. With the development of artificial-intelligence technology, the share of unmanned intelligent equipment in both military and civilian applications is rising rapidly.
An unmanned surface vehicle (USV) is an important application of unmanned technology in the water environment: a platform capable of autonomous navigation that completes its assigned tasks on oceans, lakes, rivers, and similar waters. USVs are small, low-cost, maneuverable, and autonomous. Compared with conventional ships, the unmanned character of the USV allows it to adapt to the complex and changeable conditions of the ocean and other waters, and to operate in scenarios inaccessible to humans. Compared with stationary unmanned marine equipment, a USV covers a wider area, responds in closer to real time, and can operate flexibly at different positions. At present, USVs are widely used in military applications such as gathering maritime battlefield information, combat training and strike, and replenishment support, and in civilian applications such as maritime search and rescue, environmental monitoring, hydrological survey, harbor patrol, maritime tracking and law enforcement, and fishing.
The key to autonomous navigation by an unmanned ship is fast and efficient autonomous path planning, and good path planning depends chiefly on accurate perception of the surrounding environment. A vision sensor perceives the environment intuitively and can therefore acquire accurate information about the objects in it. With the rapid development of deep-learning technology in particular, vision-based environment perception for unmanned ships has become a hot research direction in artificial intelligence. In practice, however, efficient and accurate detection and identification of marine vessel targets remains one of the difficulties of pattern recognition, owing to factors such as complex maritime weather and illumination changes.
Disclosure of Invention
The technical problem solved by the invention is as follows: overcoming the defects of the prior art, an efficient, accurate, and robust automatic detection and identification method for marine vessel targets, and a corresponding target detector, are provided.
The technical solution of the invention is as follows: an automatic detection and identification method for marine vessel targets comprises the following steps:
(1) collecting image samples containing marine vessel targets with a visible-light camera, and building from the samples a marine vessel target image library comprising a training set and a test set;
(2) constructing a deep neural network adopting the Faster R-CNN algorithm, and setting the corresponding parameters;
(3) training the Faster R-CNN deep neural network off-line on the training set to obtain a marine vessel target detector;
(4) inputting test-set images, and performing detection and identification with the marine vessel target detector.
The specific steps of step (1) are as follows:
(1.1) manually labeling each obstacle in an image with the minimum rectangle enclosing it, the training set comprising the images and the corresponding label information;
(1.2) expanding the training-set samples by image translation, mirroring, noise addition, and scaling.
The specific steps of step (3) are as follows:
(3.1) pre-training on the ImageNet data set to obtain the initial weight parameters of the Faster R-CNN network;
(3.2) inputting the training set into the Faster R-CNN network for off-line training;
(3.3) building the feature extraction network and the proposal-box generation network with shared convolutional layers in the Faster R-CNN deep neural network; the feature extraction network comprises several convolutional layers, a pooling layer, and a fully-connected layer; the proposal-box generation network comprises a convolutional layer and two fully-connected layers;
(3.4) training the feature extraction network and the proposal-box generation network with the training set.
The specific steps of step (3.4) are as follows:
(3.4.1) inputting the images and proposal-box labels, and training the proposal-box generation network; the weight parameters are initialized with Gaussian random numbers of zero mean and variance 0.01;
(3.4.2) training the feature extraction network with the generated proposal boxes; the weight parameters are initialized in the same way;
(3.4.3) re-initializing the proposal-box generation network from the detection network, fixing the parameters of the shared layers, and adjusting only the parameters of the remaining layers;
(3.4.4) fixing the parameters of the shared convolutional layers, and re-tuning the weight parameters of the fully-connected layers in the feature extraction network.
The specific steps of step (4) are as follows:
(4.1) extracting the image features of a test-set image with the feature extraction network, taking the last convolutional layer of the feature extraction network as the feature map, and generating proposal boxes on the feature map with the proposal-box generation network;
(4.2) identifying the targets inside the proposal boxes, and processing all identified regions with a non-maximum suppression algorithm to obtain the final identification result.
A marine vessel target detector comprising:
a first module, which collects image samples containing marine vessel targets with a visible-light camera and builds from them a marine vessel target image library comprising a training set and a test set;
a second module, which constructs a deep neural network adopting the Faster R-CNN algorithm and sets the corresponding parameters;
and a third module, which trains the Faster R-CNN deep neural network off-line on the training set.
The first module manually labels each obstacle in an image with the minimum rectangle enclosing it, the training set comprising the images and the corresponding label information, and expands the training-set samples by image translation, mirroring, noise addition, and scaling.
The third module pre-trains on the ImageNet data set to obtain the initial weight parameters of the Faster R-CNN network; inputs the training set into the Faster R-CNN network for off-line training; builds the feature extraction network and the proposal-box generation network with shared convolutional layers, the feature extraction network comprising several convolutional layers, a pooling layer, and a fully-connected layer, and the proposal-box generation network comprising a convolutional layer and two fully-connected layers; and trains the two networks with the training set.
The specific method for training the feature extraction network and the proposal-box generation network with the training set is:
(3.4.1) inputting the images and proposal-box labels, and training the proposal-box generation network; the weight parameters are initialized with Gaussian random numbers of zero mean and variance 0.01;
(3.4.2) training the feature extraction network with the generated proposal boxes; the weight parameters are initialized in the same way;
(3.4.3) re-initializing the proposal-box generation network from the detection network, fixing the parameters of the shared layers, and adjusting only the parameters of the remaining layers;
(3.4.4) fixing the parameters of the shared convolutional layers, and re-tuning the weight parameters of the fully-connected layers in the feature extraction network.
Compared with the prior art, the invention has the following advantages:
Compared with traditional marine target detection and identification methods, the Faster R-CNN algorithm builds the feature extraction network and the proposal-box generation network with shared convolutional layers, which maximizes the expression of image features and realizes pixel-level feature extraction and learning. In addition, sharing the convolutional layers greatly reduces the memory footprint and cuts the computation time several-fold, so the real-time processing requirement can be met; identifying targets only inside the proposal boxes further reduces the amount of computation and shortens the computation time.
Drawings
FIG. 1 is a flow chart of the operation of the present invention.
Fig. 2 is a general flow chart of the identification of the present invention.
FIG. 3 is a diagram of the detection and identification results of an embodiment of the present invention.
Detailed Description
The invention provides an automatic detection and identification method for marine vessel targets, described in further detail below with reference to the drawings and an embodiment. It is to be understood that the embodiments described here merely illustrate and explain the invention and do not restrict it.
As shown in fig. 1, an embodiment of the invention provides an automatic detection and identification method for marine vessel targets, comprising the following steps:
(1) collecting image samples containing marine vessel targets with a visible-light camera, and building from the samples a marine vessel target image library comprising a training set and a test set;
(2) constructing a deep neural network based on the Faster R-CNN algorithm, and setting the corresponding parameters;
(3) training the Faster R-CNN deep neural network off-line on the training set to obtain a marine vessel target detector;
(4) inputting test-set images, and performing detection and identification with the marine vessel target detector.
The specific method of step (1) is as follows:
(1.1) manually labeling each obstacle in an image with the minimum rectangle enclosing it, the training set comprising the images and the corresponding label information;
(1.2) expanding the training-set samples by methods such as image translation, mirroring, noise addition, and scaling.
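As a concrete illustration of step (1.2), the four expansion operations can be sketched in a few lines of numpy. This is a minimal sketch under assumed conventions (uint8 images, zero-filled borders, nearest-neighbour resampling); the patent does not specify the implementations, and all function names are illustrative.

```python
import numpy as np

def translate(img, dx, dy):
    """Shift the image by (dx, dy) pixels, zero-filling the exposed border."""
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def mirror(img):
    """Horizontal flip (mirror image)."""
    return img[:, ::-1]

def add_gaussian_noise(img, sigma=5.0, rng=None):
    """Add zero-mean Gaussian noise and clip back to the uint8 range."""
    rng = rng or np.random.default_rng(0)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(img.dtype)

def scale(img, factor):
    """Nearest-neighbour resize by the given factor."""
    h, w = img.shape[:2]
    ys = (np.arange(int(h * factor)) / factor).astype(int).clip(0, h - 1)
    xs = (np.arange(int(w * factor)) / factor).astype(int).clip(0, w - 1)
    return img[np.ix_(ys, xs)]
```

Each operation maps one labeled sample to a new one; the bounding-box labels would be transformed accordingly (shifted, flipped, or rescaled), which is omitted here for brevity.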
The specific method of step (3) is as follows:
(3.1) pre-training on the ImageNet data set to obtain the initial weight parameters of the Faster R-CNN network;
(3.2) inputting the training set into the Faster R-CNN network for off-line training;
(3.3) building the feature extraction network and the proposal-box generation network with shared convolutional layers in the Faster R-CNN deep neural network. The feature extraction network comprises several convolutional layers, a pooling layer, and a fully-connected layer. The proposal-box generation network comprises a convolutional layer and two fully-connected layers.
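The shared-convolutional-layer idea of step (3.3) can be illustrated with a toy numpy computation: the convolution runs once, and both heads consume the same feature map instead of each recomputing it. The conv2d helper, the 3×3 kernel, and the two stand-in heads are illustrative assumptions, not the patent's actual architecture.

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2-D cross-correlation (single channel) via explicit loops."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
shared_kernel = rng.normal(0.0, 0.01, (3, 3))  # the shared convolutional layer
image = rng.normal(size=(16, 16))

feature_map = np.maximum(conv2d(image, shared_kernel), 0.0)  # conv + ReLU, run once

# Both heads consume the SAME feature map: the proposal-box head scores candidate
# windows and the recognition head classifies them; neither re-runs the
# convolution, which is where the memory and runtime savings come from.
proposal_score = feature_map.mean()  # stand-in for the proposal-box head
class_logit = feature_map.max()      # stand-in for the classifier head
```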
(3.4) training the feature extraction network and the proposal-box generation network with the training set, specifically:
(3.4.1) first, inputting the images and the proposal-box labels and training the proposal-box generation network; the weight parameters are initialized with Gaussian random numbers of zero mean and variance 0.01;
(3.4.2) training the feature extraction network with the generated proposal boxes, initialized in the same way; at this point the two networks do not yet share convolutional layers;
(3.4.3) re-initializing the proposal-box generation network from the whole detection network, but fixing the shared-layer parameters and adjusting only the parameters of the remaining layers; the two networks now share the convolutional-layer parameters;
(3.4.4) finally, fixing the parameters of the shared convolutional layers and re-tuning the weight parameters of the fully-connected layers in the feature extraction network;
(3.4.5) the training of the whole network is then complete.
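A plain-Python bookkeeping sketch of the four alternating steps may help clarify which parameter groups each step updates. The groups and the train_step helper are illustrative stand-ins for real optimization, not the patent's implementation.

```python
# Three parameter groups: the convolutional layers that end up shared, the
# proposal-box network's own layers, and the detection network's own layers.
shared_conv, rpn_head, det_head = {}, {}, {}

def train_step(step, groups):
    """Pretend to train: mark each trainable group as touched by this step."""
    for g in groups:
        g[step] = "updated"

train_step("step1", [shared_conv, rpn_head])  # (3.4.1) train proposal-box net
train_step("step2", [shared_conv, det_head])  # (3.4.2) train detector on proposals
train_step("step3", [rpn_head])               # (3.4.3) conv now shared and frozen
train_step("step4", [det_head])               # (3.4.4) conv stays frozen; tune fc
```

The point the sketch makes explicit is that after step 2 the convolutional parameters are never updated again: steps 3 and 4 touch only the head layers.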
During training, all parameters requiring initialization, such as the network-layer weights, are assigned random values drawn from a Gaussian distribution with zero mean and variance 0.01. A learning rate of 0.001 is used for the first three quarters of the training samples and 0.0001 for the last quarter. The momentum of the gradient descent is 0.9 and the weight-decay value is 0.0005 (all of these values are dimensionless).
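The stated hyperparameters can be written down as a small sketch. Two interpretation choices are assumed here and not fixed by the text: variance 0.01 is taken to mean a standard deviation of 0.1, and the update is taken to be plain momentum SGD with additive weight decay.

```python
import numpy as np

def init_weights(shape, rng):
    """Zero-mean Gaussian init with variance 0.01 (i.e. std = 0.1)."""
    return rng.normal(loc=0.0, scale=np.sqrt(0.01), size=shape)

def learning_rate(step, total_steps):
    """0.001 for the first three quarters of training, 0.0001 afterwards."""
    return 0.001 if step < 0.75 * total_steps else 0.0001

MOMENTUM = 0.9        # momentum of the gradient descent
WEIGHT_DECAY = 0.0005  # parametric weight-decay value

def sgd_update(w, grad, velocity, lr):
    """One SGD step with momentum and additive weight decay."""
    velocity = MOMENTUM * velocity - lr * (grad + WEIGHT_DECAY * w)
    return w + velocity, velocity
```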
The specific method of step (4) is as follows:
(4.1) extracting the image features of a test-set image with the feature extraction network, taking the last convolutional layer of the feature extraction network as the feature map, and generating proposal boxes on the feature map with the proposal-box generation network;
(4.2) identifying the targets inside the proposal boxes, and processing all identified regions with a non-maximum suppression (NMS) algorithm to obtain the final identification result.
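Non-maximum suppression in step (4.2) greedily keeps the highest-scoring box and discards lower-scoring boxes that overlap it too much. Below is a standard numpy formulation of greedy NMS; the patent does not give one, so the box format ([x1, y1, x2, y2]) and the IoU threshold are assumptions.

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]. Returns the indices of kept
    boxes, highest score first.
    """
    order = np.argsort(scores)[::-1]  # process boxes from best to worst
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the top box with each remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop every remaining box that overlaps the kept one too much.
        order = rest[iou <= iou_threshold]
    return keep
```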
It should be noted that, because the proposal-box generation network is used, error back-propagation can be realized with end-to-end training.
According to the back-propagation relations, the error of each layer can be computed layer by layer, and the weight parameters of each layer are updated according to the error values.
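The layer-by-layer update can be illustrated for a single fully-connected layer y = Wx under a squared-error loss: the error signal from above is combined with the layer input to form the weight gradient, and one gradient step lowers the loss. The sizes, seed, and learning rate here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))        # layer input
W = rng.normal(0.0, 0.1, (3, 4))   # layer weights
target = np.zeros((3, 1))          # desired output

y = W @ x
delta = y - target        # error signal at this layer (squared-error loss)
grad_W = delta @ x.T      # back-propagated weight gradient: outer product
W_new = W - 0.001 * grad_W  # weight update from the error value

loss_before = float(np.sum((W @ x - target) ** 2))
loss_after = float(np.sum((W_new @ x - target) ** 2))
```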
Fig. 2 is a schematic diagram of the overall identification flow of the invention; the whole identification network comprises a feature-extraction deep-learning network and a proposal-box generation network (only one convolutional layer and one fully-connected layer are shown in the figure).
Fig. 3 shows detection and identification results of an embodiment of the invention; vessels of different types, distances, and sizes can all be detected and identified.
Matters not described in detail in the present specification are well known to those skilled in the art.

Claims (9)

1. An automatic detection and identification method for marine vessel targets, characterized by comprising the following steps:
(1) collecting image samples containing marine vessel targets with a visible-light camera, and building from the samples a marine vessel target image library comprising a training set and a test set;
(2) constructing a deep neural network adopting the Faster R-CNN algorithm, and setting the corresponding parameters;
(3) training the Faster R-CNN deep neural network off-line on the training set to obtain a marine vessel target detector;
(4) inputting test-set images, and performing detection and identification with the marine vessel target detector.
2. The automatic detection and identification method for marine vessel targets according to claim 1, characterized in that the specific steps of step (1) are:
(1.1) manually labeling each obstacle in an image with the minimum rectangle enclosing it, the training set comprising the images and the corresponding label information;
(1.2) expanding the training-set samples by image translation, mirroring, noise addition, and scaling.
3. The automatic detection and identification method for marine vessel targets according to claim 1 or 2, characterized in that the specific steps of step (3) are:
(3.1) pre-training on the ImageNet data set to obtain the initial weight parameters of the Faster R-CNN network;
(3.2) inputting the training set into the Faster R-CNN network for off-line training;
(3.3) building the feature extraction network and the proposal-box generation network with shared convolutional layers in the Faster R-CNN deep neural network; the feature extraction network comprises several convolutional layers, a pooling layer, and a fully-connected layer; the proposal-box generation network comprises a convolutional layer and two fully-connected layers;
(3.4) training the feature extraction network and the proposal-box generation network with the training set.
4. The automatic detection and identification method for marine vessel targets according to claim 3, characterized in that the specific steps of step (3.4) are:
(3.4.1) inputting the images and proposal-box labels, and training the proposal-box generation network; the weight parameters are initialized with Gaussian random numbers of zero mean and variance 0.01;
(3.4.2) training the feature extraction network with the generated proposal boxes; the weight parameters are initialized in the same way;
(3.4.3) re-initializing the proposal-box generation network from the detection network, fixing the parameters of the shared layers, and adjusting only the parameters of the remaining layers;
(3.4.4) fixing the parameters of the shared convolutional layers, and re-tuning the weight parameters of the fully-connected layers in the feature extraction network.
5. The automatic detection and identification method for marine vessel targets according to claim 4, characterized in that the specific steps of step (4) are:
(4.1) extracting the image features of a test-set image with the feature extraction network, taking the last convolutional layer of the feature extraction network as the feature map, and generating proposal boxes on the feature map with the proposal-box generation network;
(4.2) identifying the targets inside the proposal boxes, and processing all identified regions with a non-maximum suppression algorithm to obtain the final identification result.
6. A marine vessel target detector, characterized by comprising:
a first module, which collects image samples containing marine vessel targets with a visible-light camera and builds from them a marine vessel target image library comprising a training set and a test set;
a second module, which constructs a deep neural network adopting the Faster R-CNN algorithm and sets the corresponding parameters;
and a third module, which trains the Faster R-CNN deep neural network off-line on the training set.
7. The marine vessel target detector according to claim 6, characterized in that the first module manually labels each obstacle in an image with the minimum rectangle enclosing it, the training set comprising the images and the corresponding label information, and expands the training-set samples by image translation, mirroring, noise addition, and scaling.
8. The marine vessel target detector according to claim 7, characterized in that the third module pre-trains on the ImageNet data set to obtain the initial weight parameters of the Faster R-CNN network; inputs the training set into the Faster R-CNN network for off-line training; builds the feature extraction network and the proposal-box generation network with shared convolutional layers, the feature extraction network comprising several convolutional layers, a pooling layer, and a fully-connected layer, and the proposal-box generation network comprising a convolutional layer and two fully-connected layers; and trains the two networks with the training set.
9. The marine vessel target detector according to claim 8, characterized in that the specific method for training the feature extraction network and the proposal-box generation network with the training set is:
(3.4.1) inputting the images and proposal-box labels, and training the proposal-box generation network; the weight parameters are initialized with Gaussian random numbers of zero mean and variance 0.01;
(3.4.2) training the feature extraction network with the generated proposal boxes; the weight parameters are initialized in the same way;
(3.4.3) re-initializing the proposal-box generation network from the detection network, fixing the parameters of the shared layers, and adjusting only the parameters of the remaining layers;
(3.4.4) fixing the parameters of the shared convolutional layers, and re-tuning the weight parameters of the fully-connected layers in the feature extraction network.
CN201911156854.8A 2019-11-22 2019-11-22 Automatic detection and identification method for marine vessel target and target detector Pending CN111144208A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911156854.8A CN111144208A (en) 2019-11-22 2019-11-22 Automatic detection and identification method for marine vessel target and target detector

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911156854.8A CN111144208A (en) 2019-11-22 2019-11-22 Automatic detection and identification method for marine vessel target and target detector

Publications (1)

Publication Number Publication Date
CN111144208A 2020-05-12

Family

ID=70517261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911156854.8A Pending CN111144208A (en) 2019-11-22 2019-11-22 Automatic detection and identification method for marine vessel target and target detector

Country Status (1)

Country Link
CN (1) CN111144208A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434794A (en) * 2020-11-30 2021-03-02 国电南瑞科技股份有限公司 Computer vision data set semi-automatic labeling method and system based on deep learning
CN112464879A (en) * 2020-12-10 2021-03-09 山东易视智能科技有限公司 Ocean target detection method and system based on self-supervision characterization learning
CN113313166A (en) * 2021-05-28 2021-08-27 华南理工大学 Ship target automatic labeling method based on feature consistency learning
CN115062839A (en) * 2022-06-13 2022-09-16 陕西省地震局 Extreme seismic region intensity evaluation method and system, electronic device and readable storage medium
CN115062839B (en) * 2022-06-13 2024-07-30 陕西省地震局 Method, system, electronic equipment and readable storage medium for evaluating intensity of extremely-seismic region

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156744A (en) * 2016-07-11 2016-11-23 西安电子科技大学 SAR target detection method based on CFAR detection with degree of depth study
CN106504233A (en) * 2016-10-18 2017-03-15 国网山东省电力公司电力科学研究院 Image electric power widget recognition methodss and system are patrolled and examined based on the unmanned plane of Faster R CNN
CN107480730A (en) * 2017-09-05 2017-12-15 广州供电局有限公司 Power equipment identification model construction method and system, the recognition methods of power equipment
CN108052940A (en) * 2017-12-17 2018-05-18 南京理工大学 SAR remote sensing images waterborne target detection methods based on deep learning
CN108256634A (en) * 2018-02-08 2018-07-06 杭州电子科技大学 A kind of ship target detection method based on lightweight deep neural network
CN108596030A (en) * 2018-03-20 2018-09-28 杭州电子科技大学 Sonar target detection method based on Faster R-CNN
CN109376634A (en) * 2018-10-15 2019-02-22 北京航天控制仪器研究所 A kind of Bus driver unlawful practice detection system neural network based
CN109934088A (en) * 2019-01-10 2019-06-25 海南大学 Sea ship discrimination method based on deep learning
CN110188696A (en) * 2019-05-31 2019-08-30 华南理工大学 A kind of water surface is unmanned to equip multi-source cognitive method and system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Feng Ke et al., "High-resolution fast target detection method based on neural networks", Electronic Design Engineering *
Wu Tianshu et al., "Lightweight small-target detection algorithm based on improved SSD", Infrared and Laser Engineering *
Zhao Chunhui et al., "Ship target detection and recognition based on an improved Faster R-CNN algorithm", Journal of Shenyang University (Natural Science Edition) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434794A (en) * 2020-11-30 2021-03-02 NARI Technology Co., Ltd. Semi-automatic labeling method and system for computer-vision datasets based on deep learning
CN112464879A (en) * 2020-12-10 2021-03-09 Shandong Yishi Intelligent Technology Co., Ltd. Ocean target detection method and system based on self-supervised representation learning
CN112464879B (en) * 2020-12-10 2022-04-01 Shandong Yishi Intelligent Technology Co., Ltd. Ocean target detection method and system based on self-supervised representation learning
CN113313166A (en) * 2021-05-28 2021-08-27 South China University of Technology Automatic ship target labeling method based on feature-consistency learning
CN115062839A (en) * 2022-06-13 2022-09-16 Shaanxi Earthquake Agency Extreme seismic region intensity evaluation method and system, electronic device, and readable storage medium
CN115062839B (en) * 2022-06-13 2024-07-30 Shaanxi Earthquake Agency Extreme seismic region intensity evaluation method and system, electronic device, and readable storage medium

Similar Documents

Publication Publication Date Title
Lu et al. CONet: A cognitive ocean network
CN111144208A (en) Automatic detection and identification method for marine vessel target and target detector
CN108230302B (en) Detection and disposal method for marine organisms invading the cold-source sea area of a nuclear power plant
Liu et al. Detection and pose estimation for short-range vision-based underwater docking
CN110889324A (en) Thermal infrared image target recognition method for terminal guidance based on YOLO V3
Bagnitsky et al. Side scan sonar using for underwater cables & pipelines tracking by means of AUV
Qingqing et al. Towards active vision with UAVs in marine search and rescue: Analyzing human detection at variable altitudes
CN116245916B (en) Unmanned ship-oriented infrared ship target tracking method and device
KR102373493B1 (en) Learning method and learning device for updating hd map by reconstructing 3d space by using depth estimation information and class information on each object, which have been acquired through v2x information integration technique, and testing method and testing device using the same
US5537511A (en) Neural network based data fusion system for source localization
Wang et al. Robust AUV visual loop-closure detection based on variational autoencoder network
CN109859202A (en) Deep-learning detection method based on USV water-surface optical target tracking
Radeta et al. Deep learning and the oceans
CN112880678A (en) Unmanned ship navigation planning method in complex water area environment
CN110569387B (en) Radar-image cross-modal retrieval method based on depth hash algorithm
CN117214904A (en) Intelligent fish identification monitoring method and system based on multi-sensor data
Gopal et al. Tiny object detection: Comparative study using single stage CNN object detectors
CN115115863A (en) Water surface multi-scale target detection method, device and system and storage medium
CN112268564B (en) Unmanned aerial vehicle landing space position and attitude end-to-end estimation method
CN110119671A (en) Underwater perception method based on artificial lateral-line visual images
CN117454680A (en) Ocean search cluster design system and method
Yao et al. LiDAR based navigable region detection for unmanned surface vehicles
CN116704688A (en) Ocean buoy passive defense method and system
Zhou et al. A real-time algorithm for visual detection of high-speed unmanned surface vehicle based on deep learning
CN107941220B (en) Unmanned ship sea antenna detection and navigation method and system based on vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220408

Address after: Aoshanwei Subdistrict, Jimo District, Qingdao, Shandong Province, 266200

Applicant after: Aerospace Times (Qingdao) marine equipment technology development Co.,Ltd.

Address before: 142 box 403, box 100854, Beijing, Beijing, Haidian District

Applicant before: BEIJING INSTITUTE OF AEROSPACE CONTROL DEVICES

RJ01 Rejection of invention patent application after publication

Application publication date: 20200512
