CN114898308A - Ship cockpit position detection method and system based on deep convolutional neural network


Info

Publication number
CN114898308A
CN114898308A (application number CN202210823174.2A)
Authority
CN
China
Prior art keywords
ship
cockpit
outer frame
image
ships
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210823174.2A
Other languages
Chinese (zh)
Other versions
CN114898308B (en)
Inventor
邓宏平
高祥
罗杨杨
林德银
Current Assignee
Shanghai Yingjue Technology Co ltd
Original Assignee
Shanghai Yingjue Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yingjue Technology Co ltd
Priority to CN202210823174.2A
Publication of CN114898308A
Application granted
Publication of CN114898308B
Legal status: Active

Classifications

    • G06V20/54: Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G06N3/045: Neural networks; Combinations of networks
    • G06N3/084: Learning methods; Backpropagation, e.g. using gradient descent
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V10/764: Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Image or video recognition using neural networks
    • G06V2201/07: Target detection
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a ship cockpit position detection method and system based on a deep convolutional neural network, comprising the following steps: step 1: acquiring an image containing a ship, and detecting the ship in the image with a ship detector to obtain the ship's outer frame; step 2: collecting cockpit samples and training on them, then marking the cockpit within the obtained ship outer frame; and step 3: training the single-stage target detection algorithm yolov5 and judging the orientation of the marked cockpit, thereby obtaining the ship's heading direction and the cockpit position. Because the cockpit detection process disclosed by the invention is performed within the ship's outer frame, the probability of false cockpit detections in the background area is reduced and detection time is saved.

Description

Ship cockpit position detection method and system based on deep convolutional neural network
Technical Field
The invention relates to the technical field of position detection, and in particular to a ship cockpit position detection method and system based on a deep convolutional neural network.
Background
Within transportation, water transport involves extensive safety management requirements, and video and image monitoring of ships travelling in a channel provides very important support for safety in this field. After a monitoring image of a ship is obtained, locating the position of the ship's cockpit provides richer detail information and makes searching ship images more convenient. When ship image information is stored, structured information about the cockpit position is more conducive to storage and retrieval.
Patent document CN110097055A (application number: CN201910358925.6) discloses a vehicle attitude detection method and system based on a grid convolutional neural network. The method builds on an SSD network, improves a traditional attitude detection network with a partially connected network, constructs a grid convolution, and designs a grid convolutional neural network based on it. Data acquired by a vehicle-mounted camera are divided into a training data set and a test data set, and the trained grid convolutional neural network generates a number of candidate image regions for an image to be detected; whether a vehicle exists in each candidate region is judged through feature extraction and classification by the grid convolutional neural network model. Finally, window fusion is applied to the candidate regions judged to contain a vehicle, yielding accurate vehicle target position and attitude information.
However, conventional methods have difficulty achieving accurate cockpit positioning free of background interference.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a ship cockpit position detection method and system based on a deep convolutional neural network.
The ship cockpit position detection method based on a deep convolutional neural network provided by the invention comprises the following steps:
step 1: acquiring an image containing a ship, and detecting the ship in the image with a ship detector to obtain the ship's outer frame;
step 2: first marking the cockpit's outer frame region, then training a model dedicated to cockpit detection using the cockpit samples and a yolov5 network; the model's output is the cockpit's outer frame position, comprising the top-left X coordinate, the top-left Y coordinate, the width and the height;
and step 3: first training the single-stage target detection algorithm yolov5, where single-stage means classifying fixed regions of interest and regressing their positions, then judging whether the image shows the side or the front/back of the ship, thereby obtaining the cockpit position and the ship's heading direction.
Preferably, the step 1 comprises:
acquiring images containing ships and drawing the outer frame of each ship in the image;
and training the parameters of the ship detector on the ship samples until the detection rate meets a preset condition.
Preferably, the step 2 comprises:
acquiring ship images covering various situations, including different weather conditions, different viewing angles and different types of ships;
and cropping the ship regions from the original images and then marking the cockpit's outer frame region, so that each ship forms one image whose corresponding cockpit region is stored in document form.
Preferably, the step 3 comprises:
judging whether the image shows the side or the front/back of the ship according to the size of the ship's outer frame;
judging whether the image shows the side or the front/back of the ship according to the positional relationship between the ship's outer frame and the cockpit's outer frame;
when ships overlap, saving the outer frame coordinates of all ships in the image and performing cockpit detection within the outer frame region of the current ship; if more than two cockpits are found, analysing the distance from each cockpit's coordinates to the vertical coordinate of the current ship's centre point, the cockpit with the smallest distance being assigned to the current ship; cockpits farther from the current ship are compared against the other ships;
if neither situation above applies, the ship is seen from the side, and whether the cockpit is at the front, the middle or the rear is judged from the relative position of the cockpit and the ship's outer frame: the ship's outer frame is divided horizontally into left, middle and right thirds, and the cockpit position corresponds to the third containing its centre point.
Preferably, a visible-light camera and an infrared camera are both installed at each detection point, with the detection point and the ship on the same horizontal line, and switching times between the two cameras are set; in the daytime the cockpit position is detected with a visible-light network model, and at night with an infrared network model.
The ship cockpit position detection system based on the deep convolutional neural network provided by the invention comprises:
module M1: acquiring an image containing a ship, and detecting the ship in the image with a ship detector to obtain the ship's outer frame;
module M2: first marking the cockpit's outer frame region, then training a model dedicated to cockpit detection using the cockpit samples and a yolov5 network; the model's output is the cockpit's outer frame position, comprising the top-left X coordinate, the top-left Y coordinate, the width and the height;
module M3: first training the single-stage target detection algorithm yolov5, where single-stage means classifying fixed regions of interest and regressing their positions, then judging whether the image shows the side or the front/back of the ship, thereby obtaining the cockpit position and the ship's heading direction.
Preferably, the module M1 includes:
acquiring images containing ships and drawing the outer frame of each ship in the image;
and training the parameters of the ship detector on the ship samples until the detection rate meets a preset condition.
Preferably, the module M2 includes:
acquiring ship images covering various situations, including different weather conditions, different viewing angles and different types of ships;
and cropping the ship regions from the original images and then marking the cockpit's outer frame region, so that each ship forms one image whose corresponding cockpit region is stored in document form.
Preferably, the module M3 includes:
judging whether the image shows the side or the front/back of the ship according to the size of the ship's outer frame;
judging whether the image shows the side or the front/back of the ship according to the positional relationship between the ship's outer frame and the cockpit's outer frame;
when ships overlap, saving the outer frame coordinates of all ships in the image and performing cockpit detection within the outer frame region of the current ship; if more than two cockpits are found, analysing the distance from each cockpit's coordinates to the vertical coordinate of the current ship's centre point, the cockpit with the smallest distance being assigned to the current ship; cockpits farther from the current ship are compared against the other ships;
if neither situation above applies, the ship is seen from the side, and whether the cockpit is at the front, the middle or the rear is judged from the relative position of the cockpit and the ship's outer frame: the ship's outer frame is divided horizontally into left, middle and right thirds, and the cockpit position corresponds to the third containing its centre point.
Preferably, a visible-light camera and an infrared camera are both installed at each detection point, with the detection point and the ship on the same horizontal line, and switching times between the two cameras are set; in the daytime the cockpit position is detected with a visible-light network model, and at night with an infrared network model.
Compared with the prior art, the invention has the following beneficial effects:
(1) the detector-based cockpit position detection method can effectively reduce the influence of background interference through a deep learning network;
(2) the invention considers how the ship's appearance at different viewing angles affects cockpit detection: the cockpit position is easiest to distinguish from the side, but ship images are sometimes shot from the front, the back or at an oblique angle, in which case the cockpit position is hard to judge;
(3) the method considers the influence on cockpit positioning when two ships are close enough to overlap: when two ships are close, a large part of the other ship may appear in the visible area of the current ship, and the cockpit detections of the two ships may then overlap so that ownership cannot be determined; the method handles this situation specially;
(4) the ship's direction is judged, and once the heading direction is obtained, the ship's historical track is easy to obtain; if the boat is operating illegally, this makes it convenient to apprehend it.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic drawing of a ship's outer frame;
FIG. 2 is a cockpit position annotation schematic;
FIG. 3 is a schematic view of the bow or stern of a ship;
FIG. 4 is a schematic view of overlapping ships.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications obvious to those skilled in the art can be made without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Example (b):
the invention provides a ship cockpit position detection method based on a deep convolutional neural network, which comprises the following steps:
1. The ship is detected in the image with a ship detector to obtain the ship's outer frame.
The ship detector is a yolov5 network; the ship detector and the cockpit detector are two different models that must be trained and used independently. Both are trained as deep neural networks comprising many layers, each layer holding a large number of parameters that record the model's details.
The details of this step are as follows:
1) Ship samples are collected manually and the outer frame of each ship is marked. Images containing ships (at least five thousand) are collected in bulk, and annotators use a tool such as LabelImg to draw the outer frame of each ship in the image; the frame must just enclose all parts of the ship. The annotation information is stored in document form and used in the subsequent training process;
2) The parameters of the yolov5 network are fully trained with the collected ship samples until the detection rate meets the requirement;
Training process: the collected samples are annotated with outer frame positions and then given category labels; taking ship detection as an example, all category labels are 'Ship'. The parameters of each layer of the neural network are then trained with the error back-propagation (BP) algorithm, iterating until the network reaches a preset detection rate (e.g. 99%) on the sample set. The training input is an annotated full image containing a ship, together with position values (the X coordinate of the top-left corner of the ship's minimum bounding rectangle, the Y coordinate of that corner, the width and the height) and category information;
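As an illustrative aside (not part of the claimed method), the annotation format described above, a top-left corner plus width and height in pixels, can be converted into the normalised centre-based label lines that yolov5 training scripts conventionally consume. The function name and class id below are hypothetical:

```python
def to_yolo_line(class_id, x, y, w, h, img_w, img_h):
    """Convert a top-left/width/height box (pixels) to a yolov5 label line:
    class id, centre x, centre y, width, height, all normalised to [0, 1]."""
    cx = (x + w / 2) / img_w
    cy = (y + h / 2) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# A ship annotated at top-left (100, 200), 400 px wide, 100 px tall,
# in a 1920x1080 image; class 0 = "Ship".
line = to_yolo_line(0, 100, 200, 400, 100, 1920, 1080)
```

One such line per annotated ship would be written to the label document accompanying each image.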
3) Ship detection is performed on the image with the trained network model to obtain the position of each ship, as shown in FIG. 1.
Because the cockpit detection process disclosed by the invention is performed within the ship's outer frame, the probability of false cockpit detections in the background area is reduced and detection time is saved.
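A minimal sketch of this design choice, with the detector abstracted as a callable and boxes given as (x, y, w, h) pixel tuples (the names are illustrative, not from the patent): cockpit detection runs only on the cropped ship region, and each result is translated back into full-image coordinates, so no cockpit can be reported in the background.

```python
def detect_cockpit_in_ship(ship_box, detect_fn):
    """Run a cockpit detector only inside the ship's outer frame.

    ship_box  : (x, y, w, h) of the ship in the full image, in pixels.
    detect_fn : callable taking the crop size (w, h) and returning
                cockpit boxes (x, y, w, h) in crop-local coordinates.
    Returns cockpit boxes in full-image coordinates.
    """
    sx, sy, sw, sh = ship_box
    detections = detect_fn((sw, sh))
    # Translate crop-local boxes back into the full image.
    return [(sx + cx, sy + cy, cw, ch) for cx, cy, cw, ch in detections]

# Hypothetical stand-in detector that "finds" one cockpit in the crop.
fake_detector = lambda size: [(30, 10, 60, 40)]
boxes = detect_cockpit_in_ship((200, 150, 400, 120), fake_detector)
```

In a real pipeline `detect_fn` would wrap the trained yolov5 cockpit model applied to the cropped ship image.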
2. Collecting cockpit samples
1) Collection of ship images in various situations
To maintain sample diversity, the following scenarios need to be considered when collecting ship images, so as to improve the cockpit detection effect:
Various weather and lighting conditions: the cockpit is barely visible in the dark of evening; cockpit texture information is sparse in night-time infrared images; ship contrast is low in fog and in rain; in backlight the whole ship appears as a dark silhouette;
Various viewing angles: side, front, rear, oblique, etc.;
Occlusion and overlap: a ship does not appear completely in the field of view; part of the ship is occluded by buildings or trees on the bank, or by other ships; two ships are very close and appear inside each other's outer frames;
Various types of ships: passenger ships, cargo ships, container ships, sand carriers, yachts, fishing boats, sampans, law enforcement boats, and the like.
2) Marking the position of the cockpit;
As shown in FIG. 2, the ship region is first cropped from the original image; then, on that image, the cockpit's outer frame region is marked, as shown by the rectangle in the figure. Each ship forms one image, and the corresponding cockpit region is stored in document form; at least five thousand cockpit samples are required.
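A small sketch of this annotation step under the same (x, y, w, h) pixel convention, assuming the cockpit was first marked in original-image coordinates; the helper name is made up for illustration:

```python
def cockpit_box_in_crop(ship_box, cockpit_box):
    """Re-express a cockpit box annotated in original-image pixels
    relative to the cropped ship image, clipping it to the crop."""
    sx, sy, sw, sh = ship_box
    cx, cy, cw, ch = cockpit_box
    # Shift into crop coordinates.
    rx, ry = cx - sx, cy - sy
    # Clip so the stored region never leaves the cropped ship image.
    rx2 = min(max(rx + cw, 0), sw)
    ry2 = min(max(ry + ch, 0), sh)
    rx = min(max(rx, 0), sw)
    ry = min(max(ry, 0), sh)
    return (rx, ry, rx2 - rx, ry2 - ry)

# Ship cropped at (200, 150), 400x120; cockpit marked at (260, 170), 80x50.
box = cockpit_box_in_crop((200, 150, 400, 120), (260, 170, 80, 50))
```

The resulting crop-relative box is what would be stored in the per-ship document.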
3. Training a yolov5 network as a cockpit detector;
A model dedicated to cockpit detection (whose output is the cockpit's outer frame position, i.e. the top-left X coordinate, the top-left Y coordinate, the width and the height) is trained with the cockpit samples collected in step 2 and the yolov5 deep network, until the cockpit detection results meet the requirement.
4. First the single-stage target detection algorithm yolov5 is trained, where single-stage means classifying fixed regions of interest and regressing their positions; then whether the image shows the side or the front/back of the ship is judged, thereby obtaining the cockpit position and the ship's heading direction. Once the heading direction is obtained, the ship's historical track is easy to retrieve; if the boat is operating illegally, this makes it convenient to apprehend it.
For the current ship, after detection of the cockpit position (the cockpit's outer frame position: top-left X coordinate, top-left Y coordinate, width and height) is complete, the following processing is applied:
1) Judge from the size of the ship's outer frame whether the view is the side or the front/back: if the width-to-height ratio of the ship's outer frame is close to 1, the view must be the front or the back; in this case the cockpit position is difficult to judge from the image, and the image must be selected and judged manually.
2) Judge the view from the positional relationship between the ship's outer frame and the cockpit's outer frame: as shown in FIG. 3, if the cockpit occupies most of the ship's area, the view can only be the front or the back; in this case the position cannot be determined from the image, and the image must be selected and judged manually.
3) If neither of the above situations applies, the ship is seen from the side, and whether the cockpit is at the front, the middle or the rear is judged from the relative position of the cockpit and the ship's outer frame: the ship's outer frame is divided horizontally into left, middle and right thirds, and the cockpit position (front, middle or rear of the ship) is determined by which third contains the cockpit's centre point.
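The thirds rule in 3) can be sketched as follows (boxes as (x, y, w, h) pixel tuples; mapping the left third to "front" is purely illustrative, since which third is the bow depends on the heading):

```python
def cockpit_section(ship_box, cockpit_box):
    """Classify the cockpit as 'front', 'middle' or 'rear' according to
    which horizontal third of the ship's outer frame contains its centre."""
    sx, _, sw, _ = ship_box
    cx = cockpit_box[0] + cockpit_box[2] / 2  # cockpit centre X
    frac = (cx - sx) / sw                     # fraction along the ship frame
    if frac < 1 / 3:
        return "front"
    if frac < 2 / 3:
        return "middle"
    return "rear"

# Ship frame 300 px wide; cockpit centred at x = 240, i.e. the right third.
section = cockpit_section((0, 0, 300, 90), (210, 10, 60, 40))
```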
5. The situation of overlapping ships.
When the situation shown in FIG. 4 occurs, the cockpit of one vessel is likely to be attributed to the other, causing a position-location error.
The following should then be done to reduce interference:
(1) save the outer frame coordinates of all ships in the image;
(2) perform cockpit detection within the outer frame region of the current ship, in which case more than two cockpits may be found;
(3) analyse the distance from each cockpit's coordinates to the vertical coordinate of the current ship's centre point;
(4) the cockpit with the smallest distance to the current ship is assigned to the current ship;
(5) cockpits farther from the current ship are compared against the other ships and assigned to the nearer ship;
If, after the above processing, the current vessel still has two cockpits, it may genuinely have dual cockpits, and this information is saved to the database.
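The assignment steps above can be sketched as a nearest-ship rule. This is a simplification: full centre-to-centre distance is used here, whereas the method as described analyses the vertical-coordinate difference; all names and the sample boxes are illustrative.

```python
def assign_cockpits(ships, cockpits):
    """Assign each detected cockpit box to the ship whose centre point is
    nearest. Boxes are (x, y, w, h); returns {cockpit index: ship index}."""
    def centre(b):
        return (b[0] + b[2] / 2, b[1] + b[3] / 2)

    assignment = {}
    for ci, cockpit in enumerate(cockpits):
        cx, cy = centre(cockpit)
        best = min(
            range(len(ships)),
            key=lambda si: (centre(ships[si])[0] - cx) ** 2
                           + (centre(ships[si])[1] - cy) ** 2,
        )
        assignment[ci] = best
    return assignment

ships = [(0, 0, 200, 80), (150, 40, 200, 80)]     # two overlapping ships
cockpits = [(20, 10, 40, 30), (280, 60, 40, 30)]  # one cockpit on each
result = assign_cockpits(ships, cockpits)
```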
6. Switching between infrared and visible light
Each monitoring point is equipped with both a visible-light camera and an infrared camera. The two work at different times of day and night, so switching times between the two cameras must be set: before dark, for example at 5 p.m., the infrared camera is started, and it is not turned off until the morning is fully light, for example at 7 a.m., when the visible-light camera is turned on. The visible-light camera then works until 5 p.m., when the system switches back to the infrared camera.
Two different yolov5 network models are trained. In daytime conditions, the cockpit position is detected with the visible-light network model; at night, the infrared network model is used. In this way, all-weather, 24-hour real-time detection is achieved.
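The switching rule can be sketched as a simple hour-based selector; the 7 a.m. and 5 p.m. boundaries are the example values from the text and would in practice be tuned per site and season:

```python
def active_model(hour, day_start=7, night_start=17):
    """Pick which detector to use at a given hour (0-23): the visible-light
    model during daytime hours, otherwise the infrared model."""
    if day_start <= hour < night_start:
        return "visible"
    return "infrared"

# Model choice across the day: pre-dawn, dawn boundary, noon, dusk, night.
choices = [active_model(h) for h in (6, 7, 12, 17, 23)]
```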
The ship cockpit position detection system based on a deep convolutional neural network provided by the invention comprises: module M1: acquiring an image containing a ship, and detecting the ship in the image with a ship detector to obtain the ship's outer frame; module M2: collecting cockpit samples and training on them, then marking the cockpit within the obtained ship outer frame; module M3: training the single-stage target detection algorithm yolov5 and judging the orientation of the marked cockpit, thereby obtaining the ship's heading direction and the cockpit position.
The module M1 includes: obtaining ship samples and marking the outer frame of each ship; acquiring images containing ships and drawing the outer frame of each ship in the image; and training the parameters of the ship detector on the ship samples until the detection rate meets a preset condition. The module M2 includes: acquiring ship images covering various situations, including different weather conditions, different viewing angles and different types of ships; and cropping the ship regions from the original images and then marking the cockpit's outer frame region, so that each ship forms one image whose corresponding cockpit region is stored in document form. The module M3 includes: judging whether the image shows the side or the front/back of the ship according to the size of the ship's outer frame; judging the same according to the positional relationship between the ship's outer frame and the cockpit's outer frame; when ships overlap, saving the outer frame coordinates of all ships in the image and performing cockpit detection within the outer frame region of the current ship; if more than two cockpits are found, analysing the distance from each cockpit's coordinates to the vertical coordinate of the current ship's centre point, the cockpit with the smallest distance being assigned to the current ship; and comparing cockpits farther from the current ship against the other ships. A visible-light camera and an infrared camera are both installed at each detection point, and switching times between the two cameras are set; the cockpit position is detected with a visible-light network model in the daytime and with an infrared network model at night.
Those skilled in the art will appreciate that, in addition to implementing the system, apparatus and modules thereof provided by the present invention purely as computer-readable program code, the same functions can be achieved entirely by logically programming the method steps, so that the system, apparatus and modules are realised in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. The system, apparatus and modules provided by the present invention can therefore be regarded as hardware components, and the modules they contain for implementing various programs can also be regarded as structures within a hardware component; modules for performing various functions may likewise be regarded both as software programs implementing the method and as structures within a hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A ship cockpit position detection method based on a deep convolutional neural network is characterized by comprising the following steps:
step 1: acquiring an image containing a ship, and detecting the ship from the image through a ship detector to obtain a ship outer frame;
step 2: first marking the cockpit's outer frame region, then training a model dedicated to cockpit detection using the cockpit samples and a yolov5 network; the model's output is the cockpit's outer frame position, comprising the top-left X coordinate, the top-left Y coordinate, the width and the height;
and step 3: first training the single-stage target detection algorithm yolov5, where single-stage means classifying fixed regions of interest and regressing their positions, then judging whether the image shows the side or the front/back of the ship, thereby obtaining the cockpit position and the ship's heading direction.
2. The method for detecting the position of the ship cockpit based on the deep convolutional neural network as claimed in claim 1, wherein the step 1 comprises:
acquiring images containing ships and drawing the outer frame of each ship in the image;
training the parameters of the ship detector on the ship samples until the detection rate meets a preset condition.
3. The method for detecting the position of the ship cockpit based on the deep convolutional neural network as claimed in claim 1, wherein the step 2 comprises:
acquiring ship images covering various situations, including different weather conditions, different viewing angles, and different types of ships;
cropping the ship regions from the original images, then marking the outer frame region of the cockpit, saving each ship as a separate image, and storing the corresponding cockpit region in an annotation file.
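Claim 3 does not fix the annotation file format. A common convention for yolov5 training data is one text line per object, with the class index and the box center/size normalized by the image dimensions; the sketch below assumes that convention and is illustrative only:

```python
def to_yolo_label(img_w: int, img_h: int, box, cls: int = 0) -> str:
    """Convert an (x, y, w, h) top-left box into a yolov5-style normalized label line."""
    x, y, w, h = box
    xc = (x + w / 2.0) / img_w   # normalized box center X
    yc = (y + h / 2.0) / img_h   # normalized box center Y
    return f"{cls} {xc:.6f} {yc:.6f} {w / img_w:.6f} {h / img_h:.6f}"
```

Each cropped ship image would then be paired with a `.txt` file containing one such line per marked cockpit region.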
4. The method for detecting the position of the ship cockpit based on the deep convolutional neural network as claimed in claim 1, wherein the step 3 comprises:
judging whether the image shows the side or the front/back of the ship according to the size of the ship outer frame;
judging whether the image shows the side or the front/back of the ship according to the positional relation between the ship outer frame and the cockpit outer frame;
when ships overlap, saving the outer frame coordinates of all ships in the image and performing cockpit detection within the outer frame region of the current ship; if two or more cockpits are detected, analyzing the vertical distance between each cockpit's coordinates and the center point of the current ship, the cockpit with the minimum distance belonging to the current ship; a cockpit farther from the current ship is compared against the other ships;
the preceding case does not apply to side-view ships; for a side-view ship, whether the cockpit is at the front, the middle, or the rear is judged from the relative position of the cockpit and the ship outer frame: the ship outer frame is divided horizontally into left, middle, and right parts, and the cockpit is deemed to be at whichever part contains its center point.
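The geometric rules of claim 4 can be sketched as follows. This is an illustrative reading, not the claimed implementation: the 1.5 aspect-ratio threshold is an assumed value, and the thirds are labeled left/middle/right because mapping them to bow/stern depends on the heading, which the method determines separately:

```python
def box_center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def view_from_frame(ship_box, aspect_threshold=1.5):
    """Rough side vs front/back judgment from the outer-frame shape (assumed threshold)."""
    _, _, w, h = ship_box
    return "side" if w / float(h) >= aspect_threshold else "front_or_back"

def assign_cockpit(ship_box, cockpit_boxes):
    """With overlapping ships, keep the cockpit whose center is vertically
    closest to the current ship's center point (the minimum-distance rule)."""
    ship_cy = box_center(ship_box)[1]
    return min(cockpit_boxes, key=lambda c: abs(box_center(c)[1] - ship_cy))

def cockpit_third(ship_box, cockpit_box):
    """Split a side-view ship's outer frame into horizontal thirds and report
    which third contains the cockpit center."""
    sx, _, sw, _ = ship_box
    rel = (box_center(cockpit_box)[0] - sx) / float(sw)
    return "left" if rel < 1 / 3 else ("middle" if rel < 2 / 3 else "right")
```

Cockpits rejected by `assign_cockpit` would then be tested against the saved outer frames of the other ships in the image.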
5. The method for detecting the position of the ship cockpit based on the deep convolutional neural network as claimed in claim 1, wherein a visible-light camera and an infrared camera are installed at each detection point simultaneously, the detection point is on the same horizontal line as the ship, and a switching schedule for the two cameras is set; the cockpit position is detected with the visible-light network model in the daytime and with the infrared network model at night.
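The day/night switching in claim 5 amounts to selecting a model by time of day. A minimal sketch; the 06:00/18:00 switch times are assumed here, since the claim only says a switching schedule is configured:

```python
import datetime

def pick_model(now: datetime.time,
               day_start: datetime.time = datetime.time(6, 0),
               night_start: datetime.time = datetime.time(18, 0)) -> str:
    """Select the visible-light network model by day and the infrared model at night."""
    return "visible_light" if day_start <= now < night_start else "infrared"
```

In practice the schedule could also be driven by ambient light or sunrise/sunset tables rather than fixed clock times.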
6. A naval vessel cockpit position detection system based on deep convolutional neural network, characterized by includes:
module M1: acquiring an image containing a ship, and detecting the ship from the image through a ship detector to obtain a ship outer frame;
module M2: firstly marking the outer frame region of the cockpit, then training a dedicated cockpit detection model using the cockpit samples and the yolov5 network, the model outputting the cockpit outer frame position, which comprises the top-left X coordinate, the top-left Y coordinate, the width, and the height;
module M3: firstly training the single-stage target detection algorithm yolov5, single-stage meaning that classification and position regression are performed directly on fixed regions of interest, and then judging whether the image shows the side or the front/back of the ship, thereby obtaining the position of the cockpit and the advancing direction of the ship.
7. The deep convolutional neural network-based ship cockpit position detection system of claim 6, wherein the module M1 comprises:
acquiring images containing ships and drawing the outer frame of each ship in the image;
training the parameters of the ship detector on the ship samples until the detection rate meets a preset condition.
8. The deep convolutional neural network-based ship cockpit position detection system of claim 6, wherein the module M2 comprises:
acquiring ship images covering various situations, including different weather conditions, different viewing angles, and different types of ships;
cropping the ship regions from the original images, then marking the outer frame region of the cockpit, saving each ship as a separate image, and storing the corresponding cockpit region in an annotation file.
9. The deep convolutional neural network-based ship cockpit position detection system of claim 6, wherein the module M3 comprises:
judging whether the image shows the side or the front/back of the ship according to the size of the ship outer frame;
judging whether the image shows the side or the front/back of the ship according to the positional relation between the ship outer frame and the cockpit outer frame;
when ships overlap, saving the outer frame coordinates of all ships in the image and performing cockpit detection within the outer frame region of the current ship; if two or more cockpits are detected, analyzing the vertical distance between each cockpit's coordinates and the center point of the current ship, the cockpit with the minimum distance belonging to the current ship; a cockpit farther from the current ship is compared against the other ships;
the preceding case does not apply to side-view ships; for a side-view ship, whether the cockpit is at the front, the middle, or the rear is judged from the relative position of the cockpit and the ship outer frame: the ship outer frame is divided horizontally into left, middle, and right parts, and the cockpit is deemed to be at whichever part contains its center point.
10. The ship cockpit position detection system based on the deep convolutional neural network of claim 6, wherein a visible-light camera and an infrared camera are installed at each detection point simultaneously, the detection point is on the same horizontal line as the ship, and a switching schedule for the two cameras is set; the cockpit position is detected with the visible-light network model in the daytime and with the infrared network model at night.
CN202210823174.2A 2022-07-14 2022-07-14 Ship cockpit position detection method and system based on deep convolutional neural network Active CN114898308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210823174.2A CN114898308B (en) 2022-07-14 2022-07-14 Ship cockpit position detection method and system based on deep convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210823174.2A CN114898308B (en) 2022-07-14 2022-07-14 Ship cockpit position detection method and system based on deep convolutional neural network

Publications (2)

Publication Number Publication Date
CN114898308A true CN114898308A (en) 2022-08-12
CN114898308B CN114898308B (en) 2022-09-20

Family

ID=82729938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210823174.2A Active CN114898308B (en) 2022-07-14 2022-07-14 Ship cockpit position detection method and system based on deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN114898308B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8170272B1 (en) * 2010-02-23 2012-05-01 The United States Of America As Represented By The Secretary Of The Navy Method for classifying vessels using features extracted from overhead imagery
CN107609601A (en) * 2017-09-28 2018-01-19 北京计算机技术及应用研究所 A kind of ship seakeeping method based on multilayer convolutional neural networks
CN109299671A (en) * 2018-09-04 2019-02-01 上海海事大学 A kind of tandem type is by slightly to the convolutional neural networks Ship Types recognition methods of essence
CN110516605A (en) * 2019-08-28 2019-11-29 北京观微科技有限公司 Any direction Ship Target Detection method based on cascade neural network
CN111539149A (en) * 2020-04-29 2020-08-14 重庆交通大学 Ship model building and modal analysis method
CN112380914A (en) * 2020-10-21 2021-02-19 浙江工业大学 Fishing boat safety monitoring method based on deep learning
CN114049624A (en) * 2021-11-17 2022-02-15 中科芯集成电路有限公司 Intelligent detection method and system for ship cabin based on machine vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOSEPH REDMON ET AL.: "YOLO9000: Better, Faster, Stronger", 2017 IEEE Conference on Computer Vision and Pattern Recognition *
QIAN KUN ET AL.: "Ship target and key part detection algorithm based on YOLOv5", Systems Engineering and Electronics *

Also Published As

Publication number Publication date
CN114898308B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN109816024B (en) Real-time vehicle logo detection method based on multi-scale feature fusion and DCNN
CN110197589B (en) Deep learning-based red light violation detection method
Alessandretti et al. Vehicle and guard rail detection using radar and vision data fusion
CN106205163B (en) Mountain-area road-curve sight blind area meeting early warning system based on panoramic shooting technology
CN109747638A (en) A kind of vehicle driving intension recognizing method and device
Mu et al. Multiscale edge fusion for vehicle detection based on difference of Gaussian
CN109255350A (en) A kind of new energy detection method of license plate based on video monitoring
CN111553214B (en) Method and system for detecting smoking behavior of driver
Naufal et al. Preprocessed mask RCNN for parking space detection in smart parking systems
CN111325061B (en) Vehicle detection algorithm, device and storage medium based on deep learning
CN106203267A (en) Vehicle collision avoidance method based on machine vision
CN109671090A (en) Image processing method, device, equipment and storage medium based on far infrared
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN108550274A (en) A kind of unmanned auxiliary device of traffic lights based on Faster RCNN and method
CN111488808A (en) Lane line detection method based on traffic violation image data
CN111080613A (en) Image recognition method for damage fault of wagon bathtub
CN115424217A (en) AI vision-based intelligent vehicle identification method and device and electronic equipment
CN113239854A (en) Ship identity recognition method and system based on deep learning
Špoljar et al. Lane detection and lane departure warning using front view camera in vehicle
CN111444916A (en) License plate positioning and identifying method and system under unconstrained condition
Joy et al. Real time road lane detection using computer vision techniques in python
Li et al. A low-cost and fast vehicle detection algorithm with a monocular camera for adaptive driving beam systems
CN109558877B (en) KCF-based offshore target tracking algorithm
CN114898308B (en) Ship cockpit position detection method and system based on deep convolutional neural network
CN112307943B (en) Water area man-boat target detection method, system, terminal and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant