CN111241948A - Method and system for identifying ship in all weather - Google Patents
- Publication number
- CN111241948A (application CN202010001169.4A)
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V20/13 Satellite images (G06V Image or video recognition or understanding; G06V20/00 Scenes; G06V20/10 Terrestrial scenes)
- G06N3/045 Combinations of networks (G06N Computing arrangements based on specific computational models; G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
- G06N3/084 Backpropagation, e.g. using gradient descent (G06N3/02 Neural networks; G06N3/08 Learning methods)
Abstract
The disclosure provides a method and a system for identifying ships in all weather, belonging to the technical field of image processing. The method comprises the following steps: acquiring a plurality of images around a first ship, the plurality of images having different shooting times; identifying ships in each image by adopting a deep learning algorithm to obtain the position of a second ship in each image; determining the danger level of the second ship according to the position of the second ship in each image; and issuing an alarm according to the danger level of the second ship. By acquiring a plurality of images shot around the first ship at different times and identifying the ships in each image with a deep learning algorithm, the position of a second ship around the first ship can be obtained in each image. From the position of the second ship in each image, the danger level of the second ship can be determined, and an alarm is issued according to that danger level, so that a collision between the first ship and the second ship can be avoided and the safety of the ship is guaranteed.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and a system for identifying a ship in all weather.
Background
Sea transportation is slower than air transportation, but a single shipment carries far more cargo, adapts better to different kinds of goods, and costs less, so sea transportation has become the main mode of transport in international trade. With the continuous expansion of shipping, a large number of ships enter and leave the ports of large port cities and shipping hubs every day, crowding the port areas and creating a risk of collision between ships. Moreover, as more and more ultra-large container cargo ships are put into use, the safety problems of marine transportation have become increasingly prominent.
In the daytime, when visibility is good, a port pilot can direct ships to enter the port in order, and lookouts on the bridge can observe the surrounding ships, so the safety of the ship can be guaranteed. At night, when visibility is low, these measures no longer work and the safety of the ship cannot be guaranteed. Although radar can detect surrounding ships, it is very expensive and has not been widely adopted on civil cargo ships.
Disclosure of Invention
The embodiments of the disclosure provide a method and a system for identifying ships in all weather, which use low-cost image acquisition equipment together with image processing technology to effectively identify surrounding ships and guarantee the safety of the ship; they are particularly suitable for civil cargo ships not equipped with radar. The technical scheme is as follows:
in one aspect, an embodiment of the present disclosure provides an all-weather ship identification method, where the method includes:
acquiring a plurality of images around a first ship, wherein the shooting time of the plurality of images is different;
identifying ships in each image by adopting a deep learning algorithm to obtain the position of a second ship in each image;
determining the danger level of the second ship according to the position of the second ship in each image;
and issuing an alarm according to the danger level of the second ship.
Optionally, the acquiring a plurality of images around the first vessel includes:
acquiring environmental information around the first ship;
determining the visibility level around the first ship according to the environment information;
when the visibility grade reaches a set standard, controlling a camera to continuously shoot a plurality of images around the first ship;
and when the visibility grade does not reach the set standard, controlling a laser pan-tilt camera to continuously shoot a plurality of images around the first ship.
Optionally, the identifying a second ship in each of the images by using a deep learning algorithm includes:
extracting image features from the image;
generating a candidate region according to the image characteristics;
extracting region features according to the image features and the candidate regions;
and determining the category of the candidate region according to the region feature to obtain the position of the second ship in the image.
Optionally, the determining the danger level of the second ship according to the position of the second ship in each image comprises:
determining, for each image, the distance between the second ship and the first ship at the shooting time of the image according to the position of the second ship in the image;
determining the course and the navigational speed of the second ship according to the shooting times of the images and the distances between the second ship and the first ship;
and determining the danger level of the second ship according to the course, the navigational speed and the distance between the second ship and the first ship.
Optionally, said issuing an alert according to the hazard level of the second vessel comprises:
and outputting, in order of danger level from high to low, the course, the navigational speed, and the distance to the first ship of each second ship in turn to give an alarm.
In another aspect, an embodiment of the present disclosure provides an all-weather ship identification system, where the system includes:
the acquisition module is used for acquiring a plurality of images around a first ship, and the shooting time of the plurality of images is different;
the identification module is used for identifying ships in each image by adopting a deep learning algorithm to obtain the position of a second ship in each image;
the grading module is used for determining the danger level of the second ship according to the position of the second ship in each image;
and the alarm module is used for sending out an alarm according to the danger level of the second ship.
Optionally, the obtaining module includes:
the information acquisition submodule is used for acquiring environmental information around the first ship;
the level determining submodule is used for determining the visibility level around the first ship according to the environment information;
the shooting control submodule is used for controlling a camera to continuously shoot a plurality of images around the first ship when the visibility grade reaches a set standard, and controlling a laser pan-tilt camera to continuously shoot a plurality of images around the first ship when the visibility grade does not reach the set standard.
Optionally, the identification module comprises:
a convolutional layer for extracting image features from the image;
the area suggestion network is used for generating a candidate area according to the image characteristics;
the pooling layer is used for extracting region features according to the image features and the candidate regions;
and the classifier is used for determining the category of the candidate region according to the region characteristics to obtain the position of the second ship in the image.
Optionally, the grading module comprises:
the first determining submodule is used for determining, for each image, the distance between the second ship and the first ship at the shooting time of the image according to the position of the second ship in the image;
the second determining submodule is used for determining the course and the navigational speed of the second ship according to the shooting times of the images and the distances between the second ship and the first ship;
and the third determining submodule is used for determining the danger level of the second ship according to the course, the navigational speed and the distance between the second ship and the first ship.
Optionally, the alarm module is configured to,
and outputting, in order of danger level from high to low, the course, the navigational speed, and the distance to the first ship of each second ship in turn to give an alarm.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
the positions of the second ships around the first ship in the images can be obtained by acquiring a plurality of images shot at different times around the first ship and identifying the ships in the images by adopting a deep learning algorithm. According to the position of the second ship in each image, the danger level of the second ship can be obtained, and an alarm is given according to the danger level of the second ship, so that the first ship can be prevented from colliding with the second ship, and the safety of the ship is guaranteed. And the prices of the image acquisition equipment and the image processing equipment are far lower than that of the radar, so that the realization cost of the ship identification system can be greatly reduced, and the method is particularly suitable for civil cargo ships without the popularization of the radar.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of an application scenario of a method for identifying a ship in all weather according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for identifying a ship around the clock provided by the disclosed embodiments;
FIG. 3 is a flow chart of another method for identifying a vessel at all weather provided by the disclosed embodiments;
FIG. 4 is a schematic structural diagram of a fast RCNN algorithm model provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a convolutional layer provided in an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a regional suggestion network provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a classifier provided by an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a system for identifying a ship all weather according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is an application scenario diagram of a method for identifying a ship in all weather according to an embodiment of the present disclosure. Referring to fig. 1, a plurality of second vessels 20 surround a first vessel 10, and each second vessel 20 carries a risk of collision with the first vessel 10, which the first vessel 10 must identify and take care to avoid. Since the second vessels 20 differ in heading, speed, and distance from the first vessel 10, their probabilities of colliding with the first vessel 10 also differ.
The embodiment of the disclosure provides a method for identifying a ship in all weather. Fig. 2 is a flowchart of a method for identifying a ship in all weather according to an embodiment of the present disclosure. Referring to fig. 2, the method includes:
step 101: a plurality of images around the first ship are obtained, and shooting time of the plurality of images is different.
In this embodiment, the first vessel is a vessel that needs to identify surrounding vessels to avoid a collision. The periphery of the first ship is an area with a set distance (for example, 15km) as a radius by taking the first ship as a center of a circle.
Step 102: and identifying the ship in each image by adopting a deep learning algorithm to obtain the position of the second ship in each image.
In this embodiment, the second vessel is a vessel located around the first vessel.
Step 103: and determining the danger level of the second ship according to the position of the second ship in each image.
Step 104: an alarm is issued based on the hazard level of the second vessel.
According to the embodiment of the disclosure, by acquiring a plurality of images shot around the first ship at different times and identifying the ships in each image with a deep learning algorithm, the positions of the second ships around the first ship can be obtained in each image. From the position of the second ship in each image, its danger level can be determined, and an alarm is issued according to that danger level, so that a collision between the first ship and the second ship can be avoided and the safety of the ship is guaranteed. Moreover, since image acquisition and image processing equipment cost far less than radar, the implementation cost of the ship identification system can be greatly reduced, which makes the scheme particularly suitable for civil cargo ships not equipped with radar.
The disclosed embodiment provides another all-weather ship identification method, which is an optional implementation manner of the all-weather ship identification method shown in fig. 2. Fig. 3 is a flowchart of another all-weather ship identification method according to an embodiment of the disclosure. Referring to fig. 3, the method includes:
step 201: environmental information around a first vessel is acquired.
In this embodiment, the environmental information may include illumination intensity and humidity. The illumination intensity directly affects visibility, while humidity affects the formation of fog and the diffusion of impurities such as dust, indirectly affecting visibility. Therefore, by obtaining the illumination intensity and the humidity as the environmental information, the visibility grade can be determined accurately.
Optionally, the step 201 may include:
measuring the illumination intensity around the first ship by using a light sensor;
the humidity around the first vessel is measured using one of a humidity sensor, a moisture meter, and a weather radar.
Step 202: according to the environmental information, a visibility level around the first vessel is determined.
In this embodiment, the visibility grades may correspond to non-harsh and harsh environments. In practical application, the grade can be assigned directly by whether the set standard is reached: when the visibility grade reaches the set standard, visibility is high and the environment is non-harsh; when it does not, visibility is low and the environment is harsh.
Optionally, this step 202 may include:
when the illumination intensity around the first ship is higher than the set intensity and the humidity around the first ship is lower than the set humidity, determining that the visibility level around the first ship reaches the set standard;
and when the illumination intensity around the first ship is below the set intensity or the humidity around the first ship is above the set humidity, determining that the visibility level around the first ship does not reach the set standard.
The visibility grade is judged to reach the set standard only when the illumination intensity and the humidity both meet their requirements. This strict requirement on the visibility grade guarantees the definition of the images shot by the camera.
For example, the set intensity may be 100lux, and the set humidity may be 50%; the setting criteria may be an illumination intensity of 100lux or more and a humidity of 50% or less.
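The thresholding just described can be sketched minimally as follows; the function and device names are illustrative, not from the patent, and the defaults use the example values of 100 lux and 50%:

```python
def visibility_reaches_standard(illumination_lux, humidity_pct,
                                set_intensity=100.0, set_humidity=50.0):
    """True when the visibility grade reaches the set standard:
    illumination at or above the set intensity AND humidity at or
    below the set humidity (both conditions must hold)."""
    return illumination_lux >= set_intensity and humidity_pct <= set_humidity


def select_capture_device(illumination_lux, humidity_pct):
    """Choose the ordinary camera in a non-harsh environment,
    otherwise the laser pan-tilt camera suited to harsh conditions."""
    if visibility_reaches_standard(illumination_lux, humidity_pct):
        return "camera"
    return "laser_pan_tilt"
```

Because both conditions must hold, high illumination with high humidity (e.g. a bright but foggy morning) still routes capture to the laser pan-tilt camera.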
Step 203: when the visibility grade reaches a set standard, the camera is controlled to continuously shoot a plurality of images around the first ship, and shooting time of the images is different.
In practical application, the camera can be a wide-angle camera to meet the shooting requirements of different areas.
Step 204: and when the visibility grade does not reach the set standard, controlling the laser holder to continuously shoot a plurality of images around the first ship, wherein the shooting time of the plurality of images is different.
In this embodiment, the steps 201, 202 and 203 are performed in sequence, or the steps 201, 202 and 204 are performed in sequence, so that a plurality of images around the first ship can be acquired.
In practical application, the above process can be automatically controlled by the equipment, and the image acquisition equipment can be manually switched to capture the image around the first ship. The image capturing devices may be arranged on the mast of the first vessel or around the hull of the first vessel, e.g. at least one image capturing device per side. For a first vessel with a long hull, a plurality of image capturing devices may be provided at regular intervals laterally.
In the embodiment of the disclosure, the environmental information around the first ship is acquired to determine the visibility grade around the first ship. When visibility around the first ship is low (the grade does not reach the set standard), the laser pan-tilt camera, which is suited to harsh environments, shoots the images around the first ship, so the images are clear enough for ship identification. When visibility around the first ship is high, an ordinary camera shoots the images instead, which avoids running the laser pan-tilt camera for long periods when ordinary images are already clear enough and thereby prolongs its service life.
Step 205: and identifying the ship in each image by adopting a deep learning algorithm to obtain the position of the second ship in each image. This step 205 is performed after step 203 or step 204.
In this embodiment, the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm can be adopted to identify the ships in the images, which effectively increases detection speed, so that collision threats can be found in time, an alarm raised, and the safety of the ship guaranteed.
Optionally, this step 205 may include:
extracting image features from the image;
generating a candidate region according to the image characteristics;
extracting region features according to the image features and the candidate regions;
and determining the category of the candidate region according to the region characteristics to obtain the position of the second ship in the image.
The steps can be realized on the basis of the conventional fast RCNN algorithm, and the realization is more convenient.
Fig. 4 is a schematic structural diagram of the Faster R-CNN algorithm model provided in the embodiment of the present disclosure. Referring to fig. 4, the Faster R-CNN model includes convolutional layers (conv layers) 31, a region proposal network (RPN) 32, an RoI pooling layer (RoI pooling) 33, and a classifier (classification) 34. The convolutional layers 31 extract image features (feature maps) from the image 30 and output them to the region proposal network 32 and the pooling layer 33. The region proposal network 32 generates candidate regions (proposals) from the image features and outputs them to the pooling layer 33. The pooling layer 33 extracts the features of each candidate region (proposal feature maps) from the image features and the candidate regions and outputs them to the classifier 34. The classifier 34 determines the class of each candidate region from its features, and thus the position of the object in the image.
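The role of the RoI pooling layer in this pipeline can be illustrated with a toy sketch: max-pooling one candidate region of a feature map into a fixed-size grid so the classifier always receives inputs of the same shape. This is a simplified single-channel stand-in for the real layer; the names and the 2x2 output size are assumptions for illustration:

```python
def roi_max_pool(feature_map, roi, out_size=2):
    """Max-pool the RoI of a 2-D feature map (list of rows) into an
    out_size x out_size grid, as the RoI pooling layer does for each
    candidate region. roi = (x0, y0, x1, y1) in feature-map coordinates."""
    x0, y0, x1, y1 = roi
    h, w = y1 - y0, x1 - x0
    out = []
    for by in range(out_size):
        row = []
        for bx in range(out_size):
            # integer bin boundaries covering the region; a bin is never empty
            ys = y0 + by * h // out_size, y0 + (by + 1) * h // out_size
            xs = x0 + bx * w // out_size, x0 + (bx + 1) * w // out_size
            row.append(max(feature_map[y][x]
                           for y in range(ys[0], max(ys[1], ys[0] + 1))
                           for x in range(xs[0], max(xs[1], xs[0] + 1))))
        out.append(row)
    return out
```

Whatever the size of the candidate region, the output is always out_size x out_size, which is what lets a fixed fully connected classifier follow variable-sized proposals.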
Fig. 5 is a schematic structural diagram of the convolutional layers provided in the embodiment of the present disclosure. Referring to fig. 5, the convolutional layers 31 may include a plurality of convolution kernels 41, a plurality of activation functions (ReLU) 42, and a plurality of pooling layers 43. The convolution kernels 41 perceive local features; the activation functions 42 add nonlinearity to the neural network model; the pooling layers 43 aggregate statistics over the features. Illustratively, as shown in fig. 5, the convolutional layers 31 may include 13 convolution kernels 41, 13 activation functions 42, and 4 pooling layers 43, arranged in the processing order of the image as groups of a convolution kernel 41 followed by an activation function 42, with a pooling layer 43 inserted after certain groups.
Fig. 6 is a schematic structural diagram of the region proposal network provided in the embodiment of the present disclosure. Referring to fig. 6, the region proposal network 32 first processes the features with a convolution kernel 41 and an activation function 42 and generates a large number of candidate boxes (anchors), then splits into two paths. One path passes through a convolution kernel 41 and classifies with a softmax logistic regression function 44 to judge whether each candidate box belongs to the foreground or the background; a reshape layer 45 is arranged before and after the softmax function to facilitate its classification. The other path passes through a convolution kernel 41 and calculates the offsets of the candidate boxes by bounding box regression. The proposal layer 46 then obtains the candidate regions from the classification results and the offsets of the candidate boxes.
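The candidate boxes (anchors) the region proposal network places at each feature-map position can be sketched as scale/aspect-ratio combinations. The patent does not specify the scales and ratios; the three scales and three ratios below follow common Faster R-CNN defaults and are an assumption:

```python
import math

def make_anchors(cx, cy, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Generate the candidate boxes (anchors) placed at one feature-map
    position (cx, cy): one box per scale/ratio pair, each returned as
    (x0, y0, x1, y1). Width and height are chosen so the box keeps
    area scale*scale while width/height equals the aspect ratio."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * math.sqrt(r)
            h = s / math.sqrt(r)
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```

With three scales and three ratios, nine anchors are generated per position; the RPN's two paths then score each anchor as foreground or background and regress its offsets.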
Fig. 7 is a schematic structural diagram of the classifier provided in the embodiment of the present disclosure. Referring to fig. 7, the classifier 34 may include fully connected layers 47, each of which connects every neuron of the previous layer with all neurons of the next layer. Illustratively, as shown in fig. 7, the classifier 34 processes its input through a fully connected layer 47, an activation function 42, a fully connected layer 47, and an activation function 42 in sequence, and then splits into two paths: one path passes through a fully connected layer 47 and a softmax logistic regression function 44 to output the classification result; the other path passes through a fully connected layer 47 and calculates the offsets of the candidate boxes by bounding box regression to improve the precision of the candidate regions.
Optionally, before step 205, the method may further include:
acquiring a plurality of images in which the positions of ships are marked;
and training the model of the deep learning algorithm with the plurality of images in which the positions of ships are marked.
In practical application, a back-propagation algorithm can be adopted to iteratively update the parameters of the deep learning model until the number of iterations reaches a set count or the loss function falls within a set range.
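The stopping criterion just described (iterate until a set iteration count is reached or the loss falls within a set range) might be sketched as follows; `update_step` stands in for one back-propagation update and is hypothetical:

```python
def train(update_step, max_iters=1000, loss_tol=1e-3):
    """Iteratively update model parameters until the iteration count
    reaches max_iters or the loss falls within the set range.
    update_step() performs one gradient update and returns the loss;
    returns the iteration reached and the final loss."""
    for it in range(1, max_iters + 1):
        loss = update_step()
        if loss <= loss_tol:
            return it, loss
    return max_iters, loss
```

The same loop serves both the initial training on labeled images and the later re-training on images stored during actual operation.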
Optionally, after step 205, the method may further include:
storing a plurality of images around a first vessel;
and updating the model of the deep learning algorithm according to the plurality of images around the first ship.
Training the deep learning model again with images acquired during actual operation makes the updated model better match real conditions and improves recognition accuracy.
In practical applications, the plurality of images can be stored on an industrial hard disk, which can operate in harsh environments such as high salinity, high humidity, high temperature, or low temperature and can store massive amounts of data.
In practical applications, the process of training the model of the deep learning algorithm by using a plurality of images around the first ship is similar to the process of training the model of the deep learning algorithm by using a plurality of images of the marked ship, and the details are not described herein.
Optionally, the method may further include:
a deep learning algorithm is employed to determine the type of the second vessel.
While recognizing ships in the images, the deep learning algorithm can also determine their types, without the manual judgment otherwise needed from the shape of an obstacle scanned by radar.
In practical applications, the danger level of the second ship may also take its type into account. For example, with all other parameters equal, second ships may be ranked in order of tonnage from large to small.
Step 206: and determining the danger level of the second ship according to the position of the second ship in each image.
Optionally, this step 206 may include:
the method comprises the following steps that firstly, the distance between the shooting time of a second ship in an image and a first ship is determined according to the position of the second ship in the image;
secondly, determining the course and the navigational speed of the second ship according to the shooting time of the second ship in each image and the distance between the first ship and the second ship;
and thirdly, determining the danger level of the second ship according to the course, the navigational speed and the distance between the first ship and the second ship.
From the position of the second ship in each image, its course, its navigational speed, and its distance from the first ship can be obtained, and the probability of collision with the first ship analyzed; the resulting danger level of the second ship matches the actual situation and has high accuracy.
In practical applications, the first step may include:
obtaining the distance between the second ship and the first ship in the image according to the position of the second ship in the image;
and obtaining the actual distance between the second ship and the first ship according to the distance in the image and the scaling of the image.
For example, if the distance between the second vessel and the first vessel in the image is 10 cm and the scale of the image is 1:10000, the actual distance between the second vessel and the first vessel is 1 km.
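The scale conversion in this example can be written as a one-line helper (the function name is illustrative):

```python
def actual_distance_km(image_distance_cm, scale_denominator):
    """Convert a distance measured in the image (cm) to the real-world
    distance (km) using the image's scale 1:scale_denominator.
    100,000 cm = 1 km."""
    return image_distance_cm * scale_denominator / 100_000.0
```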
Illustratively, the second step may include:
when the distance between the second ship and the first ship is reduced, determining that the course of the second ship faces the first ship;
when the distance between the second vessel and the first vessel increases, determining that the heading of the second vessel deviates from the first vessel.
For example, if the distance between the second vessel and the first vessel decreases from 10km to 5km, the heading of the second vessel is towards the first vessel; for another example, if the distance between the second vessel and the first vessel increases from 5km to 10km, the heading of the second vessel deviates from the first vessel.
In practical applications, the second step may include:
and calculating the navigational speed of the second ship according to the distance change between the second ship and the first ship and the shooting interval of the image.
For example, if the distance between the second ship and the first ship decreases by 0.1 km between two images captured 5 s apart, the speed of the second ship is 72 km/h.
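The speed calculation is the distance change divided by the shooting interval, converted to km/h; a one-line sketch:

```python
# Hypothetical sketch: navigational speed from the distance change between
# two images and the shooting interval of the images.
def speed_kmh(distance_change_km: float, interval_s: float) -> float:
    return abs(distance_change_km) / interval_s * 3600  # km/s -> km/h

print(round(speed_kmh(0.1, 5)))  # -> 72
```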
Optionally, the third step may include:
determining a gear to which the second ship belongs according to the distance between the second ship and the first ship and the distance range of the plurality of gears;
and ranking the second ships within the same gear according to their course and navigational speed.
For example, in the high-level threat gear, the distance between the second vessel and the first vessel is below 5 km; in the medium threat gear, the distance is between 5 km and 10 km; in the low-level threat gear, the distance is between 10 km and 15 km; and in the non-threatening gear, the distance is above 15 km.
Among second vessels in the same gear, those heading towards the first vessel are ranked ahead of those heading away from it.
Vessels in the same gear with the same heading are then ranked in descending order of navigational speed.
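The gear assignment and in-gear ordering described above can be sketched as follows; the distance thresholds follow the example gears, while the dictionary field names are assumptions for illustration:

```python
# Hypothetical sketch of step three: assign each second ship to a threat
# gear by distance, then order ships within a gear (towards-first, then
# by descending speed). Boundary handling at exactly 5/10/15 km is an
# assumption, since the patent leaves it unspecified.
def threat_gear(distance_km: float) -> int:
    # smaller number = higher threat
    if distance_km < 5:
        return 0   # high-level threat
    if distance_km < 10:
        return 1   # medium threat
    if distance_km < 15:
        return 2   # low-level threat
    return 3       # non-threatening

def rank(ships):
    # ships: dicts with "distance" (km), "towards" (bool), "speed" (km/h)
    return sorted(ships, key=lambda s: (threat_gear(s["distance"]),
                                        not s["towards"],  # towards ranks first
                                        -s["speed"]))      # faster ranks first
```

For example, a ship 3 km away heading towards the first ship outranks one at the same distance heading away, which in turn outranks a ship 12 km away.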
Step 207: an alarm is issued based on the hazard level of the second vessel.
Optionally, this step 207 may include:
and outputting, for each of the plurality of second ships in descending order of danger level, the course, the navigational speed and the distance from the first ship, so as to give the alarm.
Because alarms are issued in descending order of danger level, the first ship can immediately avoid the second ship with the highest collision probability, effectively safeguarding the ship.
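A minimal sketch of this ordered alarm output, assuming danger levels are encoded as integers with smaller meaning more dangerous (the encoding and field names are assumptions):

```python
# Hypothetical sketch: emit one alarm message per second ship, highest
# danger level first, reporting course, speed and distance.
def alarms(ships):
    out = []
    for s in sorted(ships, key=lambda s: s["danger"]):
        out.append(f"ALERT: course={s['course']} speed={s['speed']} km/h "
                   f"distance={s['distance']} km")
    return out
```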
In practical applications, the alarm device may be installed separately on the bridge of the first vessel, or integrated into the first vessel's existing alarm system.
The disclosed embodiment provides an all-weather ship identification system, which is suitable for implementing the all-weather ship identification method shown in fig. 2 or fig. 3. Fig. 8 is a schematic structural diagram of a system for identifying a ship all weather according to an embodiment of the present disclosure. Referring to fig. 8, the system includes:
the acquisition module 301 is configured to acquire a plurality of images around a first ship, where shooting times of the plurality of images are different;
the identification module 302 is configured to identify a ship in each image by using a deep learning algorithm to obtain a position of a second ship in each image;
a grading module 303, configured to determine a danger level of the second ship according to a position of the second ship in each image;
and an alarm module 304 for issuing an alarm according to the danger level of the second vessel.
Optionally, the obtaining module 301 may include:
the information acquisition submodule is used for acquiring environmental information around the first ship;
the level determining submodule is used for determining the visibility level around the first ship according to the environmental information;
the shooting control submodule is used for controlling the camera to continuously shoot a plurality of images around the first ship when the visibility level reaches a set standard, and for controlling the laser pan-tilt camera to continuously shoot a plurality of images around the first ship when the visibility level does not reach the set standard.
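The shooting control submodule's selection logic can be illustrated roughly as follows; the numeric visibility levels and the threshold are assumptions, not values from the patent:

```python
# Hypothetical sketch: pick the imaging device from the visibility level.
# A visible-light camera is used when visibility meets the set standard,
# and a laser pan-tilt camera otherwise (e.g. at night or in fog).
def choose_sensor(visibility_level: int, threshold: int = 3) -> str:
    return "camera" if visibility_level >= threshold else "laser_pan_tilt"

print(choose_sensor(4))  # -> camera
print(choose_sensor(1))  # -> laser_pan_tilt
```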
Optionally, the identifying module 302 may include:
the convolution layer is used for extracting image characteristics from the image;
the area suggestion network is used for generating a candidate area according to the image characteristics;
the pooling layer is used for extracting region characteristics according to the image characteristics and the candidate regions;
and the classifier is used for determining the category of the candidate region according to the region characteristics to obtain the position of the second ship in the image.
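The identification module described above follows a two-stage, Faster R-CNN-style pipeline (convolution layers, region proposal network, pooling layer, classifier). A purely structural Python sketch, with placeholder functions standing in for the real network components — all names and return values here are hypothetical, not an actual implementation:

```python
# Hypothetical structural sketch of the identification module's data flow.
def conv_features(image):
    # convolution layers: extract image features
    return {"feature_map": image}

def region_proposals(features):
    # region proposal network: generate candidate regions from the features
    return [(0, 0, 100, 50)]  # candidate boxes as (x, y, w, h)

def roi_pool(features, boxes):
    # pooling layer: extract per-region features from the feature map
    return [(features["feature_map"], box) for box in boxes]

def classify(region_feats):
    # classifier: assign a category to each candidate region
    return [{"class": "ship", "box": box} for _, box in region_feats]

def detect_ships(image):
    feats = conv_features(image)
    boxes = region_proposals(feats)
    return classify(roi_pool(feats, boxes))
```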
Optionally, the grading module 303 may include:
the first determining submodule is used for determining, according to the position of the second ship in each image, the distance between the second ship and the first ship at the shooting time of that image;
the second determining submodule is used for determining the course and the navigational speed of the second ship according to the shooting times of the images and the distances between the second ship and the first ship;
and the third determining submodule is used for determining the danger level of the second ship according to the course, the navigational speed and the distance between the first ship and the second ship.
Alternatively, the alarm module 304 may be used to,
and outputting the course, the navigational speed and the distance between the first ship and the plurality of second ships in turn according to the sequence of the danger levels from high to low to give an alarm.
Optionally, the obtaining module 301 may also be configured to,
and acquiring a plurality of images of the positions of the marked ships.
Accordingly, the system may further include:
and the training module is used for training the model of the deep learning algorithm by adopting a plurality of images marked with the positions of the ships.
Optionally, the system may further include:
the storage module is used for storing a plurality of images around the first ship;
and the updating module is used for updating the model of the deep learning algorithm according to the plurality of images around the first ship.
It should be noted that: in the system for identifying a ship in all weather provided by the above embodiment, only the division of the above functional modules is taken as an example when identifying a ship in all weather, and in practical application, the above function distribution can be completed by different functional modules according to needs, that is, the internal structure of the system is divided into different functional modules to complete all or part of the above described functions. In addition, the system for identifying the ship in all weather provided by the embodiment and the method embodiment for identifying the ship in all weather belong to the same concept, and the specific implementation process is described in the method embodiment and is not described herein again.
The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended to be exemplary only and not to limit the present disclosure, and any modification, equivalent replacement, or improvement made without departing from the spirit and scope of the present disclosure is to be considered as the same as the present disclosure.
Claims (10)
1. A method for all-weather identification of a ship, the method comprising:
acquiring a plurality of images around a first ship, wherein the shooting time of the plurality of images is different;
identifying ships in each image by adopting a deep learning algorithm to obtain the position of a second ship in each image;
determining the danger level of the second ship according to the position of the second ship in each image;
and sending an alarm according to the danger level of the second ship.
2. The method of claim 1, wherein said acquiring a plurality of images of the surroundings of the first vessel comprises:
acquiring environmental information around the first ship;
determining the visibility level around the first ship according to the environment information;
when the visibility grade reaches a set standard, controlling a camera to continuously shoot a plurality of images around the first ship;
and when the visibility grade does not reach the set standard, controlling a laser pan-tilt camera to continuously shoot a plurality of images around the first ship.
3. The method of claim 1 or 2, wherein said identifying a second vessel in each of said images using a deep learning algorithm comprises:
extracting image features from the image;
generating a candidate region according to the image characteristics;
extracting region features according to the image features and the candidate regions;
and determining the category of the candidate region according to the region feature to obtain the position of the second ship in the image.
4. The method of claim 1 or 2, wherein said determining a hazard level of the second vessel based on the position of the second vessel in each of the images comprises:
determining, according to the position of the second ship in each of the images, the distance between the second ship and the first ship at the shooting time of the image;
determining the course and the navigational speed of the second ship according to the shooting times of the images and the distances between the second ship and the first ship;
and determining the danger level of the second ship according to the course, the navigational speed and the distance between the second ship and the first ship.
5. The method of claim 1 or 2, wherein said issuing an alert based on a hazard level of the second vessel comprises:
and outputting the course, the navigational speed and the distance between the second ship and the first ship in turn according to the sequence of the danger levels from high to low to give an alarm.
6. An all-weather identification system for a ship, said system comprising:
the acquisition module is used for acquiring a plurality of images around a first ship, and the shooting time of the plurality of images is different;
the identification module is used for identifying ships in each image by adopting a deep learning algorithm to obtain the position of a second ship in each image;
the grading module is used for determining the danger level of the second ship according to the position of the second ship in each image;
and the alarm module is used for sending out an alarm according to the danger level of the second ship.
7. The system of claim 6, wherein the acquisition module comprises:
the information acquisition submodule is used for acquiring environmental information around the first ship;
the level determining submodule is used for determining the visibility level around the first ship according to the environment information;
the shooting control submodule is used for controlling a camera to continuously shoot a plurality of images around the first ship when the visibility grade reaches a set standard, and for controlling a laser pan-tilt camera to continuously shoot a plurality of images around the first ship when the visibility grade does not reach the set standard.
8. The system of claim 6 or 7, wherein the identification module comprises:
a convolutional layer for extracting image features from the image;
the area suggestion network is used for generating a candidate area according to the image characteristics;
the pooling layer is used for extracting region features according to the image features and the candidate regions;
and the classifier is used for determining the category of the candidate region according to the region characteristics to obtain the position of the second ship in the image.
9. The system of claim 6 or 7, wherein the grading module comprises:
the first determining submodule is used for determining, according to the position of the second ship in each image, the distance between the second ship and the first ship at the shooting time of that image;
the second determining submodule is used for determining the course and the navigational speed of the second ship according to the shooting times of the images and the distances between the second ship and the first ship;
and the third determining submodule is used for determining the danger level of the second ship according to the course, the navigational speed and the distance between the second ship and the first ship.
10. The system of claim 6 or 7, wherein the alarm module is configured to,
and outputting the course, the navigational speed and the distance between the second ship and the first ship in turn according to the sequence of the danger levels from high to low to give an alarm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010001169.4A CN111241948B (en) | 2020-01-02 | 2020-01-02 | Method and system for all-weather ship identification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111241948A true CN111241948A (en) | 2020-06-05 |
CN111241948B CN111241948B (en) | 2023-10-31 |
Family
ID=70865848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010001169.4A Active CN111241948B (en) | 2020-01-02 | 2020-01-02 | Method and system for all-weather ship identification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111241948B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111846127A (en) * | 2020-06-30 | 2020-10-30 | 中海油能源发展股份有限公司 | Image recognition monitoring system for preventing ship touch in offshore oil tanker export operation |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003178131A (en) * | 2001-12-11 | 2003-06-27 | Tadashi Goino | Risk control method and system |
JP2005353032A (en) * | 2004-05-12 | 2005-12-22 | National Maritime Research Institute | Ship navigation support apparatus |
CN101655562A (en) * | 2009-09-11 | 2010-02-24 | 深圳市高伦技术有限公司 | Anti-collision alarming radar system and device for ships |
KR20110037069A (en) * | 2009-10-05 | 2011-04-13 | 한국해양대학교 산학협력단 | Apparatus for displaying collision risk of vessel and method for displaying collision risk of vessel |
CN102039992A (en) * | 2009-10-14 | 2011-05-04 | 古野电气株式会社 | Navigation assisting device |
CN102147981A (en) * | 2010-12-20 | 2011-08-10 | 成都天奥信息科技有限公司 | Method for warning of warning region of shipborne automatic identification system |
KR20120033853A (en) * | 2010-09-30 | 2012-04-09 | 창원대학교 산학협력단 | Ship collision avoidance and recognition system |
KR101193687B1 (en) * | 2011-04-18 | 2012-10-22 | 창원대학교 산학협력단 | Sailing control system for avoiding ship collision |
KR101378859B1 (en) * | 2013-02-01 | 2014-03-31 | 삼성중공업 주식회사 | Apparatus and method for discriminating dangerous ship and pirate ship eradication system using it |
CN104050329A (en) * | 2014-06-25 | 2014-09-17 | 哈尔滨工程大学 | Method for detecting degree of risk of ship collision |
CN104535066A (en) * | 2014-12-19 | 2015-04-22 | 大连海事大学 | Marine target and electronic chart superposition method and system in on-board infrared video image |
KR20150139323A (en) * | 2014-06-03 | 2015-12-11 | 목포대학교산학협력단 | Method for preventing the ship collision using both satellite communications and ad-hoc communications |
KR101805564B1 (en) * | 2017-06-12 | 2018-01-18 | (주)지인테크 | Alarm system for prevent ship collision and method thereby |
CN107972662A (en) * | 2017-10-16 | 2018-05-01 | 华南理工大学 | To anti-collision warning method before a kind of vehicle based on deep learning |
CN108550281A (en) * | 2018-04-13 | 2018-09-18 | 武汉理工大学 | A kind of the ship DAS (Driver Assistant System) and method of view-based access control model AR |
CN108557030A (en) * | 2018-03-16 | 2018-09-21 | 汝州华超新能源科技有限公司 | A kind of ship sea operation monitoring method and monitoring system |
CN109697892A (en) * | 2019-02-22 | 2019-04-30 | 湖北大学 | A kind of ship collision risk intelligent early-warning method of space-time perception |
CN110441017A (en) * | 2019-07-19 | 2019-11-12 | 武汉理工大学 | A kind of Collision Accidents of Ships pilot system and test method |
Non-Patent Citations (1)
Title |
---|
Wang Guihuai; Xie Shuo; Chu Xiumin; Luo Tianjiao: "Deep-learning-based image recognition method for vessels ahead of an unmanned surface vessel", Ship Engineering, no. 04 *
Also Published As
Publication number | Publication date |
---|---|
CN111241948B (en) | 2023-10-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102604969B1 (en) | Autonomous navigation method using image segmentation | |
US20200217950A1 (en) | Resolution of elevation ambiguity in one-dimensional radar processing | |
CN109409283B (en) | Method, system and storage medium for tracking and monitoring sea surface ship | |
US11783568B2 (en) | Object classification using extra-regional context | |
CN110531376B (en) | Obstacle detection and tracking method for port unmanned vehicle | |
US20220024549A1 (en) | System and method for measuring the distance to an object in water | |
KR102466804B1 (en) | Autonomous navigation method using image segmentation | |
KR20210090572A (en) | Device and method for monitoring a berthing | |
US20230038494A1 (en) | Administrative server in ship navigation assistance system, ship navigation assistance method, and ship navigation assistance program | |
Ruiz et al. | A short-range ship navigation system based on ladar imaging and target tracking for improved safety and efficiency | |
US10895802B1 (en) | Deep learning and intelligent sensing systems for port operations | |
CN113050121A (en) | Ship navigation system and ship navigation method | |
CN112464994A (en) | Boat stern wave identification and removal method based on PointNet network | |
CN115620559A (en) | Ship safety management method, system and equipment based on intelligent sensing | |
CN110865394A (en) | Target classification system based on laser radar data and data processing method thereof | |
CN111241948B (en) | Method and system for all-weather ship identification | |
CN115267827A (en) | Laser radar harbor area obstacle sensing method based on height density screening | |
CN114120275A (en) | Automatic driving obstacle detection and recognition method and device, electronic equipment and storage medium | |
KR20220038265A (en) | Distance measurement method and distance measurement device using the same | |
KR102249156B1 (en) | Sailing assistance device using augmented reality image | |
CN110895680A (en) | Unmanned ship water surface target detection method based on regional suggestion network | |
US20240193904A1 (en) | Method and system for determining a region of water clearance of a water surface | |
WO2023223659A1 (en) | Recognition system, recognition apparatus, recognition method, recognition program, and recognition data generation method | |
EP4173942A1 (en) | Navigation assistance device using augmented reality and 3d images | |
US20220189216A1 (en) | Safe driving level evaluation device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||