CN114863373A - Offshore unmanned platform monitoring method and offshore unmanned platform - Google Patents

Offshore unmanned platform monitoring method and offshore unmanned platform

Info

Publication number
CN114863373A
CN114863373A (Application CN202210411423.7A)
Authority
CN
China
Prior art keywords
camera
image
fog
sea
ship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210411423.7A
Other languages
Chinese (zh)
Other versions
CN114863373B (en)
Inventor
黄志成
董超
郑兵
田联房
陈焱琨
陈凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Sea Survey Technology Center State Oceanic Administration (South China Sea Marine Buoy Center)
South China University of Technology SCUT
Original Assignee
South China Sea Survey Technology Center State Oceanic Administration (South China Sea Marine Buoy Center)
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Sea Survey Technology Center State Oceanic Administration (South China Sea Marine Buoy Center) and South China University of Technology SCUT
Priority to CN202210411423.7A
Publication of CN114863373A
Application granted
Publication of CN114863373B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses an offshore unmanned platform monitoring method and an offshore unmanned platform. The method comprises: judging from the gray histogram whether the image collected by the camera contains fog, defogging foggy images according to a defogging model, and preprocessing the images; performing target detection on the images with YOLOv5 to obtain ship bounding boxes; detecting the sea-sky line in the images; and, if a ship bounding box lies within the warning area formed between the sea-sky line and a warning line, and the area occupied by the bounding box is larger than a preset area threshold, issuing an illegal ship intrusion signal. Embodiments of the invention judge intrusion with the combined structure of YOLOv5 detection boxes and sea-sky-line detection, and add a series of preprocessing steps to improve the detection rate in complex weather.

Description

Offshore unmanned platform monitoring method and offshore unmanned platform
Technical Field
The invention relates to the field of intelligent monitoring, in particular to an offshore unmanned platform monitoring method and an offshore unmanned platform.
Background
Marine buoys are deployed throughout China's sea areas and are important offshore unmanned monitoring platforms. They remain unsupervised for long periods and carry high-value detection instruments, so lawless persons often damage equipment on the buoys and steal it. Existing monitoring platforms integrate much hardware and many sensors, for example AIS, ranging sensors, and cameras, and the information from each sensor must be fused, so the overall platform structure is relatively complicated and bulky.
Disclosure of Invention
The embodiment of the invention provides an offshore unmanned platform monitoring method and an offshore unmanned platform, which judge intrusion with the combined structure of YOLOv5 detection boxes and sea-sky-line detection and add a series of preprocessing steps to improve the detection rate in complex weather.
A first aspect of an embodiment of the present application provides a method for monitoring an offshore unmanned platform, including:
converting an image collected by a camera into a grayscale image, and counting the number of pixels at each gray intensity;
recording each gray intensity whose pixel count is larger than a preset pixel threshold as an effective gray intensity;
if the number of effective gray intensities is larger than a preset fog threshold, judging that the image collected by the camera contains fog, and grading the fog level according to the number of effective gray intensities; adjusting defogging parameters in a defogging model according to the fog level, defogging the image collected by the camera according to the defogging model, and preprocessing the image collected by the camera;
if the number of effective gray intensities is smaller than or equal to the preset fog threshold, judging that the image collected by the camera does not contain fog, and preprocessing the image collected by the camera;
performing target detection on the image collected by the camera with YOLOv5 to obtain a ship bounding box;
detecting the sea-sky line in the image collected by the camera;
and if the ship bounding box lies within a warning area formed between the sea-sky line and a warning line, and the area occupied by the ship bounding box is larger than a preset area threshold, issuing an illegal ship intrusion signal.
In one possible implementation form of the first aspect, the defogging model is based on the atmospheric scattering model and involves the scene radiance, the global atmospheric light, and a medium transmission map.
In one possible implementation form of the first aspect, the preprocessing includes white balance adjustment, gamma correction, hue adjustment, contrast adjustment, and sharpening.
In a possible implementation manner of the first aspect, detecting the sea-sky line in the image collected by the camera specifically comprises:
performing image filtering and region segmentation on the image collected by the camera;
extracting edges with a Canny operator, and aggregating sea-sky-line candidate points with a k-means clustering algorithm;
and performing Hough line detection and least-squares fitting on the aggregated candidate points to obtain the sea-sky line.
In a possible implementation manner of the first aspect, the area occupied by the ship bounding box being greater than the preset area threshold specifically means:
the ratio of ship-bounding-box pixels to the image pixels collected by the camera is greater than a preset ratio threshold.
A second aspect of an embodiment of the present application provides an offshore unmanned monitoring platform, comprising:
a preprocessing module, used for converting the image collected by the camera into a grayscale image and counting the number of pixels at each gray intensity; recording each gray intensity whose pixel count is larger than a preset pixel threshold as an effective gray intensity; if the number of effective gray intensities is larger than a preset fog threshold, judging that the image collected by the camera contains fog, and grading the fog level according to the number of effective gray intensities; adjusting defogging parameters in a defogging model according to the fog level, defogging the image collected by the camera according to the defogging model, and preprocessing the image collected by the camera; if the number of effective gray intensities is smaller than or equal to the preset fog threshold, judging that the image collected by the camera does not contain fog, and preprocessing the image collected by the camera;
a target detection module, used for performing target detection on the image collected by the camera with YOLOv5 to obtain a ship bounding box;
a sea-sky-line module, used for detecting the sea-sky line in the image collected by the camera;
and an early warning module, used for issuing an illegal ship intrusion signal if the ship bounding box lies within a warning area formed between the sea-sky line and a warning line, and the area occupied by the ship bounding box is larger than a preset area threshold.
In one possible implementation of the second aspect, the defogging model is based on the atmospheric scattering model and involves the scene radiance, the global atmospheric light, and a medium transmission map.
In one possible implementation of the second aspect, the preprocessing includes white balance adjustment, gamma correction, hue adjustment, contrast adjustment, and sharpening.
In a possible implementation manner of the second aspect, detecting the sea-sky line in the image captured by the camera specifically comprises:
performing image filtering and region segmentation on the image collected by the camera;
extracting edges with a Canny operator, and aggregating sea-sky-line candidate points with a k-means clustering algorithm;
and performing Hough line detection and least-squares fitting on the aggregated candidate points to obtain the sea-sky line.
In a possible implementation manner of the second aspect, the area occupied by the ship bounding box being greater than the preset area threshold specifically means:
the ratio of ship-bounding-box pixels to the image pixels collected by the camera is greater than a preset ratio threshold.
Compared with the prior art, the embodiment of the invention provides an offshore unmanned platform monitoring method and an offshore unmanned platform that rely only on the camera installed on the monitoring platform to acquire images and on edge embedded equipment supporting deep-learning algorithms to process and analyze them, with no additional equipment required. The image collected by the camera is converted into a grayscale image, foggy and fog-free images are distinguished automatically, and the fog level is judged, so that fog discrimination and grading happen automatically in the real environment and the parameters of the defogging model are fine-tuned, improving detection accuracy; intrusion is judged with the combined structure of YOLOv5 detection boxes and sea-sky-line detection, drawing deeper information from the images collected by the camera.
Drawings
Fig. 1 is a schematic flow chart of a method for monitoring an offshore unmanned platform according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of fog determination and pre-processing of images captured by a camera according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of a sea-sky line detection of images captured by a camera according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image captured by the camera after sea-sky-line and target detection in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of a camera capturing images for warning zone division according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a method for monitoring an offshore unmanned platform, including:
and S10, converting the image collected by the camera into a gray image, and counting the number of pixel points of each gray intensity.
And S11, counting the gray intensity of the pixel points which are larger than the preset pixel threshold value, and recording the corresponding gray intensity as the effective gray intensity.
S12, if the number of the effective gray intensities is larger than a preset fog threshold value, judging that the image collected by the camera contains fog, and dividing fog levels according to the number of the effective gray intensities; and adjusting defogging parameters in a defogging model according to the fog level, defogging the image acquired by the camera according to the defogging model, and preprocessing the image acquired by the camera.
And S13, if the number of the effective gray intensities is less than or equal to a preset fog threshold value, judging that the image collected by the camera does not contain fog, and preprocessing the image collected by the camera.
And S14, carrying out target detection on the image collected by the camera by using a YOLOV5 to obtain a ship boundary frame.
And S15, detecting the sea-sky line in the image collected by the camera.
S16, if the ship boundary frame is located in a warning area formed by the distance of the sea antenna and a warning line, and the area occupied by the ship boundary frame is larger than a preset area threshold value, sending an illegal invasion signal of the ship.
S10-S13 constitute the preprocessing of the images collected by the camera. Complex weather is frequent at sea; fog is the most common case, and a foggy environment greatly degrades the detection effect, so image preprocessing such as deblurring and defogging is used to sharpen the images. In this embodiment, the fog/no-fog analysis and the fog grading are performed from the count statistics of the gray values in the gray histogram: compared with a normal image, the histogram of a foggy image over levels 0-255 is more concentrated in the low gray levels and has sharper peaks. After a picture is input, it first undergoes fog judgment, then enters the preprocessing stage, and is then sent separately to the sea-sky-line detector and the YOLOv5 detector (YOLOv5 detects ships, the other detector the sea-sky line); finally the results of the two detectors are drawn onto one picture and the early-warning judgment is made.
In this embodiment, the image collected by the camera is first converted into a grayscale image and its histogram is computed. The abscissa of the histogram is the gray intensity in [0, 255], and the ordinate is the number of pixels at that gray intensity.
H(k) = n_k (1)

where k is the abscissa of the histogram, k ∈ [0, 255], and n_k is the number of pixels whose gray intensity is k. After the statistics are gathered, a horizontal line of height T is set as a threshold:

T = n·a (2)

where n is the total number of pixels in the original image and a is a percentage parameter, set as required. The histogram H(k) is then compared against the horizontal line T, counting the gray levels whose pixel numbers exceed it:

h(k) = 1 if H(k) > T, 0 otherwise (3)

H_1 = Σ_{k=0}^{255} h(k) (4)

Setting a threshold T_1, the fog judgment is:

L = 1 if H_1 > T_1, 0 otherwise (5)

When L is 0, no fog is present; when L is 1, fog is present. After the fog judgment, the fog level is graded according to H_1.
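As a minimal Python sketch of the histogram-based fog judgment in equations (1)-(5): the percentage parameter a, the fog threshold T_1, and the level boundaries below are illustrative values assumed for the example, not taken from the patent.

```python
import cv2
import numpy as np

def judge_fog(image_bgr, a=0.001, t1=120):
    """Histogram-based fog judgment, equations (1)-(5); a and t1 are illustrative."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # H(k) = n_k: number of pixels at each gray intensity k in [0, 255], equation (1)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    T = gray.size * a               # T = n * a, equation (2)
    H1 = int(np.sum(hist > T))      # H_1: number of effective gray intensities, (3)-(4)
    L = int(H1 > t1)                # L = 1 means fog, equation (5)
    # Illustrative three-level grading of the fog by H_1
    if not L:
        level = 0                   # no fog
    elif H1 <= 170:
        level = 1                   # level I light fog (assumed boundary)
    elif H1 <= 220:
        level = 2                   # level II medium fog (assumed boundary)
    else:
        level = 3                   # level III heavy fog
    return L, level
```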
Illustratively, the defogging model is a model based on an atmospheric scattering model, including scene radiance, global atmospheric light, and medium transmission map.
The following fine tuning of the parameters of the defogging model is required:
based on the atmospheric scattering model, the fog image is obtained according to the following formula:
I(x)=J(x)t(x)+A(1-t(x)) (6)
where I(x) is the foggy image, J(x) is the original scene image, A is the global atmospheric light, and t(x) is the medium transmission map, defined as:

t(x) = e^(-βd(x)) (7)

where β is the scattering coefficient of the atmosphere and d(x) is the scene depth.
From equation (6), an approximate solution for t(x) can be derived with the dark channel prior:

t(x) = 1 - min_{y∈Ω(x)} min_c ( I^c(y) / A^c ) (8)

where Ω(x) is a local patch around x and c indexes the color channels. A parameter ω is introduced to control the degree of defogging:

t(x, ω) = 1 - ω · min_{y∈Ω(x)} min_c ( I^c(y) / A^c ) (9)

It should be noted that the parameters of the entire defogging model used in preprocessing are trained through a CNN (convolutional neural network), so the preprocessing must be differentiable; the parameter ω can then be optimized by backpropagation, making the defogging filter better suited to detection on foggy images. Because the preprocessing stage determines both whether the image collected by the camera is foggy and the fog level, a parameter γ is introduced for fine-tuning:

t(x, ω, γ) = 1 - γω · min_{y∈Ω(x)} min_c ( I^c(y) / A^c ) (10)
If the picture is judged to be fog-free, γ is set to 0, so that t(x, ω, γ) = 1; substituting into equation (6) gives I(x) = J(x), and the picture undergoes no defogging.
When the picture is judged to be foggy, the product γω is constrained according to the fog level. Suppose fog is divided into three levels: level I light fog, level II medium fog, and level III heavy fog, with t(x, ω, γ) ranging over (0.66, 0.99) for level I, (0.33, 0.66) for level II, and (0, 0.33) for level III. The finer the fog grading, the narrower each range; with ω known, the parameter γ is adjusted so that γω matches the range of the judged fog level, achieving the fine-tuning.
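The transmission fine-tuning of equations (6)-(10) might be applied at inference time as in the following sketch, which assumes the standard dark-channel estimate for the bracketed minimum term, a fixed scalar global atmospheric light A, and an illustrative transmission floor; in the patent, ω is learned by the CNN rather than fixed.

```python
import cv2
import numpy as np

def dark_channel(normalized, patch=15):
    # min over the color channels, then a min filter over the local patch Omega(x)
    mins = np.min(normalized, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(mins, kernel)

def defog(img_bgr, A=220.0, omega=0.95, gamma=1.0, t_floor=0.1):
    """A is assumed constant here; in practice it is estimated from the image."""
    I = img_bgr.astype(np.float32)
    # equation (10): t(x, w, g) = 1 - g * w * min_y min_c I^c(y) / A^c
    t = 1.0 - gamma * omega * dark_channel(I / A)
    t = np.clip(t, t_floor, 1.0)[..., None]
    # invert equation (6): J(x) = (I(x) - A) / t(x) + A
    J = (I - A) / t + A
    return np.clip(J, 0, 255).astype(np.uint8)

# gamma = 0 gives t = 1 everywhere, so J = I: a fog-free picture passes through unchanged.
```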
Illustratively, the preprocessing includes white balance adjustment, gamma correction, hue adjustment, contrast adjustment, and sharpening.
The algorithms used during preprocessing include defogging, image white balance, gamma transformation, tone adjustment, contrast stretching, and sharpening. These algorithms are differentiable, because their parameters are set automatically by the CNN network. Referring to fig. 2, preprocessing is performed after the fog judgment on the image collected by the camera.
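A rough, non-learned version of this preprocessing chain (white balance, gamma correction, contrast stretching, sharpening) could look as follows; in the patent the parameters come from the CNN, whereas the constants here are assumed defaults.

```python
import cv2
import numpy as np

def preprocess(img_bgr, gamma=1.2, alpha=1.1, beta=0.0):
    img = img_bgr.astype(np.float32)
    # Gray-world white balance (one common choice; the patent does not fix the method)
    means = img.reshape(-1, 3).mean(axis=0)
    img = img * (means.mean() / means)
    # Gamma correction
    img = 255.0 * np.clip(img / 255.0, 0, 1) ** (1.0 / gamma)
    # Contrast stretch: alpha * I + beta
    img = np.clip(alpha * img + beta, 0, 255).astype(np.uint8)
    # Unsharp-mask sharpening
    blur = cv2.GaussianBlur(img, (0, 0), 3)
    return cv2.addWeighted(img, 1.5, blur, -0.5, 0)
```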
For target detection, YOLOv5 is used as the detector. YOLOv5 further improves on the YOLOv4 algorithm; although it is not an officially recognized version, it is better suited to industrial requirements and has contributed strongly to the industrial deployment of the YOLO series of algorithms.
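For reference, a YOLOv5 detector can be loaded through torch.hub as in the Python sketch below; the yolov5s weights and the COCO class name 'boat' are stand-in assumptions, since the patent would use a model trained on its own ship data.

```python
import torch

# Load a small pretrained YOLOv5 model (a stand-in for the patent's ship detector)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

results = model('frame.jpg')            # accepts a path, numpy array, or PIL image
det = results.xyxy[0].cpu().numpy()     # rows: [x1, y1, x2, y2, confidence, class]
ship_boxes = [row[:4] for row in det if model.names[int(row[5])] == 'boat']
```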
Exemplarily, referring to fig. 3, detecting the sea-sky line in the image captured by the camera specifically comprises:
performing image filtering and region segmentation on the image collected by the camera;
extracting edges with a Canny operator, and aggregating sea-sky-line candidate points with a k-means clustering algorithm;
and performing Hough line detection and least-squares fitting on the aggregated candidate points to obtain the sea-sky line.
Sea-sky-line detection mainly cooperates with target detection and supplies the judgment condition for the subsequent early warning. A traditional image-detection method is adopted: after filtering, segmentation, aggregation, and fitting, the sea-sky line is obtained. This embodiment improves the accuracy of sea-sky-line detection in foggy and low-light weather.
Illustratively, the area occupied by the ship bounding box being greater than the preset area threshold specifically means:
the ratio of ship-bounding-box pixels to the image pixels collected by the camera is greater than a preset ratio threshold.
In practice, 30% may typically be selected as the preset ratio threshold.
The present embodiment jointly determines the relative distance of a vessel from the YOLOv5 target detection and the sea-sky-line detection. After YOLOv5 detection and sea-sky-line detection, results like those shown in fig. 4 are expected.
Since the specific coordinate position of a ship cannot be obtained from the image, the approach adopted here is to judge the ship against a relative reference position. The only reference line obtainable on the sea surface is the sea-sky line, so sea-sky-line detection is introduced into the system: the sea-sky line serves as the distance reference line, and the early-warning area is set from that reference line. Meanwhile, a buoy floating on the sea surface shakes, so the camera swings with the buoy's floating and shaking, and a ship that is actually far away may appear inside the early-warning area; the size of the YOLOv5 detection box is therefore introduced to assist in judging the ship's relative distance. The expected results are shown in fig. 5.
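Putting the two detectors together, the early-warning judgment could be sketched as follows; the warning-line offset in pixels is an assumed parameter, and 30% is the example ratio threshold mentioned above.

```python
def intrusion_alarm(ship_boxes, horizon, img_shape, warn_offset=80, ratio_thresh=0.30):
    """Return True if any ship box lies in the warning area below the sea-sky line
    and its pixel-area ratio exceeds the threshold; warn_offset is illustrative."""
    h, w = img_shape[:2]
    m, b = horizon                              # sea-sky line y = m*x + b
    for (x1, y1, x2, y2) in ship_boxes:
        horizon_y = m * (x1 + x2) / 2.0 + b     # horizon row at the box centre
        in_warning_area = horizon_y <= y2 <= horizon_y + warn_offset
        area_ratio = (x2 - x1) * (y2 - y1) / float(h * w)
        if in_warning_area and area_ratio > ratio_thresh:
            return True                         # illegal ship intrusion signal
    return False
```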
Compared with the prior art, the embodiment of the invention provides an offshore unmanned platform monitoring method that relies only on the camera installed on the monitoring platform to acquire images and on edge embedded equipment supporting deep-learning algorithms to process and analyze them, with no additional equipment required. The image collected by the camera is converted into a grayscale image, foggy and fog-free images are distinguished automatically, and the fog level is judged, so that fog discrimination and grading happen automatically in the real environment and the parameters of the defogging model are fine-tuned, improving detection accuracy; intrusion is judged with the combined structure of YOLOv5 detection boxes and sea-sky-line detection, drawing deeper information from the images collected by the camera.
A second aspect of an embodiment of the present application provides an offshore unmanned monitoring platform, comprising: a preprocessing module, a target detection module, a sea-sky-line module, and an early warning module.
The preprocessing module is used for converting the image collected by the camera into a grayscale image and counting the number of pixels at each gray intensity; recording each gray intensity whose pixel count is larger than a preset pixel threshold as an effective gray intensity; if the number of effective gray intensities is larger than a preset fog threshold, judging that the image collected by the camera contains fog, and grading the fog level according to the number of effective gray intensities; adjusting defogging parameters in a defogging model according to the fog level, defogging the image collected by the camera according to the defogging model, and preprocessing the image collected by the camera; and if the number of effective gray intensities is smaller than or equal to the preset fog threshold, judging that the image collected by the camera does not contain fog, and preprocessing the image collected by the camera.
and the target detection module is used for carrying out target detection on the image acquired by the camera by using the YOLOV5 to obtain a ship boundary frame.
And the target detection module is used for carrying out target detection on the image acquired by the camera by using the YOLOV5 to obtain a ship boundary frame.
And the early warning module is used for sending an illegal invasion signal of the ship if the ship boundary frame is positioned in a warning area formed by the distance of the sea antenna and a warning line, and the occupied area of the ship boundary frame is larger than a preset area threshold value.
Illustratively, the defogging model is based on the atmospheric scattering model and involves the scene radiance, the global atmospheric light, and a medium transmission map.
Illustratively, the preprocessing includes white balance adjustment, gamma correction, hue adjustment, contrast adjustment, and sharpening.
Illustratively, detecting the sea-sky line in the image captured by the camera specifically comprises:
performing image filtering and region segmentation on the image collected by the camera;
extracting edges with a Canny operator, and aggregating sea-sky-line candidate points with a k-means clustering algorithm;
and performing Hough line detection and least-squares fitting on the aggregated candidate points to obtain the sea-sky line.
Illustratively, the area occupied by the ship bounding box being greater than the preset area threshold specifically means:
the ratio of ship-bounding-box pixels to the image pixels collected by the camera is greater than a preset ratio threshold.
Compared with the prior art, the embodiment of the invention provides an offshore unmanned monitoring platform that relies only on the camera installed on the monitoring platform to acquire images and on edge embedded equipment supporting deep-learning algorithms to process and analyze them, with no additional equipment required. The image collected by the camera is converted into a grayscale image, foggy and fog-free images are distinguished automatically, and the fog level is judged, so that fog discrimination and grading happen automatically in the real environment and the parameters of the defogging model are fine-tuned, improving detection accuracy; intrusion is judged with the combined structure of YOLOv5 detection boxes and sea-sky-line detection, drawing deeper information from the images collected by the camera.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working process of the platform described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
The whole monitoring platform processes and judges entirely from the images acquired by the camera. The hardware used comprises only a camera and embedded equipment supporting deep-learning algorithms (carrying the preprocessing, target detection, and early warning modules), so no additional AIS equipment, ranging sensors, or the like are needed, greatly reducing cost. For target detection in complex weather, compared with the prior art, the method automatically distinguishes foggy from fog-free images and judges the fog level, so fog discrimination and grading happen automatically in the real environment, the parameters of the defogging module are more finely adjusted, and detection accuracy is improved.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. An offshore unmanned platform monitoring method, comprising:
converting an image collected by a camera into a grayscale image, and counting the number of pixels at each gray intensity;
recording each gray intensity whose pixel count is larger than a preset pixel threshold as an effective gray intensity;
if the number of effective gray intensities is larger than a preset fog threshold, judging that the image collected by the camera contains fog, and grading the fog level according to the number of effective gray intensities; adjusting defogging parameters in a defogging model according to the fog level, defogging the image collected by the camera according to the defogging model, and preprocessing the image collected by the camera;
if the number of effective gray intensities is smaller than or equal to the preset fog threshold, judging that the image collected by the camera does not contain fog, and preprocessing the image collected by the camera;
performing target detection on the image collected by the camera with YOLOv5 to obtain a ship bounding box;
detecting the sea-sky line in the image collected by the camera;
and if the ship bounding box lies within a warning area formed between the sea-sky line and a warning line, and the area occupied by the ship bounding box is larger than a preset area threshold, issuing an illegal ship intrusion signal.
2. The offshore unmanned platform monitoring method of claim 1, wherein the defogging model is based on the atmospheric scattering model and involves the scene radiance, the global atmospheric light, and a medium transmission map.
3. The offshore unmanned platform monitoring method of claim 1, wherein the pre-processing comprises white balance adjustment, gamma correction, hue adjustment, contrast adjustment and sharpening settings.
4. The offshore unmanned platform monitoring method of claim 1, wherein detecting the sea-sky line in the image captured by the camera specifically comprises:
performing image filtering and region segmentation on the image collected by the camera;
extracting edges with a Canny operator, and aggregating sea-sky-line candidate points with a k-means clustering algorithm;
and performing Hough line detection and least-squares fitting on the aggregated candidate points to obtain the sea-sky line.
5. The offshore unmanned platform monitoring method of claim 1, wherein the area occupied by the ship bounding box being greater than the preset area threshold specifically means:
the ratio of ship-bounding-box pixels to the image pixels collected by the camera is greater than a preset ratio threshold.
6. An offshore unmanned monitoring platform, comprising:
a preprocessing module, used for converting the image collected by the camera into a grayscale image and counting the number of pixels at each gray intensity; recording each gray intensity whose pixel count is larger than a preset pixel threshold as an effective gray intensity; if the number of effective gray intensities is larger than a preset fog threshold, judging that the image collected by the camera contains fog, and grading the fog level according to the number of effective gray intensities; adjusting defogging parameters in a defogging model according to the fog level, defogging the image collected by the camera according to the defogging model, and preprocessing the image collected by the camera; if the number of effective gray intensities is smaller than or equal to the preset fog threshold, judging that the image collected by the camera does not contain fog, and preprocessing the image collected by the camera;
a target detection module, used for performing target detection on the image collected by the camera with YOLOv5 to obtain a ship bounding box;
a sea-sky-line module, used for detecting the sea-sky line in the image collected by the camera;
and an early warning module, used for issuing an illegal ship intrusion signal if the ship bounding box lies within a warning area formed between the sea-sky line and a warning line, and the area occupied by the ship bounding box is larger than a preset area threshold.
7. The offshore unmanned monitoring platform of claim 6, wherein the defogging model is based on the atmospheric scattering model and involves the scene radiance, the global atmospheric light, and a medium transmission map.
8. The offshore unmanned monitoring platform of claim 6, wherein the pre-processing comprises white balance adjustment, gamma correction, hue adjustment, contrast adjustment, and sharpening settings.
9. The offshore unmanned monitoring platform of claim 6, wherein detecting the sea-sky line in the image captured by the camera comprises:
performing image filtering and region segmentation on the image collected by the camera;
extracting edges with a Canny operator, and aggregating sea-sky-line candidate points with a k-means clustering algorithm;
and performing Hough line detection and least-squares fitting on the aggregated candidate points to obtain the sea-sky line.
10. The offshore unmanned monitoring platform of claim 6, wherein the area occupied by the ship bounding box being greater than the preset area threshold specifically means:
the ratio of ship-bounding-box pixels to the image pixels collected by the camera is greater than a preset ratio threshold.
CN202210411423.7A 2022-04-19 2022-04-19 Marine unmanned platform monitoring method and marine unmanned platform Active CN114863373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210411423.7A CN114863373B (en) 2022-04-19 2022-04-19 Marine unmanned platform monitoring method and marine unmanned platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210411423.7A CN114863373B (en) 2022-04-19 2022-04-19 Marine unmanned platform monitoring method and marine unmanned platform

Publications (2)

Publication Number Publication Date
CN114863373A (en) 2022-08-05
CN114863373B CN114863373B (en) 2024-06-04

Family

ID=82632006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210411423.7A Active CN114863373B (en) 2022-04-19 2022-04-19 Marine unmanned platform monitoring method and marine unmanned platform

Country Status (1)

Country Link
CN (1) CN114863373B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116343144A (en) * 2023-05-24 2023-06-27 武汉纺织大学 Real-time target detection method integrating visual perception and self-adaptive defogging

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902972A (en) * 2014-03-21 2014-07-02 哈尔滨工程大学 Water surface moving platform visual system image analyzing and processing method
KR20160115130A (en) * 2015-03-26 2016-10-06 주식회사 네오카텍 Marine risk management system and marine risk management method using marine object distance measuring system with monocular camera
CN107369283A (en) * 2017-07-21 2017-11-21 国家***第海洋研究所 A kind of ocean anchor system buoy early warning system and method based on image recognition
CN108681691A (en) * 2018-04-09 2018-10-19 上海大学 A kind of marine ships and light boats rapid detection method based on unmanned water surface ship
CN108765458A (en) * 2018-04-16 2018-11-06 上海大学 High sea situation unmanned boat sea-surface target dimension self-adaption tracking based on correlation filtering

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902972A (en) * 2014-03-21 2014-07-02 哈尔滨工程大学 Water surface moving platform visual system image analyzing and processing method
KR20160115130A (en) * 2015-03-26 2016-10-06 주식회사 네오카텍 Marine risk management system and marine risk management method using marine object distance measuring system with monocular camera
CN107369283A (en) * 2017-07-21 2017-11-21 国家***第海洋研究所 A kind of ocean anchor system buoy early warning system and method based on image recognition
CN108681691A (en) * 2018-04-09 2018-10-19 上海大学 A kind of marine ships and light boats rapid detection method based on unmanned water surface ship
CN108765458A (en) * 2018-04-16 2018-11-06 上海大学 High sea situation unmanned boat sea-surface target dimension self-adaption tracking based on correlation filtering

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116343144A (en) * 2023-05-24 2023-06-27 武汉纺织大学 Real-time target detection method integrating visual perception and self-adaptive defogging
CN116343144B (en) * 2023-05-24 2023-08-11 武汉纺织大学 Real-time target detection method integrating visual perception and self-adaptive defogging

Also Published As

Publication number Publication date
CN114863373B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN110414334B (en) Intelligent water quality identification method based on unmanned aerial vehicle inspection
CN108806334A (en) A kind of intelligent ship personal identification method based on image
CN104933680B (en) A kind of intelligent quick sea fog minimizing technology of unmanned boat vision system video
CN108229342B (en) Automatic sea surface ship target detection method
US20140314270A1 (en) Detection of floating objects in maritime video using a mobile camera
CN108198417B (en) A kind of road cruising inspection system based on unmanned plane
CN112381870B (en) Binocular vision-based ship identification and navigational speed measurement system and method
KR101986025B1 (en) Machine learning-based satellite sea fog detection apparatus and method
CN110060221B (en) Bridge vehicle detection method based on unmanned aerial vehicle aerial image
CN109063669B (en) Bridge area ship navigation situation analysis method and device based on image recognition
CN110866926A (en) Infrared remote sensing image rapid and fine sea-land segmentation method
CN111123251B (en) Target object detection method and device of radar
CN114863373B (en) Marine unmanned platform monitoring method and marine unmanned platform
CN114821358A (en) Optical remote sensing image marine ship target extraction and identification method
CN116152115A (en) Garbage image denoising processing method based on computer vision
CN107977608B (en) Method for extracting road area of highway video image
CN116311212B (en) Ship number identification method and device based on high-speed camera and in motion state
Heyn et al. A system for automated vision-based sea-ice concentration detection and floe-size distribution indication from an icebreaker
CN106991682B (en) Automatic port cargo ship extraction method and device
Anagnostopoulos et al. Using sliding concentric windows for license plate segmentation and processing
CN111626180B (en) Lane line detection method and device based on polarization imaging
US20100296743A1 (en) Image processing apparatus, image processing method and program
CN109886120B (en) Zebra crossing detection method and system
CN109886133A (en) A kind of ship detection method and system based on remote sensing image
CN109886899A (en) A kind of sea horizon image detecting method by dynamic disturbance

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant