CN113936029A - Video-based illegal fishing automatic detection method and system - Google Patents

Video-based illegal fishing automatic detection method and system

Info

Publication number
CN113936029A
Authority
CN
China
Prior art keywords
target
suspicious
detection
picture
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111328940.XA
Other languages
Chinese (zh)
Inventor
Lin Deyin
Deng Hongping
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yingjue Technology Co ltd
Original Assignee
Shanghai Yingjue Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yingjue Technology Co ltd
Priority to CN202111328940.XA
Publication of CN113936029A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a video-based illegal fishing automatic detection method and a video-based illegal fishing automatic detection system, wherein the method comprises the following steps: step S1: intercepting images containing a suspicious target from a monitoring video and marking them; step S2: preprocessing the pictures marked with the suspicious target; step S3: training a target detection network with the preprocessed pictures; step S4: extracting continuous multiple frames and performing background modeling to obtain a background image; step S5: comparing the current picture obtained from real-time monitoring with the background picture to obtain foreground pixels; step S6: filtering out connected domains in the foreground image that meet preset filtering criteria, taking the remaining connected domains as moving targets, and acquiring their positions; step S7: jointly analyzing the positions of the moving targets and the positions of targets detected by the trained target detection network, and filtering out false alarm targets; and acquiring multi-frame pictures of the suspicious target with the camera, repeatedly triggering steps S5 to S7, and determining the detection result.

Description

Video-based illegal fishing automatic detection method and system
Technical Field
The invention relates to the technical field of ship target detection, in particular to a video-based illegal fishing automatic detection method and system.
Background
With the continuing decline of China's natural fishery resources, protecting fish stocks in the Yangtze River, the Pearl River, Dongting Lake and other inland and coastal waters has become very urgent. In particular, the Yangtze River protection promoted nationally in the past two years has set off a nationwide wave of fish resource protection. Cracking down on illegal fishing is the core of fish resource protection. Deploying cameras along rivers, lakes and the sea and analyzing the targets and behaviors in the video to judge whether fish poaching is taking place is therefore a promising way of catching illegal fishing.
Patent document CN109918968A (application number: 201711319225.3) discloses a ship target detection method, which includes: performing gradient-based sea-sky-line extraction on an original visible light image to obtain the position of the sea-sky line in the image, and segmenting a sea-sky-line region image containing the target based on that position; performing saliency detection on the obtained sea-sky-line region image with an improved multi-scale phase spectrum algorithm to obtain a saliency image; and performing OTSU threshold segmentation on the saliency image, the segmented image being the ship target.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a video-based illegal fishing automatic detection method and system.
The invention provides a video-based illegal fishing automatic detection method, which comprises the following steps:
step S1: intercepting an image containing a suspicious target from an actually shot monitoring video, and marking the position of the suspicious target in the image in a rectangular frame form;
step S2: preprocessing the picture marked with the suspicious target to obtain a preprocessed picture;
step S3: training a target detection network by using the preprocessed picture marked with the suspicious target to obtain the trained target detection network;
step S4: extracting continuous multiframes based on the monitoring video, and performing background modeling by using an average background method to obtain a background image;
step S5: monitoring an acquired current picture in real time based on a camera, comparing the acquired current picture with a background picture pixel by pixel, and when the difference meets a preset condition, taking the pixel of which the difference meets the preset condition as a foreground pixel;
step S6: filtering connected domains which meet preset requirements in the foreground image, taking the remaining connected domains as moving targets, and acquiring the positions of the moving targets;
step S7: jointly analyzing the position of the obtained moving target and the position of a target detected based on the trained target detection network, and filtering the target if the detection result of the trained target detection network does not belong to the suspicious target; when the detection result of the trained target detection network belongs to a suspicious target, acquiring multi-frame pictures based on the camera, repeatedly triggering the steps S5 to S7 until the number of times of repetition reaches a preset value, and determining the detection result;
step S8: and tracking the detection target, analyzing the tracked track, and determining that the detection target track meets the fishing characteristics and belongs to the fishing behavior when the detection target track meets the preset requirement.
Preferably, the step S1 adopts: carrying out actual shooting through a visible light camera and an infrared camera to obtain a monitoring video of the actual shooting;
under the daytime condition, carrying out actual shooting through a visible light camera, obtaining a monitoring video of the actual shooting, and obtaining a shot suspicious target;
under the night condition, the infrared camera is used for actually shooting, the actually shot monitoring video is obtained, and the shot suspicious target is obtained.
Preferably, the step S2 adopts:
step S2.1: setting monitoring key points, wherein each key point corresponds to an ROI (region of interest) image which is a target detection area;
step S2.2: intercepting a current video picture at a current key point, and drawing an ROI image in a polygonal mode by using an annotation tool;
step S2.3: when the lens of the camera is zoomed out or zoomed in, the corresponding ROI image is amplified or reduced according to the lens parameters;
step S2.4: when target tracking by the camera causes the azimuth angle of the pan-tilt unit to deviate from the key point, calculating a new ROI (region of interest) according to the offset and the two adjacent ROI images;
step S2.5: and when the suspicious target is detected to enter the ROI image, judging that a boundary-crossing behavior has occurred.
Preferably, the target detection network adopts a YOLOv5 deep network.
Preferably, the step S7 adopts:
step S7.1: when the current frame detects a suspicious target, recording the position of the current suspicious target, and cropping a suspicious target sub-image according to the positioning result of the suspicious target;
step S7.2: repeatedly triggering step S7.1 in the subsequent N consecutive frames; when at least a preset number of frames contain suspicious targets and all corresponding suspicious target sub-images can be matched as the same target, the current detection result is considered stable, and tracking, behavior analysis and evidence obtaining are performed on the stable target.
Preferably, the behavioral analysis employs:
when the target position does not move within the observation time, the target is a static suspicious target; an early warning is given, but the target is temporarily not treated as an illegal fishing target;
when the track of the target is straight and passes through the monitoring area at a constant speed, the current ship is a target in normal form and is not treated as an illegal fishing target;
when the track of the target is a broken line, its speed is not uniform and its direction is variable, early warning and evidence obtaining are needed.
Preferably, the evidence is obtained by:
step S9: recording the real-time picture at the moment when the target is detected;
step S10: if, after a number of frames, behavior analysis finds that the behavior is not fish poaching, the video recording is cancelled;
step S11: calculating adjustment parameters of the lens according to the detection result of the suspicious target, so that the target is shown in full view in the picture; if the target exceeds the whole field of view, the lens is zoomed out so that the target appears completely; if the target is too small for its details to be clear, the lens is zoomed in so that the target fills the picture;
step S12: adjusting the azimuth angle of the camera in real time according to the target tracking result to enable the center of the picture to be coincident with the center of the target;
step S13: recording each frame of image of the moving picture of the target in a certain time as evidence-obtaining content.
According to the invention, the automatic detection system for illegal fishing based on the video comprises:
module M1: intercepting an image containing a suspicious target from an actually shot monitoring video, and marking the position of the suspicious target in the image in a rectangular frame form;
module M2: preprocessing the picture marked with the suspicious target to obtain a preprocessed picture;
module M3: training a target detection network by using the preprocessed picture marked with the suspicious target to obtain the trained target detection network;
module M4: extracting continuous multiframes based on the monitoring video, and performing background modeling by using an average background method to obtain a background image;
module M5: monitoring an acquired current picture in real time based on a camera, comparing the acquired current picture with a background picture pixel by pixel, and when the difference meets a preset condition, taking the pixel of which the difference meets the preset condition as a foreground pixel;
module M6: filtering connected domains which meet preset requirements in the foreground image, taking the remaining connected domains as moving targets, and acquiring the positions of the moving targets;
module M7: jointly analyzing the position of the obtained moving target and the position of a target detected based on the trained target detection network, and filtering the target if the detection result of the trained target detection network does not belong to the suspicious target; when the detection result of the trained target detection network belongs to a suspicious target, acquiring multi-frame pictures based on the camera, repeatedly triggering the module M5 to the module M7 until the repetition times reach a preset value, and determining the detection result;
module M8: and tracking the detection target, analyzing the tracked track, and determining that the detection target track meets the fishing characteristics and belongs to the fishing behavior when the detection target track meets the preset requirement.
Preferably, the module M1 employs: carrying out actual shooting through a visible light camera and an infrared camera to obtain a monitoring video of the actual shooting;
under the daytime condition, carrying out actual shooting through a visible light camera, obtaining a monitoring video of the actual shooting, and obtaining a shot suspicious target;
under the night condition, the infrared camera is used for actually shooting, the actually shot monitoring video is obtained, and the shot suspicious target is obtained.
Preferably, the module M2 employs:
module M2.1: setting monitoring key points, wherein each key point corresponds to an ROI (region of interest) image which is a target detection area;
module M2.2: intercepting a current video picture at a current key point, and drawing an ROI image in a polygonal mode by using an annotation tool;
module M2.3: when the lens of the camera is zoomed out or zoomed in, the corresponding ROI image is amplified or reduced according to the lens parameters;
module M2.4: when target tracking by the camera causes the azimuth angle of the pan-tilt unit to deviate from the key point, calculating a new ROI (region of interest) according to the offset and the two adjacent ROI images;
module M2.5: when the suspicious target is detected to enter the ROI image, judging that a boundary-crossing behavior has occurred;
the module M7 employs:
module M7.1: when the current frame detects a suspicious target, recording the position of the current suspicious target, and cropping a suspicious target sub-image according to the positioning result of the suspicious target;
module M7.2: repeatedly triggering module M7.1 in the subsequent N consecutive frames; when at least a preset number of frames contain suspicious targets and all corresponding suspicious target sub-images can be matched as the same target, the current detection result is considered stable, and tracking, behavior analysis and evidence obtaining are performed on the stable target.
Compared with the prior art, the invention has the following beneficial effects:
1. monitoring is automated, greatly reducing manpower and freeing law enforcement personnel from tedious monitoring tasks;
2. large-scale deployment is possible with no learning cost: only software copying and installation are needed;
3. as usage grows, more samples accumulate, and the performance of the algorithm can be greatly improved;
4. evidence is obtained automatically in real time, which facilitates law enforcement;
5. the system works around the clock, monitoring fish poaching both by day and by night;
6. both open water surfaces and complex waters such as mountain streams are covered;
7. various illegal behaviors are monitored, such as angling, net casting and poaching by fishing boats.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a flow chart of an automatic illegal fishing detection method based on video.
Fig. 2 shows a river channel navigable by ships.
Fig. 3 shows a waterway not navigable by ships.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications, obvious to those skilled in the art, can be made without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The invention provides a video-based automatic illegal fishing detection method. Its application scenario, the automatic detection of illegal fishing, is special in that many factors must be considered at once: fishing boats of various shapes, pedestrians, and the diverse trajectories boats follow while fishing, under daytime and night conditions and on open water surfaces as well as in complex waters such as mountain streams. Existing ship target detection schemes cannot simply be transferred to this scenario, because they make their decision on a single image and cannot draw additional information from real-time video content. Aiming at these particularities of automatic illegal fishing detection, the present video-based method and system are provided.
According to the present invention, a video-based illegal fishing automatic detection method is provided, as shown in fig. 1 to 3, and includes:
step S1: intercepting an image containing a suspicious target from an actually shot monitoring video, and marking the position of the suspicious target in the image in a rectangular frame form;
step S2: preprocessing the picture marked with the suspicious target to obtain a preprocessed picture;
step S3: training a target detection network by using the preprocessed picture marked with the suspicious target to obtain the trained target detection network;
step S4: extracting continuous multiframes based on the monitoring video, and performing background modeling by using an average background method to obtain a background image;
step S5: monitoring an acquired current picture in real time based on a camera, comparing the acquired current picture with a background picture pixel by pixel, and when the difference meets a preset condition, taking the pixel of which the difference meets the preset condition as a foreground pixel;
step S6: filtering connected domains which meet preset requirements in the foreground image, taking the remaining connected domains as moving targets, and acquiring the positions of the moving targets;
step S7: jointly analyzing the position of the obtained moving target and the position of a target detected based on the trained target detection network, and filtering the target if the detection result of the trained target detection network does not belong to the suspicious target; when the detection result of the trained target detection network belongs to a suspicious target, acquiring multi-frame pictures based on the camera, repeatedly triggering the steps S5 to S7 until the number of times of repetition reaches a preset value, and determining the detection result;
step S8: tracking the detection target and analyzing the tracked trajectory; when the trajectory is not straight and turns back and forth within a certain area, judging that it conforms to fishing characteristics and constitutes fishing behavior.
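Steps S4 to S6 above can be sketched in a few lines of Python. This is a minimal illustration only, assuming an average background, a fixed difference threshold, and 4-connected region labelling; the function names, the threshold of 25, and the min_area value are illustrative choices, not taken from the patent:

```python
import numpy as np

def average_background(frames):
    # Step S4: model the background as the per-pixel mean of consecutive frames.
    return np.mean(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=25):
    # Step S5: a pixel is foreground when its difference from the background
    # exceeds a preset threshold (thresh is an illustrative value).
    return np.abs(frame.astype(float) - background) > thresh

def connected_regions(mask, min_area=4):
    # Step S6: label 4-connected foreground regions, filter out those below
    # min_area, and return each surviving region's bounding box (x0, y0, x1, y1).
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Flood-fill one connected domain with an explicit stack.
                stack, pixels = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys, xs = zip(*pixels)
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

On a synthetic sequence with a static background, a bright patch in the current frame yields a single moving-target bounding box.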
Specifically, the joint analysis adopts: after the background image is obtained, differencing the current image against it yields foreground regions, each corresponding to a moving object, such as a ship, a person, a swaying tree, or large-area water ripples. The detection results produced by the target detection network on the current picture (individual detection boxes whose interiors are ships and pedestrians) are then associated with the foreground connected domains, since the positions of the boxes do not necessarily correspond to them. Some foreground regions are moving but are not targets of interest, being things other than boats or people; conversely, false alarms in target detection may frame background such as houses on the shore. Therefore, it is necessary to ensure that a foreground region of sufficiently large area exists inside each target detection box.
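The joint analysis can likewise be sketched as a filter over detection boxes, assuming for illustration that a box is kept when at least a fraction min_fg_frac of its pixels are foreground; the fraction 0.2 and the function name are hypothetical choices, not specified by the patent:

```python
import numpy as np

def filter_detections(det_boxes, fg_mask, min_fg_frac=0.2):
    # Keep a detection box (x0, y0, x1, y1) only if a sufficiently large
    # fraction of its pixels are foreground; otherwise treat it as a false
    # alarm (e.g. a house on the shore framed by the detector).
    kept = []
    for (x0, y0, x1, y1) in det_boxes:
        patch = fg_mask[y0:y1 + 1, x0:x1 + 1]
        if patch.size and patch.mean() >= min_fg_frac:
            kept.append((x0, y0, x1, y1))
    return kept
```

Foreground regions that have no matching detection box (swaying trees, water ripples) are discarded by the detector itself; this check removes the converse case, detector boxes with no real motion behind them.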
Specifically, the step S1 employs: carrying out actual shooting through a visible light camera and an infrared camera to obtain a monitoring video of the actual shooting;
under the daytime condition, carrying out actual shooting through a visible light camera, obtaining a monitoring video of the actual shooting, and obtaining a shot suspicious target;
under the night condition, the infrared camera is used for actually shooting, the actually shot monitoring video is obtained, and the shot suspicious target is obtained.
Specifically, the step S2 employs:
step S2.1: setting monitoring key points, wherein each key point corresponds to an ROI (region of interest) image which is a target detection area;
step S2.2: intercepting a current video picture at a current key point, and drawing an ROI image in a polygonal mode by using an annotation tool;
step S2.3: when the lens of the camera is zoomed out or zoomed in, the corresponding ROI image is amplified or reduced according to the lens parameters;
step S2.4: when target tracking by the camera causes the azimuth angle of the pan-tilt unit to deviate from the key point, calculating a new ROI (region of interest) according to the offset and the two adjacent ROI images;
step S2.5: and when the suspicious target is detected to enter the ROI image, judging that a boundary-crossing behavior has occurred.
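The out-of-range judgement of step S2.5 reduces to a point-in-polygon test against the hand-drawn ROI. The following ray-casting sketch is one standard way to implement it; the function name and the use of the target's centre point as the test point are assumptions for illustration:

```python
def point_in_polygon(pt, poly):
    # Ray-casting test: does point pt = (x, y) lie inside the polygon given
    # as a list of (x, y) vertices? Here it decides whether a suspicious
    # target's centre has entered the ROI drawn in step S2.2.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of pt.
        if (y0 > y) != (y1 > y):
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside
```

An odd number of edge crossings means the point is inside the ROI, i.e. a boundary-crossing event should be raised.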
Specifically, the target detection network adopts a YOLOv5 deep network.
Specifically, the step S7 employs:
step S7.1: when the current frame detects a suspicious target, recording the position of the current suspicious target, and cropping a suspicious target sub-image according to the positioning result of the suspicious target;
step S7.2: repeatedly triggering step S7.1 in the subsequent N consecutive frames; when at least a preset number of frames contain suspicious targets and all corresponding suspicious target sub-images can be matched as the same target, the current detection result is considered stable, and tracking, behavior analysis and evidence obtaining are performed on the stable target.
Specifically, targets appear in N frames, and the sub-image of the fishing boat is cropped from each frame. The fishing boats in adjacent frames are then matched in temporal order: a fishing boat in the current frame should find a target near its position in the next frame, and the two sub-images should match when compared on deep features. If, going forward in time, every adjacent pair of sub-images matches, then all N sub-images show the same ship and the detection is considered stable; otherwise it is unstable.
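This chain-matching stability check can be sketched as follows. As a stand-in for the deep features the patent mentions, a normalised intensity histogram compared by cosine similarity is used here; the histogram feature and the similarity threshold of 0.9 are illustrative substitutes, not the patent's method:

```python
import numpy as np

def feature(subimg, bins=16):
    # Stand-in for a deep feature: a unit-normalised intensity histogram.
    h, _ = np.histogram(subimg, bins=bins, range=(0, 256))
    h = h.astype(float)
    n = np.linalg.norm(h)
    return h / n if n else h

def detection_is_stable(subimgs, sim_thresh=0.9):
    # Step S7.2: the detection is stable if every pair of temporally adjacent
    # sub-images matches, i.e. the same vessel is seen through all N frames.
    feats = [feature(s) for s in subimgs]
    return all(float(np.dot(a, b)) >= sim_thresh
               for a, b in zip(feats, feats[1:]))
```

Only a target that passes this stability check proceeds to tracking, behavior analysis, and evidence collection.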
Specifically, the behavior analysis employs:
when the target position does not move within the observation time, the target is a static suspicious target; an early warning is given, but the target is temporarily not treated as an illegal fishing target;
when the track of the target is straight and passes through the monitoring area at a constant speed, the current ship is a target in normal form and is not treated as an illegal fishing target;
when the track of the target is a broken line, its speed is not uniform and its direction is variable, early warning and evidence obtaining are needed.
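A minimal sketch of this three-way behavior classification, assuming positions sampled at equal time intervals; the thresholds still_eps and line_eps and the straightness measure (net displacement versus path length) are illustrative choices, not from the patent:

```python
import math

def classify_track(points, still_eps=2.0, line_eps=0.15):
    # Classify a track (list of (x, y) positions at equal time steps) as
    # "static", "passing" (straight, roughly constant speed), or
    # "suspicious" (broken line / variable speed).
    steps = [(x1 - x0, y1 - y0)
             for (x0, y0), (x1, y1) in zip(points, points[1:])]
    speeds = [math.hypot(dx, dy) for dx, dy in steps]
    if max(speeds, default=0.0) < still_eps:
        return "static"
    # A straight, constant-speed pass: net displacement ~ total path length,
    # and per-step speeds close to their mean.
    net = math.hypot(points[-1][0] - points[0][0],
                     points[-1][1] - points[0][1])
    path = sum(speeds)
    mean = path / len(speeds)
    speed_var = max(abs(s - mean) for s in speeds) / mean
    if (path - net) / path < line_eps and speed_var < line_eps:
        return "passing"
    return "suspicious"
```

A static target triggers only an early warning; a "suspicious" classification is what triggers evidence collection.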
Specifically, the evidence obtaining adopts:
step S9: recording the real-time picture at the moment when the target is detected;
step S10: if, after a number of frames, behavior analysis finds that the behavior is not fish poaching, the video recording is cancelled;
step S11: calculating adjustment parameters of the lens according to the detection result of the suspicious target, so that the target is shown in full view in the picture; if the target exceeds the whole field of view, the lens is zoomed out so that the target appears completely; if the target is too small for its details to be clear, the lens is zoomed in so that the target fills the picture;
step S12: adjusting the azimuth angle of the camera in real time according to the target tracking result to enable the center of the picture to be coincident with the center of the target;
step S13: recording each frame of image of the moving picture of the target in a certain time as evidence-obtaining content.
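The lens and azimuth adjustments of steps S11 and S12 can be sketched as simple geometry, assuming pixel-space sizes and a hypothetical fill ratio of 0.8 for how much of the frame the target should occupy; both function names are illustrative:

```python
def zoom_adjustment(target_w, target_h, frame_w, frame_h, fill=0.8):
    # Step S11 sketch: a returned factor > 1 means zoom in (target too
    # small), < 1 means zoom out (target spills past the field of view).
    return min(frame_w * fill / target_w, frame_h * fill / target_h)

def pan_offset(target_cx, target_cy, frame_w, frame_h):
    # Step S12 sketch: the pixel offset that would bring the target centre
    # onto the picture centre.
    return frame_w / 2 - target_cx, frame_h / 2 - target_cy
```

Mapping these pixel quantities onto actual lens focal length and pan-tilt azimuth commands depends on the camera's calibration and is outside this sketch.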
According to the invention, the automatic detection system for illegal fishing based on the video comprises:
module M1: intercepting an image containing a suspicious target from an actually shot monitoring video, and marking the position of the suspicious target in the image in a rectangular frame form;
module M2: preprocessing the picture marked with the suspicious target to obtain a preprocessed picture;
module M3: training a target detection network by using the preprocessed picture marked with the suspicious target to obtain the trained target detection network;
module M4: extracting continuous multiframes based on the monitoring video, and performing background modeling by using an average background method to obtain a background image;
module M5: monitoring an acquired current picture in real time based on a camera, comparing the acquired current picture with a background picture pixel by pixel, and when the difference meets a preset condition, taking the pixel of which the difference meets the preset condition as a foreground pixel;
module M6: filtering connected domains which meet preset requirements in the foreground image, taking the remaining connected domains as moving targets, and acquiring the positions of the moving targets;
module M7: jointly analyzing the position of the obtained moving target and the position of a target detected based on the trained target detection network, and filtering the target if the detection result of the trained target detection network does not belong to the suspicious target; when the detection result of the trained target detection network belongs to a suspicious target, acquiring multi-frame pictures based on the camera, repeatedly triggering the module M5 to the module M7 until the repetition times reach a preset value, and determining the detection result;
module M8: tracking the detection target and analyzing the tracked trajectory; when the trajectory is not straight and turns back and forth within a certain area, judging that it conforms to fishing characteristics and constitutes fishing behavior.
Specifically, the joint analysis adopts: after the background image is obtained, differencing the current image against it yields foreground regions, each corresponding to a moving object, such as a ship, a person, a swaying tree, or large-area water ripples. The detection results produced by the target detection network on the current picture (individual detection boxes whose interiors are ships and pedestrians) are then associated with the foreground connected domains, since the positions of the boxes do not necessarily correspond to them. Some foreground regions are moving but are not targets of interest, being things other than boats or people; conversely, false alarms in target detection may frame background such as houses on the shore. Therefore, it is necessary to ensure that a foreground region of sufficiently large area exists inside each target detection box.
Specifically, the module M1 employs: carrying out actual shooting through a visible light camera and an infrared camera to obtain a monitoring video of the actual shooting;
under the daytime condition, carrying out actual shooting through a visible light camera, obtaining a monitoring video of the actual shooting, and obtaining a shot suspicious target;
under the night condition, the infrared camera is used for actually shooting, the actually shot monitoring video is obtained, and the shot suspicious target is obtained.
Specifically, the module M2 employs:
module M2.1: setting monitoring key points, wherein each key point corresponds to an ROI (region of interest) image which is a target detection area;
module M2.2: intercepting a current video picture at a current key point, and drawing an ROI image in a polygonal mode by using an annotation tool;
module M2.3: when the lens of the camera is zoomed out or zoomed in, the corresponding ROI image is amplified or reduced according to the lens parameters;
module M2.4: when target tracking by the camera causes the azimuth angle of the pan-tilt unit to deviate from the key point, calculating a new ROI (region of interest) according to the offset and the two adjacent ROI images;
module M2.5: and when the suspicious target is detected to enter the ROI image, judging that the suspicious target is out-of-range behavior.
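The border-crossing judgment of module M2.5 reduces to a point-in-polygon test on the target position, and the zoom handling of module M2.3 to scaling the polygon; a minimal sketch, where the function names, the ray-casting method, and the sample ROI are illustrative assumptions rather than the patent's own implementation:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting point-in-polygon test. `polygon` is a list of (x, y)
    vertices, as drawn with the annotation tool in module M2.2."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def scale_roi(polygon, zoom_factor, center=(0.0, 0.0)):
    """Enlarge or shrink the ROI polygon about `center` when the lens zooms
    (module M2.3); zoom_factor > 1 corresponds to zooming in."""
    cx, cy = center
    return [(cx + (x - cx) * zoom_factor, cy + (y - cy) * zoom_factor)
            for x, y in polygon]

roi = [(0, 0), (100, 0), (100, 50), (0, 50)]   # a rectangular water-surface region
print(point_in_polygon((40, 20), roi))          # True: target inside ROI, border-crossing
print(point_in_polygon((40, 80), roi))          # False: on shore, outside ROI
```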
Specifically, the target detection network adopts a Yolov5 deep network.
Specifically, the module M7 employs:
module M7.1: when the current frame detects a suspicious target, recording the position of the current suspicious target, and matting according to the positioning result of the suspicious target to obtain a suspicious target subgraph;
module M7.2: and in subsequent continuous N frames, repeatedly triggering the module M7.1, and when at least a preset number of frames have suspicious targets and all corresponding suspicious target subgraphs can be subjected to target matching, considering that the current detection result is stable, and performing tracking, behavior analysis and evidence obtaining on the stable targets.
Specifically, targets appear in N frames, and the fishing-boat sub-image is cropped from each frame. The fishing boats in every two adjacent frames are then matched in time order: a fishing vessel in the current frame should find a target near its position in the next frame, and the two sub-images should match when compared on depth features. If, from front to back in time, every pair of adjacent sub-images matches, all N sub-images show the same ship and the detection is considered stable; otherwise it is unstable.
Specifically, the behavior analysis employs:
when the target position does not move within the observation time, the target belongs to a static suspicious target, and early warning is given, but the target is not temporarily used as an illegal fishing target;
when the track of the target is straight and passes through the monitoring area at a constant speed, the current ship belongs to the target in a normal form and cannot be used as an illegal fishing target;
when the track of the target is a zigzag polyline, the movement speed is uneven, and the direction changes frequently, early warning and evidence collection are needed.
Specifically, the evidence obtaining adopts:
module M9: recording the real-time picture at the moment when the target is detected;
module M10: after a plurality of frames, behavior analysis finds that the behavior is not fish stealing behavior, and video recording is cancelled;
module M11: calculating the adjustment parameters of the lens according to the detection result of the suspicious target, so that the target appears in full view in the picture; if the target exceeds the whole visual field, the lens is zoomed out so that the target appears completely; if the target is too small to show details clearly, the lens is zoomed in so that the target fills the picture;
module M12: adjusting the azimuth angle of the camera in real time according to the target tracking result to enable the center of the picture to be coincident with the center of the target;
module M13: recording each frame of image of the moving picture of the target in a certain time as evidence-obtaining content.
Example 2
Example 2 is a preferred example of example 1
Fishing boat target detection
Two modes of operation: visible light and infrared light
The camera used by the method consists of a visible light camera and an infrared camera. Under the daytime condition, the visible light camera can shoot the clear color information of the ship. At night, the visible light camera cannot work, and at the moment, the infrared camera needs to be relied on. Therefore, the target detection and tracking method of the invention relates to two modes of visible light and infrared light.
Collection and labelling of samples
Pictures containing fishing boats are captured from the actually shot surveillance video, and the positions of the boats in the images are labeled in the form of rectangular frames. The sample library is divided into infrared and visible-light subsets. Within each subset a variety of fishing-vessel types are labeled, about 10 types in total.
Training of target detection networks
And training the fishing boat samples collected in the previous steps on the basis of the Yolov5 deep network to obtain a detection network.
Deployment of deep networks
The Yolov5 network is deployed on an NVIDIA GPU, using TensorRT as the inference foundation to achieve extreme inference speed on the hardware.
Automatic cruise of cradle head
Because the camera is installed on a pan-tilt, the whole water-surface region can be covered simply by rotating the pan-tilt, realizing complete monitoring. In this case the automatic cruise logic of the pan-tilt becomes critical. The logic must simultaneously satisfy two requirements: cruising automatically to free up manpower, and covering the full area without missing targets.
Setting monitoring key points
The pan-tilt is usually installed on one side of the river bank, and the viewing-angle range of the monitored area generally does not exceed 180 degrees, equivalent to 3-4 camera pictures (3-4 non-overlapping pictures that together just cover the whole monitored area), so several key monitoring points can be set. Each monitoring point corresponds to one orientation of the pan-tilt (azimuth angle, pitch angle, and lens field-of-view parameters). In the pitch direction, the angle range occupied by the river or lake in the field of view basically fits within one image, so the pitch angle only needs to be set to a fixed value.
Basic cruise mode
After the system starts the automatic monitoring mode, the cradle head will poll each key point: according to the sequence from left to right, staying at each key point position for a certain time (such as 1 minute) for automatic detection; then rotating to the next key point at a constant speed (the rotating time between two key points is within 10 seconds), and monitoring for 1 minute in the same way; then polling from right to left; thus, the cycle is repeated.
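The polling order described above can be sketched as a simple ping-pong cycle over the key points; the dwell time (about 1 minute) and rotation time (under 10 seconds) are left to the caller, and all names here are illustrative:

```python
import itertools

def cruise_order(key_points):
    """Infinite ping-pong order over the monitoring key points: left to
    right, then right to left, repeating (the basic cruise mode). Timing
    is handled by whoever drives the pan-tilt, not here."""
    cycle = list(key_points) + list(key_points)[-2:0:-1]  # forward + reversed middle
    return itertools.cycle(cycle)

order = cruise_order(["P1", "P2", "P3", "P4"])
first_eight = [next(order) for _ in range(8)]
print(first_eight)  # ['P1', 'P2', 'P3', 'P4', 'P3', 'P2', 'P1', 'P2']
```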
Cruise logic in marine situations
And if the fishing boat is detected in the view picture corresponding to the current key point through the target detection network, entering a tracking mode. By adjusting the azimuth angle of the camera in real time, the center of the visual field is ensured to coincide with the center of the target. Then in a new video frame, the detection of the target continues and the updating of the camera azimuth angle is maintained.
Consideration of non-ship area
To keep tracking ships that appear on the river surface while still attending to other areas of the river, so that ships elsewhere are not missed, the following method can be adopted:
after tracking and collecting evidence on the current target for 10 minutes (10 minutes is sufficient both for judging whether a fishing boat is acting illegally and for law-enforcement judgment), the pan-tilt is switched to the basic mode, i.e., it rotates to the monitoring key point nearest the current target and detects whether a target is present; if so, that target is tracked and evidence is collected; if not, it rotates to the next key point.
If no other target is left on the whole river surface except the currently tracked target, returning to the current target for tracking and evidence obtaining.
Automatic control of lens
According to the target detection, obtaining the target size, and correspondingly converting to obtain the adjusting parameter of the lens
From the result of the target detection (rectangular outer frame), the pixel width of the target is calculated, and the ratio thereof to the screen width is further calculated. Then, the amount of change of the lens parameter with respect to the current parameter when the target is adjusted to 80% of the screen width is calculated. The lens is controlled by the amount of change.
Adjusting the lens to enable the target (fishing boat) to show the full appearance in the picture to the maximum extent, improving the ratio of the fishing boat in the picture and reducing the ratio of the background area as much as possible;
if the size of the target is too large and exceeds the visual field range, the visual field should be enlarged to enclose the target.
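The conversion from detected target width to a lens adjustment can be sketched as a multiplicative zoom factor that brings the target to 80% of the picture width (the 80% figure is from the text); how this factor maps onto concrete lens parameters is camera-specific and not specified here, and the function name is an assumption:

```python
def zoom_adjustment(box_width_px, frame_width_px, target_fraction=0.8):
    """Compute the multiplicative zoom change that would bring the target
    to `target_fraction` of the picture width. A factor > 1 means zoom in,
    a factor < 1 means zoom out (enlarge the field of view)."""
    current_fraction = box_width_px / frame_width_px
    return target_fraction / current_fraction

print(zoom_adjustment(192, 1920))   # small, distant boat: zoom in (factor > 1)
print(zoom_adjustment(1920, 1920))  # boat fills the whole frame: zoom out (factor < 1)
```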
Background modeling, moving object extraction and false alarm object filtering
In rare cases the target detector raises a false alarm, so that a fishing boat is detected in a background area (e.g., a non-water area such as a hill or building), or a stationary background object on the river surface is falsely detected. Considering that the camera is fixed on the pan-tilt and the background content has certain regularity, the following strategy can be considered to reduce the false-alarm rate:
when the holder rotates to each key point position, the camera stays for a certain time. At this time, background modeling may be performed by extracting consecutive frames (for example, 100 frames) and using an average background method (multiple frames calculate an average value pixel by pixel, which is used as a pixel value of the background) to obtain a background map;
comparing the current image with the background image pixel by pixel to obtain pixels with larger difference as foreground pixels;
filtering small-area connected domains in the foreground image, and taking the remaining large-area connected domains as moving targets;
and (3) jointly analyzing the position of the moving target and the target position detected based on the yolov5 network, and if the detection result of the yolov5 is not on the moving target, the moving target is a false alarm target and should be filtered.
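The average-background modeling and small-component filtering steps above can be sketched as follows; the difference threshold and minimum area are illustrative values, and a plain flood fill stands in for a library connected-components routine:

```python
import numpy as np

def average_background(frames):
    """Average-background method: pixel-wise mean over consecutive frames
    (e.g. 100 frames captured while the pan-tilt dwells at a key point)."""
    return np.mean(np.stack(frames), axis=0)

def moving_targets(frame, background, diff_thresh=30, min_area=50):
    """Difference the current frame against the background, threshold to a
    binary foreground mask, and keep only connected components of large
    area as moving targets. Thresholds are illustrative assumptions."""
    fg = (np.abs(frame.astype(int) - background) > diff_thresh).astype(np.uint8)
    h, w = fg.shape
    seen = np.zeros_like(fg, dtype=bool)
    components = []
    for sy in range(h):
        for sx in range(w):
            if fg[sy, sx] and not seen[sy, sx]:
                stack, pixels = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:                      # 4-connected flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and fg[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:       # drop small-area noise
                    components.append(pixels)
    return components

bg = average_background([np.zeros((40, 40))] * 3)
frame = np.zeros((40, 40))
frame[5:15, 5:15] = 255   # a 100-pixel moving blob (kept)
frame[30, 30] = 255       # 1-pixel noise (filtered by min_area)
print(len(moving_targets(frame, bg)))  # 1
```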
Confirmation combined with multi-frame detection results
Since false alarms (a fishing boat mistakenly detected in a background area containing no boat) inevitably occur during target detection, albeit with low probability, issuing early warnings without screening them out would severely harm the user experience. The invention adopts the following method to reduce false alarms:
after the target is detected by the current frame, recording the position of the target, and carrying out matting according to the positioning result to obtain a fishing boat subgraph;
in subsequent continuous N frames (for example, N is 5), continuing target detection, and matting according to a detection result to obtain a fishing boat subgraph;
if at least 80% of the N frames contain targets, and all the corresponding sub-images can be matched to one another (two sub-images are compared pixel by pixel; if the total difference is smaller than the threshold, the match is considered successful), the detections are considered not to be false alarms and the detection result is stable;
for a stable target, the subsequent tracking, behavior analysis and evidence obtaining links can be carried out;
and if the target is unstable, filtering the detection result.
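A minimal sketch of this multi-frame confirmation; the patent compares the total pixel difference against a threshold, while this sketch uses the mean difference so the threshold is independent of sub-image size (an implementation choice), and all names and thresholds are illustrative:

```python
import numpy as np

def subgraphs_match(a, b, pixel_thresh=25):
    """Two sub-images match when their mean absolute pixel difference is
    below a threshold (threshold value illustrative)."""
    if a.shape != b.shape:
        return False
    return bool(np.mean(np.abs(a.astype(int) - b.astype(int))) < pixel_thresh)

def detection_is_stable(subgraphs, n_frames=5, min_present=0.8):
    """Stable when targets are present in at least 80% of N consecutive
    frames and every adjacent pair of sub-images matches. `subgraphs` is
    a list with None for frames in which detection found nothing."""
    present = [s for s in subgraphs if s is not None]
    if len(present) < min_present * n_frames:
        return False
    return all(subgraphs_match(a, b) for a, b in zip(present, present[1:]))

boat = np.full((8, 8), 120, dtype=np.uint8)
frames = [boat, boat + 3, None, boat + 1, boat]  # target seen in 4 of N=5 frames
print(detection_is_stable(frames))               # True: stable, proceed to tracking
```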
Continuous tracking of fishing vessel targets
The invention adopts the following strategies for tracking the fishing boat:
two adjacent frames are respectively cut to obtain sub-images according to target detection results;
directly carrying out pixel-by-pixel comparison on the two sub-images, and calculating the total pixel difference; and if the total pixel difference is smaller than the threshold value, the two sub-images are considered to correspond to the same ship. Taking the central point position of the latest sub-graph as the current position of the fishing boat;
if there is no match, the detection result may have a positional deviation; in this case, the best matching position is searched for again by applying small offsets around the latest target center point;
if the fishing boat still cannot be matched, its appearance may have changed due to turning, occlusion, and the like; the matching method is then changed, specifically: still comparing pixel by pixel, observe whether a large-area difference region exists in the difference image; if so, the ship is occluded, and the new target position is determined by matching against the remaining unoccluded area; if no occlusion region exists, the fishing boat has likely deformed, and matching proceeds according to the following method;
and extracting respective depth features corresponding to the two sub-images from the depth network, and then calculating the Euclidean distance between the two sub-images. If the Euclidean distance is smaller than the threshold value, the same ship is considered.
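The fallback deep-feature match can be sketched as a Euclidean-distance test between feature vectors; how the features are produced (e.g., an embedding from an intermediate layer of the detection network) is not fixed by the text, so the vectors, names, and threshold below are all illustrative assumptions:

```python
import numpy as np

def l2_distance(feat_a, feat_b):
    """Euclidean distance between the depth-feature vectors of two
    sub-images, assumed to be 1-D float vectors of equal length."""
    return float(np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b)))

def same_ship(feat_a, feat_b, dist_thresh=0.5):
    """The two crops are judged to be the same ship when the feature
    distance is below a threshold (value illustrative)."""
    return l2_distance(feat_a, feat_b) < dist_thresh

f1 = np.array([0.1, 0.9, 0.3])
f2 = np.array([0.12, 0.88, 0.31])  # slightly deformed view of the same boat
f3 = np.array([0.9, 0.1, 0.7])     # a different vessel
print(same_ship(f1, f2), same_ship(f1, f3))  # True False
```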
ROI region setting and border crossing detection
Fishing vessels appear on the water surface, so the monitoring range is limited to the water surface and its surroundings (the river bank); target detection need not be performed over the entire frame. To reduce interference from irrelevant regions, an ROI can be set. The ROI represents a region of interest, and the setting and usage method is as follows:
each key point corresponds to one ROI image; (each monitoring site, needs to be mapped specifically)
At the current key point, a user intercepts the current video picture by using a marking tool, and then draws the ROI in a polygon mode by using a mouse. The interior of the ROI is a detection area, and the exterior does not need to be detected;
calling a corresponding ROI image according to the position of a key point of the camera when the camera navigates;
when the camera tracks a target to cause the azimuth angle of the pan-tilt to deviate from a key point, a new ROI area needs to be calculated according to the offset and two adjacent ROI images (a plurality of ROI images can be regarded as a complete large image, and the ROI corresponding to the current azimuth angle can be flexibly intercepted according to the azimuth angle)
When the lens of the camera is zoomed out and zoomed in, the ROI is correspondingly enlarged and reduced according to the parameters of the lens
Once the fishing boat is detected to enter the ROI, the fishing boat can be judged to be out-of-range behavior, and corresponding subsequent operation is carried out.
Trace-based behavior analysis
After the position of the target and the motion trajectory within a period of time are detected and tracked by the foregoing steps, the trajectory needs to be subjected to behavior analysis to further determine whether the trajectory is suspicious. The specific method comprises the following steps:
if the target position does not move within the observation time, the target position belongs to a static ship, early warning can be given, and the target position is not temporarily used as a fishing ship for illegal fishing;
if the track of the target is straight and passes through the monitoring area at a constant speed, the ship belongs to a target in a normal form and cannot be used as an illegal fishing ship;
if the track of the target is a zigzag polyline, the movement speed is uneven (the target stops for a while and then moves again), and the direction changes frequently, it is very likely an illegal fishing boat; early warning and evidence collection are needed at this time.
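The three trajectory rules above can be sketched as a rough classifier over per-frame positions; all thresholds (stillness radius, speed-variation ratio, turn angle) are illustrative assumptions, not values from the patent:

```python
import math

def classify_track(points, still_eps=2.0, speed_cv_thresh=0.3, angle_thresh=0.5):
    """Rough trajectory classifier: 'static' (no movement), 'normal'
    (straight, roughly constant-speed pass), 'suspicious' (zigzag path,
    uneven speed, changing direction). `points` are (x, y) per frame."""
    steps = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(points, points[1:])]
    speeds = [math.hypot(dx, dy) for dx, dy in steps]
    if max(speeds, default=0.0) < still_eps:
        return "static"
    # coefficient of variation of speed: large when the boat stops and goes
    mean_v = sum(speeds) / len(speeds)
    cv = (sum((v - mean_v) ** 2 for v in speeds) / len(speeds)) ** 0.5 / mean_v
    # heading changes between consecutive steps
    headings = [math.atan2(dy, dx) for dx, dy in steps if math.hypot(dx, dy) > 0]
    turns = [abs(b - a) for a, b in zip(headings, headings[1:])]
    if cv > speed_cv_thresh or max(turns, default=0.0) > angle_thresh:
        return "suspicious"
    return "normal"

straight = [(i * 10.0, 0.0) for i in range(6)]          # constant-speed straight pass
zigzag = [(0, 0), (10, 0), (12, 8), (12, 9), (2, 12)]   # uneven speed, turns
print(classify_track(straight), classify_track(zigzag))  # normal suspicious
```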
Automatic evidence obtaining
After the detection, tracking and behavioral analysis of the fishing vessel is completed, evidence must be taken once a suspect vessel is determined. Evidence collection is a key step of law enforcement and is indispensable. The system adopts the following method to obtain evidence:
recording the real-time picture at the moment when the fishing boat is detected;
after a plurality of frames, if the fishing behavior is not found through analysis, the video recording is cancelled;
calculating the lens adjustment parameters according to the detection result of the fishing boat (the size of the outer frame), so that the boat appears in full view in the picture; if the boat exceeds the whole visual field, the lens is zoomed out so that the boat appears completely; if the boat is too small to show details clearly, the lens is zoomed in so that the boat fills the picture;
adjusting the azimuth angle of the camera in real time according to the ship tracking result to enable the picture center to coincide with the target center;
recording each frame of image of the movable picture of the fishing boat within a certain time as evidence obtaining content.
Detection, tracking and forensics of pedestrian targets
Besides fishing boats, pedestrians are also targets of high concern. Pedestrian detection likewise uses a yolov5 deep network for training, and tracking uses the same method as for fishing boats. Within the monitored area, video evidence is recorded as soon as a suspicious pedestrian is found. As long as either target (a fishing boat or a pedestrian) appears, the behavior can be judged as fish stealing.
Infrared light and visible light mode switching
Because the visible-light camera cannot work at night, a switching method is set for the two working modes:
In the daytime, from 4 am to 8 pm, the visible-light working mode is used to perform target detection, tracking, and behavior analysis on the image from the visible-light camera.
At night, from 5 pm to 7 am, the infrared working mode is used to perform target detection, tracking, and behavior analysis on the image from the infrared camera.
During the periods where the two modes overlap, namely 4 to 7 am and 5 to 8 pm, both working modes run simultaneously; as soon as either channel detects an illegally fishing boat, evidence collection and early warning are performed.
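The schedule can be sketched as a lookup from the local hour to the set of active channels; the function name is an assumption, while the hour boundaries follow the text:

```python
def active_modes(hour):
    """Which camera channels run at a given local hour: visible light from
    4 am to 8 pm, infrared from 5 pm to 7 am, and both during the overlap
    periods (4 to 7 am and 5 to 8 pm)."""
    modes = set()
    if 4 <= hour < 20:          # 4 am .. 8 pm
        modes.add("visible")
    if hour >= 17 or hour < 7:  # 5 pm .. 7 am
        modes.add("infrared")
    return modes

print(active_modes(12))              # {'visible'}
print(active_modes(2))               # {'infrared'}
print(sorted(active_modes(18)))      # ['infrared', 'visible'], overlap period
```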
Different treatment methods for navigable river channel and non-navigable river channel
In protecting fishery resources and prohibiting fish stealing, the monitoring range is not limited to deeper river channels; it also includes shallower channels such as streams in remote mountains. In non-navigable channels such as streams, the detection criterion for fish stealing can be looser: by setting the water-surface area, evidence collection and early warning are triggered as soon as a pedestrian target appears in that area (and its surroundings, such as the river bank), because under normal circumstances very few pedestrians are present around these waters.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A video-based illegal fishing automatic detection method is characterized by comprising the following steps:
step S1: intercepting an image containing a suspicious target from an actually shot monitoring video, and marking the position of the suspicious target in the image in a rectangular frame form;
step S2: preprocessing the picture marked with the suspicious target to obtain a preprocessed picture;
step S3: training a target detection network by using the preprocessed picture marked with the suspicious target to obtain the trained target detection network;
step S4: extracting continuous multiframes based on the monitoring video, and performing background modeling by using an average background method to obtain a background image;
step S5: monitoring an acquired current picture in real time based on a camera, comparing the acquired current picture with a background picture pixel by pixel, and when the difference meets a preset condition, taking the pixel of which the difference meets the preset condition as a foreground pixel;
step S6: filtering connected domains which meet preset requirements in the foreground image, taking the remaining connected domains as moving targets, and acquiring the positions of the moving targets;
step S7: jointly analyzing the position of the obtained moving target and the position of a target detected based on the trained target detection network, and filtering the target if the detection result of the trained target detection network does not belong to the suspicious target; when the detection result of the trained target detection network belongs to a suspicious target, acquiring multi-frame pictures based on the camera, repeatedly triggering the steps S5 to S7 until the number of times of repetition reaches a preset value, and determining the detection result;
step S8: and tracking the detection target, analyzing the tracked track, and determining that the detection target track meets the fishing characteristics and belongs to the fishing behavior when the detection target track meets the preset requirement.
2. The automatic video-based illegal fishing detection method according to claim 1, wherein the step S1 employs: carrying out actual shooting through a visible light camera and an infrared camera to obtain a monitoring video of the actual shooting;
under the daytime condition, carrying out actual shooting through a visible light camera, obtaining a monitoring video of the actual shooting, and obtaining a shot suspicious target;
under the night condition, the infrared camera is used for actually shooting, the actually shot monitoring video is obtained, and the shot suspicious target is obtained.
3. The automatic video-based illegal fishing detection method according to claim 1, wherein the step S2 employs:
step S2.1: setting monitoring key points, wherein each key point corresponds to an ROI (region of interest) image which is a target detection area;
step S2.2: intercepting a current video picture at a current key point, and drawing an ROI image in a polygonal mode by using an annotation tool;
step S2.3: when the lens of the camera is zoomed out or zoomed in, the corresponding ROI image is amplified or reduced according to the lens parameters;
step S2.4: when the camera carries out target tracking and causes the azimuth angle of the holder to deviate from the key point, calculating a new ROI (region of interest) according to the offset and two adjacent ROI images;
step S2.5: and when the suspicious target is detected to enter the ROI image, judging that the suspicious target is out-of-range behavior.
4. The video-based illegal fishing automatic detection method according to claim 1, wherein the target detection network adopts a Yolov5 deep network.
5. The automatic video-based illegal fishing detection method according to claim 1, wherein the step S7 employs:
step S7.1: when the current frame detects a suspicious target, recording the position of the current suspicious target, and matting according to the positioning result of the suspicious target to obtain a suspicious target subgraph;
step S7.2: and (3) repeatedly triggering the step S7.1 in subsequent continuous N frames, and when at least a preset number of frames have suspicious targets and all corresponding suspicious target subgraphs can be subjected to target matching, considering that the current detection result is stable, and performing tracking, behavior analysis and evidence obtaining on the stable targets.
6. The video-based illegal fishing automatic detection method according to claim 5, characterized in that the behavior analysis employs:
when the target position does not move within the observation time, the target belongs to a static suspicious target, and early warning is given, but the target is not temporarily used as an illegal fishing target;
when the track of the target is straight and passes through the monitoring area at a constant speed, the current ship belongs to the target in a normal form and cannot be used as an illegal fishing target;
when the track of the target is a broken line, the motion speed is not uniform, and the direction is variable, early warning and evidence obtaining are needed.
7. The video-based illegal fishing automatic detection method according to claim 5, characterized in that the forensics adopts:
step S9: recording the real-time picture at the moment when the target is detected;
step S10: after a plurality of frames, behavior analysis finds that the behavior is not fish stealing behavior, and video recording is cancelled;
step S11: calculating to obtain an adjustment parameter of the lens according to the detection result of the suspicious target, so that the target is shown in a full view in the picture; if the size of the target exceeds the whole visual field, the lens needs to be zoomed out, and the target can be ensured to completely appear; if the size of the target is too small to make the details unclear, the lens needs to be pulled close to make the target fully fill the picture;
step S12: adjusting the azimuth angle of the camera in real time according to the target tracking result to enable the center of the picture to be coincident with the center of the target;
step S13: recording each frame of image of the moving picture of the target in a certain time as evidence-obtaining content.
8. An automatic illegal fishing detection system based on videos is characterized by comprising:
module M1: intercepting an image containing a suspicious target from an actually shot monitoring video, and marking the position of the suspicious target in the image in a rectangular frame form;
module M2: preprocessing the picture marked with the suspicious target to obtain a preprocessed picture;
module M3: training a target detection network by using the preprocessed picture marked with the suspicious target to obtain the trained target detection network;
module M4: extracting continuous multiframes based on the monitoring video, and performing background modeling by using an average background method to obtain a background image;
module M5: monitoring an acquired current picture in real time based on a camera, comparing the acquired current picture with a background picture pixel by pixel, and when the difference meets a preset condition, taking the pixel of which the difference meets the preset condition as a foreground pixel;
module M6: filtering connected domains which meet preset requirements in the foreground image, taking the remaining connected domains as moving targets, and acquiring the positions of the moving targets;
module M7: jointly analyzing the position of the obtained moving target and the position of a target detected based on the trained target detection network, and filtering the target if the detection result of the trained target detection network does not belong to the suspicious target; when the detection result of the trained target detection network belongs to a suspicious target, acquiring multi-frame pictures based on the camera, repeatedly triggering the module M5 to the module M7 until the repetition times reach a preset value, and determining the detection result;
module M8: and tracking the detection target, analyzing the tracked track, and determining that the detection target track meets the fishing characteristics and belongs to the fishing behavior when the detection target track meets the preset requirement.
9. The video-based illegal fishing automatic detection system according to claim 8, wherein said module M1 employs: carrying out actual shooting through a visible light camera and an infrared camera to obtain a monitoring video of the actual shooting;
under the daytime condition, carrying out actual shooting through a visible light camera, obtaining a monitoring video of the actual shooting, and obtaining a shot suspicious target;
under the night condition, the infrared camera is used for actually shooting, the actually shot monitoring video is obtained, and the shot suspicious target is obtained.
10. The video-based illegal fishing automatic detection system according to claim 8, wherein said module M2 employs:
module M2.1: setting monitoring key points, wherein each key point corresponds to an ROI (region of interest) image which is a target detection area;
module M2.2: intercepting a current video picture at a current key point, and drawing an ROI image in a polygonal mode by using an annotation tool;
module M2.3: when the lens of the camera is zoomed out or zoomed in, the corresponding ROI image is amplified or reduced according to the lens parameters;
module M2.4: when the camera carries out target tracking and causes the azimuth angle of the holder to deviate from the key point, calculating a new ROI (region of interest) according to the offset and two adjacent ROI images;
module M2.5: when the suspicious target is detected to enter the ROI image, judging that the suspicious target is out-of-range behavior;
the module M7 employs:
module M7.1: when the current frame detects a suspicious target, recording the position of the current suspicious target, and matting according to the positioning result of the suspicious target to obtain a suspicious target subgraph;
module M7.2: and in subsequent continuous N frames, repeatedly triggering the module M7.1, and when at least a preset number of frames have suspicious targets and all corresponding suspicious target subgraphs can be subjected to target matching, considering that the current detection result is stable, and performing tracking, behavior analysis and evidence obtaining on the stable targets.
CN202111328940.XA 2021-11-10 2021-11-10 Video-based illegal fishing automatic detection method and system Pending CN113936029A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111328940.XA CN113936029A (en) 2021-11-10 2021-11-10 Video-based illegal fishing automatic detection method and system


Publications (1)

Publication Number Publication Date
CN113936029A true CN113936029A (en) 2022-01-14

Family

ID=79286421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111328940.XA Pending CN113936029A (en) 2021-11-10 2021-11-10 Video-based illegal fishing automatic detection method and system

Country Status (1)

Country Link
CN (1) CN113936029A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115051990A * 2022-06-28 2022-09-13 慧之安信息技术股份有限公司 Subway station monitoring method based on edge calculation
CN114972740A * 2022-07-29 2022-08-30 上海鹰觉科技有限公司 Automatic ship sample collection method and system
CN116309729A * 2023-02-20 2023-06-23 珠海视熙科技有限公司 Target tracking method, device, terminal, system and readable storage medium
CN116743970A * 2023-08-14 2023-09-12 安徽塔联智能科技有限责任公司 Intelligent management platform with video AI early warning analysis
CN116743970B * 2023-08-14 2023-11-21 安徽塔联智能科技有限责任公司 Intelligent management platform with video AI early warning analysis

Similar Documents

Publication Publication Date Title
CN113936029A (en) Video-based illegal fishing automatic detection method and system
CN111967393B (en) Safety helmet wearing detection method based on improved YOLOv4
US9412027B2 (en) Detecting anamolous sea-surface oil based on a synthetic discriminant signal and learned patterns of behavior
Cutter et al. Automated detection of rockfish in unconstrained underwater videos using haar cascades and a new image dataset: Labeled fishes in the wild
Kang et al. Real-time video tracking using PTZ cameras
CN108806334A (en) A kind of intelligent ship personal identification method based on image
CN109409283A (en) A kind of method, system and the storage medium of surface vessel tracking and monitoring
US20150104064A1 (en) Method and system for detection of foreign objects in maritime environments
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
CN113507577A (en) Target object detection method, device, equipment and storage medium
CN113239854B (en) Ship identity recognition method and system based on deep learning
CN112287823A (en) Facial mask identification method based on video monitoring
CN115346155A (en) Ship image track extraction method for visual feature discontinuous interference
CN115240086A (en) Unmanned aerial vehicle-based river channel ship detection method, device, equipment and storage medium
Sharma et al. SharkSpotter: Shark detection with drones for human safety and environmental protection
Fahn et al. Abnormal maritime activity detection in satellite image sequences using trajectory features
CN110942577A (en) Machine vision-based river sand stealing monitoring system and method
CN112307943B (en) Water area man-boat target detection method, system, terminal and medium
CN116886874A (en) Ecological garden security monitoring and early warning data acquisition method and system
CN114972740A (en) Automatic ship sample collection method and system
CN112487854A (en) Application method for processing frontier defense video
CN102244776B (en) Automatic tracking laser thermal imaging monitoring system
Kim et al. Vessel tracking vision system using a combination of Kalman filter, Bayesian classification, and adaptive tracking algorithm
KR102638384B1 (en) 3-Dimensional moving object detecting apparatus using stationary lidar and method thereof
Ju et al. An improved mixture Gaussian models to detect moving object under real-time complex background

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination