CN116778673A - Water area safety monitoring method, system, terminal and storage medium - Google Patents

Water area safety monitoring method, system, terminal and storage medium

Info

Publication number
CN116778673A
CN116778673A (application number CN202311034804.9A)
Authority
CN
China
Prior art keywords
person object
shortest distance
water
person
water area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311034804.9A
Other languages
Chinese (zh)
Inventor
崔海东
陈翔
苗发强
何磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongjian Technology Co ltd
Original Assignee
Tongjian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongjian Technology Co ltd filed Critical Tongjian Technology Co ltd
Priority to CN202311034804.9A
Publication of CN116778673A
Legal status: Pending

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/08 Alarms for ensuring the safety of persons responsive to the presence of persons in a body of water, e.g. a swimming pool; responsive to an abnormal condition of a body of water
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608 Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and/or velocity to predict its new position
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19613 Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639 Details of the system layout
    • G08B13/19645 Multiple cameras, each having view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of deep learning, and in particular provides a water area safety monitoring method, system, terminal and storage medium, wherein the method comprises the following steps: acquiring monitoring video stream data; identifying a person object and position information from the monitoring video stream data by using a target detection model and a target tracking model; intercepting a scene picture where the person object is located based on the person object and the position information, and marking the water area in the scene picture; predicting the action track of the person object by using a behavior detection model; and calculating the distance between the person object and the water area based on the position information of the person object, and generating an alarm event when the distance reaches a preset threshold value and the action track faces the water area. By combining the target detection technology, the target tracking technology and the behavior detection technology to perform person identification, distance-to-water determination and action-track determination, the invention raises an alarm only after integrating multiple factors, which improves the alarm accuracy.

Description

Water area safety monitoring method, system, terminal and storage medium
Technical Field
The invention belongs to the technical field of deep learning, and particularly relates to a water area safety monitoring method, a system, a terminal and a storage medium.
Background
Supervision of dangerous waters such as reservoirs, river and lake shorelines and deep pools is insufficient, and effective safety warning facilities are lacking. As a result, children, students and other members of the public cannot be warned in time when they approach dangerous waters, personnel control is difficult, and rescue measures cannot be taken immediately when an accident occurs.
In the traditional approach, cameras are installed around dangerous waters and the surveillance video is stored for a period of time so that it can be retrieved and analysed afterwards. This approach lacks real-time analysis and real-time alarm capability and does little to reduce potential safety hazards. To improve management quality, some systems are equipped with image recognition models, such as neural network models and target detection models, perform person recognition on the surveillance video through these models, and immediately generate an alarm once a person target is detected.
Such existing safety monitoring methods can generate alarms, but the false alarm rate is high: an alarm is raised even when people merely pass by nearby.
Disclosure of Invention
Aiming at the problem of the high false alarm rate in the prior art, the invention provides a water area safety monitoring method, system, terminal and storage medium to solve this technical problem.
In a first aspect, the invention provides a water area safety monitoring method, comprising the steps of:
acquiring monitoring video stream data;
identifying a person object and position information from the surveillance video stream data by using a target detection model and a target tracking model;
intercepting a scene picture where the person object is located based on the person object and the position information, and marking a water area in the scene picture;
predicting the action track of the person object by using a behavior detection model;
and calculating the distance between the person object and the water area based on the position information of the person object, and generating an alarm event when the distance reaches a preset threshold value and the action track faces the water area.
In an alternative embodiment, identifying person objects and location information from surveillance video stream data using a target detection model and a target tracking model includes:
acquiring video frames from the monitoring video stream data;
detecting a person object in the video frame by using the target detection model;
extracting features of the detected person object from the video frame, the features including appearance features and motion features;
calculating the matching degree between the person objects of the previous frame and the current frame, and assigning an ID to the person object after confirming that the matching degree reaches a set threshold;
a coordinate system is created for the video frame and coordinates of the person object in the coordinate system are acquired.
In an alternative embodiment, intercepting a scene picture where the person object is located based on the person object and the position information, and marking a water area in the scene picture, includes:
presetting a range parameter of the intercepted picture;
taking the coordinates of the person object as the center, and intercepting a scene picture from the video frame based on the range parameter;
and identifying the water area in the scene picture, and extracting the contour line of the water area.
In an alternative embodiment, calculating a distance between the person object and the water area based on the position information of the person object, and generating an alarm event when the distance reaches a preset threshold and the action track faces the water area, includes:
calculating a first shortest distance between the person object and the contour line of the water area based on the position information of the person object;
extracting the latest predicted position from the action track of the person object, and calculating a second shortest distance between the predicted position and the contour line of the water area;
judging whether the first shortest distance reaches a preset threshold value or not:
if yes, comparing the first shortest distance with the second shortest distance, and if the first shortest distance is larger than the second shortest distance, generating an alarm event.
In a second aspect, the invention provides a water area safety monitoring system, comprising:
the video acquisition module is used for acquiring monitoring video stream data;
the target recognition module is used for recognizing the person object and the position information from the monitoring video stream data by utilizing the target detection model and the target tracking model;
the target intercepting module is used for intercepting a scene picture where the person object is located based on the person object and the position information and marking a water area in the scene picture;
the track prediction module is used for predicting the action track of the person object by utilizing the behavior detection model;
and the alarm generation module is used for calculating the distance between the person object and the water area based on the position information of the person object, and generating an alarm event when the distance reaches a preset threshold value and the action track faces the water area.
In an alternative embodiment, the target recognition module includes:
the video intercepting unit is used for acquiring video frames from the monitoring video stream data;
a person detection unit for detecting a person object in the video frame using the target detection model;
a feature extraction unit for extracting features of the detected person object from the video frame, the features including appearance features and motion features;
an ID allocation unit for calculating the feature matching degree between the person objects of the previous frame and the current frame, and assigning an ID to the person object after confirming that the matching degree reaches a set threshold;
and the target positioning unit is used for creating a coordinate system for the video frame and acquiring the coordinates of the person object in the coordinate system.
In an alternative embodiment, the target intercepting module includes:
the range setting unit is used for presetting range parameters of the intercepted pictures;
the picture intercepting unit is used for intercepting a scene picture from the video frame based on the range parameter by taking the coordinates of the person object as the center;
and the region identification unit is used for identifying the water area in the scene picture and extracting the contour line of the water area.
In an alternative embodiment, the alarm generation module includes:
a first calculation unit for calculating a first shortest distance between the person object and the contour line of the water area based on the position information of the person object;
a second calculation unit for extracting the latest predicted position from the action track of the person object, and calculating a second shortest distance between the predicted position and the contour line of the water area;
the threshold judging unit is used for judging whether the first shortest distance reaches a preset threshold or not;
and the distance comparison unit is used for comparing the first shortest distance with the second shortest distance if the first shortest distance reaches a preset threshold value, and generating an alarm event if the first shortest distance is larger than the second shortest distance.
In a third aspect, a terminal is provided, including:
a processor, a memory, wherein,
the memory is used for storing a computer program,
the processor is configured to call and run the computer program from the memory, so that the terminal performs the water area safety monitoring method described above.
In a fourth aspect, there is provided a computer storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of the above aspects.
The water area safety monitoring method, system, terminal and storage medium provided by the invention have the beneficial effect that person identification, distance-to-water determination and action-track determination are carried out by combining the target detection technology, the target tracking technology and the behavior detection technology, so that an alarm is raised only after multiple factors are integrated, which improves the alarm accuracy.
In addition, the invention has a reliable design principle, a simple structure and a very wide application prospect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a method of one embodiment of the invention.
Fig. 2 is another schematic flow chart of a method of one embodiment of the invention.
FIG. 3 is a schematic block diagram of a system of one embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solution of the present invention better understood by those skilled in the art, the technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The following explains key terms appearing in the present invention.
Mainstream target tracking algorithms follow the tracking-by-detection strategy, i.e. they track targets based on the results of a target detector. DeepSORT applies this strategy: when tracking a crowd, the number in the upper-left corner of each bounding box is the unique ID that identifies a particular person.
The Hungarian algorithm determines whether an object in the current frame is the same object as one in the previous frame, by solving the optimal assignment between the two sets of objects.
The Kalman filter predicts the position of the target at the current time based on its position at the previous time, and can estimate the target position more accurately than the measurement alone (in target tracking, the "sensor" is the target detector, such as YOLO).
DeepSORT is a target tracking algorithm improved from the SORT algorithm. SORT is a relatively coarse tracking algorithm that easily loses a target's ID when the target is occluded. DeepSORT adds cascade matching (Matching Cascade) and confirmation of new tracks on top of SORT. Tracks are divided into a confirmed state (Confirmed) and an unconfirmed state (Unconfirmed), and newly created tracks are unconfirmed: an unconfirmed track must match detections a certain number of consecutive times (3 by default) before it is converted to confirmed, and a confirmed track must fail to match detections a certain number of consecutive times (30 by default) before it is deleted.
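As a reference for the track life cycle described above, the following minimal Python sketch models the Unconfirmed/Confirmed states with the default counts mentioned (3 consecutive matches to confirm, 30 consecutive misses to delete); the class and method names are illustrative and are not taken from any particular DeepSORT implementation.

class Track:
    """Illustrative DeepSORT-style track life cycle (Unconfirmed -> Confirmed -> Deleted)."""

    def __init__(self, track_id, n_init=3, max_age=30):
        self.track_id = track_id
        self.state = "Unconfirmed"
        self.hits = 0        # consecutive frames in which the track matched a detection
        self.misses = 0      # consecutive frames without a matching detection
        self.n_init = n_init
        self.max_age = max_age

    def mark_matched(self):
        self.hits += 1
        self.misses = 0
        if self.state == "Unconfirmed" and self.hits >= self.n_init:
            self.state = "Confirmed"

    def mark_missed(self):
        self.hits = 0
        self.misses += 1
        if self.state == "Unconfirmed" or self.misses > self.max_age:
            self.state = "Deleted"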
YOWO is a behavior detection model whose full name is You Only Watch Once; its most notable characteristic is real-time operation.
YOLOv5 is a target detection algorithm; the official code provides four versions of the detection network, namely the YOLOv5s, YOLOv5m, YOLOv5l and YOLOv5x models. YOLOv5 mainly consists of four parts: the input, the Backbone, the Neck and the Prediction head. Among them: (1) Backbone: a convolutional neural network that aggregates image information at different image granularities and forms image features. (2) Neck: a series of network layers that mix and combine the image features and pass them to the prediction layer. (3) Head (Prediction): predicts on the image features to generate bounding boxes and predicted categories.
HSV (Hue, Saturation, Value) is a color space created by A. R. Smith in 1978 based on the visual properties of colors, also called the hexagonal pyramid (hexcone) model. The HSV color model refers to the subset of visible light in the three-dimensional H, S, V color space that contains all colors of a certain gamut. Each color is represented by hue (H), saturation (S) and value (V). Hue H carries the color information, i.e. the position of the corresponding spectral color; it is expressed as an angle in the range 0°-360°: starting from red at 0°, green is at 120° and blue is at 240°, and their complementary colors are yellow at 60°, cyan at 180° and purple at 300°. Saturation S ranges from 0.0 to 1.0. Value (brightness) V ranges from 0.0 (black) to 1.0 (white).
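For completeness, a small OpenCV example of converting a frame to HSV; note that for 8-bit images OpenCV rescales the theoretical ranges given above (H 0°-360°, S and V 0.0-1.0) to H 0-179 and S, V 0-255, so thresholds derived from the definition must be rescaled accordingly.

import cv2

img_bgr = cv2.imread("frame.jpg")                    # OpenCV reads images in BGR order
img_hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)   # convert to the HSV color space
h, s, v = cv2.split(img_hsv)                         # 8-bit images: H in 0-179, S and V in 0-255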
The water area safety monitoring method provided by the embodiments of the invention is executed by a computer device, and correspondingly, the water area safety monitoring system runs on the computer device.
FIG. 1 is a schematic flow chart of a method of one embodiment of the invention. The execution body of fig. 1 may be a water area safety monitoring system. The order of the steps in the flow chart may be changed and some may be omitted according to different needs.
As shown in fig. 1, the method includes:
step 110, obtaining monitoring video stream data;
step 120, identifying the person object and the position information from the monitoring video stream data by using the target detection model and the target tracking model;
step 130, intercepting a scene picture where the person object is located based on the person object and the position information, and marking the water area in the scene picture;
step 140, predicting the action track of the person object by using the behavior detection model;
and step 150, calculating the distance between the person object and the water area based on the position information of the person object, and generating an alarm event when the distance reaches a preset threshold value and the action track faces the water area.
To facilitate understanding of the present invention, the water area safety monitoring method provided by the invention is further described below in terms of its principle, in combination with the process of monitoring a water area in an embodiment.
Specifically, referring to fig. 2, the water area safety monitoring method includes:
s1, acquiring monitoring video stream data.
The camera video stream is pulled. The network environment of the camera video stream is checked to detect whether the camera is online and whether the video stream data is in a pullable state.
S2, identifying the person object and the position information from the monitoring video stream data by utilizing the target detection model and the target tracking model.
Video frames are acquired from the monitoring video stream data; a person object is detected in each video frame with the target detection model; features of the detected person object, including appearance features and motion features, are extracted from the video frame; the matching degree between person objects in the previous and current frames is calculated, and an ID is assigned to a person object once the matching degree reaches a set threshold; a coordinate system is created for the video frame, and the coordinates of the person object in this coordinate system are acquired.
Specifically, the video streams are decoded for inference, multiple cameras being decoded at the same time and fed into the inference model as one batch; a YOLOv5 target detection model and a DeepSORT target tracking model are used to detect the persons' behavior information and position information.
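A minimal single-camera sketch of step S2, assuming the publicly available ultralytics/yolov5 hub model and the deep-sort-realtime package; the stream URL, the confidence threshold and the choice of the box's bottom centre as the person's position are illustrative assumptions, and a deployed system would batch several decoded streams as described above.

import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

detector = torch.hub.load("ultralytics/yolov5", "yolov5s")   # pretrained COCO weights
tracker = DeepSort(max_age=30, n_init=3)

cap = cv2.VideoCapture("rtsp://camera/stream")               # pulled camera video stream (placeholder URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = detector(frame[:, :, ::-1])                    # BGR -> RGB for the detector
    detections = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        if int(cls) == 0 and conf > 0.4:                     # COCO class 0 = person
            detections.append(([x1, y1, x2 - x1, y2 - y1], conf, "person"))
    tracks = tracker.update_tracks(detections, frame=frame)  # appearance + motion matching, ID assignment
    for track in tracks:
        if not track.is_confirmed():
            continue
        x1, y1, x2, y2 = track.to_ltrb()                     # box in the frame's pixel coordinate system
        person_xy = ((x1 + x2) / 2.0, y2)                    # e.g. bottom centre as the person's position
        # track.track_id and person_xy feed steps S3-S5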
S3, intercepting a scene picture where the person object is located based on the person object and the position information, and marking a water area in the scene picture.
A range parameter of the picture is preset; taking the coordinates of the person object as the center, a scene picture is intercepted from the video frame based on the range parameter; the water area in the scene picture is identified, and the contour line of the water area is extracted.
Specifically, a rectangular range may be set, and the scene picture is intercepted from the video frame with the coordinates of the person object as the center. The setting of the rectangular range may refer to the set distance threshold, i.e. the threshold on the distance between the person object and the contour line of the water area.
In one embodiment of the invention, the contour line of the water area can be extracted by extracting the HSV parameters of the picture and taking, as the water area, the color blocks whose HSV parameters differ from those of a standard picture by less than a preset threshold, where the standard picture is a water-surface image captured by the camera. The edge contour line of the water area is then extracted with a contour extraction tool, for example the OpenCV library.
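A possible OpenCV sketch of this step: cropping a rectangular scene picture around the person's coordinates and extracting the water contour by HSV thresholding against a reference water-surface image. The crop size and HSV tolerance values are illustrative parameters rather than values prescribed by the method.

import cv2
import numpy as np

def crop_scene(frame, person_xy, half_w=300, half_h=200):
    """Cut a rectangular scene picture centred on the person's coordinates."""
    h, w = frame.shape[:2]
    cx, cy = int(person_xy[0]), int(person_xy[1])
    x1, y1 = max(cx - half_w, 0), max(cy - half_h, 0)
    x2, y2 = min(cx + half_w, w), min(cy + half_h, h)
    return frame[y1:y2, x1:x2], (x1, y1)       # offset to map crop coordinates back to the frame

def water_contour(scene_bgr, reference_bgr, tol=(10, 60, 60)):
    """Mask pixels whose HSV values lie within tol of the reference water image's mean HSV."""
    scene_hsv = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2HSV)
    ref_hsv = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2HSV)
    mean = ref_hsv.reshape(-1, 3).mean(axis=0)
    lower = np.clip(mean - np.array(tol), 0, 255).astype(np.uint8)
    upper = np.clip(mean + np.array(tol), 0, 255).astype(np.uint8)
    mask = cv2.inRange(scene_hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None   # largest water-like region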
S4, predicting the action track of the person object by using the behavior detection model.
Scene information around the person is cropped out according to the acquired position information of the person, and a YOWO behavior detection model is used to predict the person's next action and position from the scene information and the person's behavior information.
In this embodiment, the methods for constructing and training the YOLOv5 target detection model, the DeepSORT target tracking model and the YOWO behavior detection model are conventional, and the same data set may be used.
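Because the exact inference interface of a YOWO model depends on the implementation used, the following sketch only illustrates the data flow of step S4 with a hypothetical behavior_model callable that returns an action label and a displacement; it is not the real YOWO API.

from collections import deque

CLIP_LEN = 16          # number of recent person-centred crops fed to the behavior model

def predict_next_position(behavior_model, clip_buffer, track_history):
    """Hypothetical wrapper: behavior_model is assumed to map a short clip of scene
    crops to an action label and a displacement (dx, dy) for the tracked person."""
    clip = list(clip_buffer)[-CLIP_LEN:]
    action, (dx, dy) = behavior_model(clip)      # assumed model interface, not YOWO's real API
    last_x, last_y = track_history[-1]           # most recent tracked position
    return action, (last_x + dx, last_y + dy)    # latest predicted position used in step S5

# per-track buffers maintained while the track is alive
clip_buffer = deque(maxlen=CLIP_LEN)             # scene crops from step S3
track_history = []                               # (x, y) positions from step S2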
S5, calculating the distance between the person object and the water area based on the position information of the person object, and generating an alarm event when the distance reaches a preset threshold value and the action track faces the water area.
A first shortest distance between the person object and the contour line of the water area is calculated based on the position information of the person object; the latest predicted position is extracted from the action track of the person object, and a second shortest distance between the predicted position and the contour line of the water area is calculated; whether the first shortest distance reaches the preset threshold is judged; if so, the first shortest distance is compared with the second shortest distance, and if the first shortest distance is larger than the second shortest distance, an alarm event is generated.
The method for calculating the distance comprises the following steps:
and extracting a point set from the outline of the water area, screening out the optimal point closest to the position coordinates of the person object in the point set, and outputting the shortest distance for the first time. And taking two adjacent points of the preferred points as end points, intercepting a preferred line segment from the contour line, extracting a preferred point set from the preferred line segment, screening out the preferred point closest to the position coordinates of the person object in the preferred point set, and outputting the shortest distance again, and iterating until the latest shortest distance is not smaller than the shortest distance of the last time or the iteration times reach the set times.
In addition, if the first shortest distance does not reach the preset threshold, the second shortest distance is not calculated. If the first shortest distance reaches the preset threshold but is smaller than the second shortest distance, the corresponding person object is continuously tracked as an object of special attention.
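Putting the two distances together, the decision logic of step S5 can be sketched as follows, reusing the shortest_distance function above; the threshold value and the returned labels are illustrative.

DIST_THRESHOLD = 150.0     # pixels (or metres after camera calibration); illustrative value

def decide(person_xy, predicted_xy, water_contour):
    d1 = shortest_distance(person_xy, water_contour)        # first shortest distance
    if d1 > DIST_THRESHOLD:                                  # threshold not reached: no further check
        return "ignore"
    d2 = shortest_distance(predicted_xy, water_contour)      # second shortest distance
    if d1 > d2:                                              # predicted position is closer to the water
        return "alarm"
    return "focus"                                           # keep tracking as an object of special attention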
In some embodiments, the water area safety monitoring system 300 may include a plurality of functional modules composed of computer program segments. The computer program of each program segment in the water area safety monitoring system 300 can be stored in a memory of a computer device and executed by at least one processor to perform the functions of water area safety monitoring (described in detail with reference to fig. 1).
In this embodiment, the water area safety monitoring system 300 may be divided into a plurality of functional modules according to the functions it performs, as shown in fig. 3. The functional modules may include: a video acquisition module 310, a target recognition module 320, a target intercepting module 330, a trajectory prediction module 340, and an alarm generation module 350. A module referred to in the present invention is a series of computer program segments that can be executed by at least one processor, that perform a fixed function, and that are stored in a memory. The functions of the respective modules will be described in detail in the following embodiments.
A video acquisition module 310, configured to acquire monitoring video stream data;
a target recognition module 320 for recognizing a person object and position information from the surveillance video stream data using the target detection model and the target tracking model;
the target intercepting module 330 is configured to intercept a scene picture where the person object is located based on the person object and the position information, and mark the water area in the scene picture;
a trajectory prediction module 340 for predicting an action trajectory of the person object using the behavior detection model;
the alarm generation module 350 is configured to calculate the distance between the person object and the water area based on the position information of the person object, and generate an alarm event when the distance reaches a preset threshold and the action track faces the water area.
Optionally, as an embodiment of the present invention, the target recognition module includes:
the video intercepting unit is used for acquiring video frames from the monitoring video stream data;
a person detection unit for detecting a person object in the video frame using the target detection model;
a feature extraction unit for extracting features of the detected person object from the video frame, the features including appearance features and motion features;
an ID allocation unit for calculating the feature matching degree between the person objects of the previous frame and the current frame, and assigning an ID to the person object after confirming that the matching degree reaches a set threshold;
and the target positioning unit is used for creating a coordinate system for the video frame and acquiring the coordinates of the person object in the coordinate system.
Optionally, as an embodiment of the present invention, the target intercepting module includes:
the range setting unit is used for presetting range parameters of the intercepted pictures;
the picture intercepting unit is used for intercepting a scene picture from the video frame based on the range parameter by taking the coordinates of the person object as the center;
and the region identification unit is used for identifying the water area in the scene picture and extracting the contour line of the water area.
Optionally, as an embodiment of the present invention, the alarm generation module includes:
a first calculation unit for calculating a first shortest distance between the person object and the contour line of the water area based on the position information of the person object;
a second calculation unit for extracting the latest predicted position from the action track of the person object, and calculating a second shortest distance between the predicted position and the contour line of the water area;
the threshold judging unit is used for judging whether the first shortest distance reaches a preset threshold or not;
and the distance comparison unit is used for comparing the first shortest distance with the second shortest distance if the first shortest distance reaches a preset threshold value, and generating an alarm event if the first shortest distance is larger than the second shortest distance.
Fig. 4 is a schematic structural diagram of a terminal 400 according to an embodiment of the present invention, where the terminal 400 may be used to execute the water area safety monitoring method according to the embodiment of the present invention.
The terminal 400 may include a processor 410, a memory 420 and a communication module 430. These components may communicate via one or more buses. It will be appreciated by those skilled in the art that the structure shown in the drawings does not limit the invention: the terminal may use a bus structure or a star structure, include more or fewer components than shown, combine certain components, or arrange the components differently.
The memory 420 may be used to store instructions for execution by the processor 410, and may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disk. When the instructions in the memory 420 are executed by the processor 410, the terminal 400 can perform some or all of the steps in the above method embodiments.
The processor 410 is the control center of the storage terminal; it connects the various parts of the entire electronic terminal using various interfaces and lines, and performs the various functions of the electronic terminal and/or processes data by running or executing software programs and/or modules stored in the memory 420 and invoking data stored in the memory. The processor may be composed of an integrated circuit (Integrated Circuit, IC for short), for example a single packaged IC, or of a plurality of packaged ICs with the same function or different functions connected together. For example, the processor 410 may include only a central processing unit (Central Processing Unit, CPU for short). In the embodiment of the invention, the CPU may have a single computing core or multiple computing cores.
The communication module 430 is configured to establish a communication channel so that the storage terminal can communicate with other terminals, receiving user data sent by other terminals or sending user data to other terminals.
The present invention also provides a computer storage medium in which a program may be stored; when executed, the program may perform some or all of the steps of the embodiments provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Therefore, the invention carries out person identification, distance-to-water determination and action-track determination by combining the target detection technology, the target tracking technology and the behavior detection technology, and raises an alarm only after integrating multiple factors, thereby improving the alarm accuracy.
It will be apparent to those skilled in the art that the techniques of the embodiments of the present invention may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the embodiments of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and including several instructions for causing a computer terminal (which may be a personal computer, a server, a second terminal, a network terminal, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention.
The same or similar parts between the various embodiments in this specification are referred to each other. In particular, for the terminal embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference should be made to the description in the method embodiment for relevant points.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the system embodiments described above are merely illustrative: the division into modules is merely a division by logical function, and there may be other divisions in actual implementation, e.g. multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed between components may be implemented through interfaces, and the indirect couplings or communication connections between systems or modules may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
Although the present invention has been described in detail by way of preferred embodiments with reference to the accompanying drawings, the present invention is not limited thereto. Various equivalent modifications and substitutions may be made to the embodiments of the present invention by those skilled in the art without departing from the spirit and scope of the present invention, and all such modifications and substitutions are intended to fall within the scope of the present invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for monitoring the safety of a water area, comprising:
acquiring monitoring video stream data;
identifying a person object and position information from the surveillance video stream data using the target detection model and the target tracking model;
intercepting a scene picture where the person object is located based on the person object and the position information, and marking a water area in the scene picture;
predicting the action track of the person object by using the behavior detection model;
and calculating the distance between the person object and the water area based on the position information of the person object, and generating an alarm event when the distance reaches a preset threshold value and the action track faces the water area.
2. The method of claim 1, wherein identifying person objects and location information from surveillance video stream data using the object detection model and the object tracking model comprises:
acquiring video frames from the monitoring video stream data;
detecting a person object in the video frame by using the target detection model;
extracting features of the detected person object from the video frame, the features including appearance features and motion features;
calculating the matching degree between the person objects of the previous frame and the current frame, and assigning an ID to the person object after confirming that the matching degree reaches a set threshold;
a coordinate system is created for the video frame and coordinates of the person object in the coordinate system are acquired.
3. The method of claim 2, wherein intercepting a scene picture in which the person object is located based on the person object and the location information, and marking the water area in the scene picture, comprises:
presetting a range parameter of a picture;
taking the coordinates of the person object as the center, and intercepting a scene picture from the video frame based on the range parameter;
and identifying the water area in the scene picture, and extracting the contour line of the water area.
4. A method according to claim 1 or 3, wherein calculating the distance between the person object and the water area based on the position information of the person object, and generating an alarm event when the distance reaches a preset threshold and the action track faces the water area, comprises:
calculating a first shortest distance between the person object and the contour line of the water area based on the position information of the person object;
extracting the latest predicted position from the action track of the person object, and calculating a second shortest distance between the predicted position and the contour line of the water area;
judging whether the first shortest distance reaches a preset threshold value or not:
if yes, comparing the first shortest distance with the second shortest distance, and if the first shortest distance is larger than the second shortest distance, generating an alarm event.
5. A water area safety monitoring system, comprising:
the video acquisition module is used for acquiring monitoring video stream data;
the target recognition module is used for recognizing the person object and the position information from the monitoring video stream data by utilizing the target detection model and the target tracking model;
the target intercepting module is used for intercepting a scene picture where the person object is located based on the person object and the position information and marking a water area in the scene picture;
the track prediction module is used for predicting the action track of the person object by utilizing the behavior detection model;
and the alarm generation module is used for calculating the distance between the person object and the water area based on the position information of the person object, and generating an alarm event when the distance reaches a preset threshold value and the action track faces the water area.
6. The system of claim 5, wherein the target recognition module comprises:
the video intercepting unit is used for acquiring video frames from the monitoring video stream data;
a person detection unit for detecting a person object in the video frame using the target detection model;
a feature extraction unit for extracting features of the detected person object from the video frame, the features including appearance features and motion features;
an ID allocation unit for calculating the feature matching degree between the person objects of the previous frame and the current frame, and assigning an ID to the person object after confirming that the matching degree reaches a set threshold;
and the target positioning unit is used for creating a coordinate system for the video frame and acquiring the coordinates of the person object in the coordinate system.
7. The system of claim 6, wherein the target intercepting module comprises:
the range setting unit is used for presetting range parameters of the intercepted pictures;
the picture intercepting unit is used for intercepting a scene picture from the video frame based on the range parameter by taking the coordinates of the person object as the center;
and the region identification unit is used for identifying the water area in the scene picture and extracting the contour line of the water area.
8. The system of claim 5 or 7, wherein the alarm generation module comprises:
a first calculation unit for calculating a first shortest distance between the person object and the contour line of the water area based on the position information of the person object;
a second calculation unit for extracting the latest predicted position from the action track of the person object, and calculating a second shortest distance between the predicted position and the contour line of the water area;
the threshold judging unit is used for judging whether the first shortest distance reaches a preset threshold or not;
and the distance comparison unit is used for comparing the first shortest distance with the second shortest distance if the first shortest distance reaches a preset threshold value, and generating an alarm event if the first shortest distance is larger than the second shortest distance.
9. A terminal, comprising:
a memory for storing a water area safety monitoring program;
a processor for implementing the steps of the water area safety monitoring method according to any one of claims 1-4 when executing the water area safety monitoring program.
10. A computer-readable storage medium, characterized in that a water area safety monitoring program is stored thereon which, when executed by a processor, implements the steps of the water area safety monitoring method according to any one of claims 1-4.
CN202311034804.9A 2023-08-17 2023-08-17 Water area safety monitoring method, system, terminal and storage medium Pending CN116778673A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311034804.9A CN116778673A (en) 2023-08-17 2023-08-17 Water area safety monitoring method, system, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311034804.9A CN116778673A (en) 2023-08-17 2023-08-17 Water area safety monitoring method, system, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN116778673A (en) 2023-09-19

Family

ID=88013690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311034804.9A Pending CN116778673A (en) 2023-08-17 2023-08-17 Water area safety monitoring method, system, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN116778673A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101709751B1 (en) * 2015-11-24 2017-02-24 주식회사 지오멕스소프트 An automatic monitoring system for dangerous situation of persons in the sea
CN112287928A (en) * 2020-10-20 2021-01-29 深圳市慧鲤科技有限公司 Prompting method and device, electronic equipment and storage medium
CN114463941A (en) * 2021-12-30 2022-05-10 中国电信股份有限公司 Drowning prevention alarm method, device and system
CN115311820A (en) * 2022-07-11 2022-11-08 西安电子科技大学广州研究院 Intelligent security system near water
CN115798146A (en) * 2022-11-14 2023-03-14 中国平安人寿保险股份有限公司 Drowning prevention early warning method and device, computer equipment and storage medium
CN115938069A (en) * 2022-12-19 2023-04-07 北京百度网讯科技有限公司 Dangerous behavior alarm method and dangerous behavior alarm system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117612409A (en) * 2024-01-23 2024-02-27 广州市勤思网络科技有限公司 Method and device for pre-warning of over-region of aquatic object
CN117612409B (en) * 2024-01-23 2024-04-05 广州市勤思网络科技有限公司 Method and device for pre-warning of over-region of aquatic object

Similar Documents

Publication Publication Date Title
US9652863B2 (en) Multi-mode video event indexing
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN109670441A (en) A kind of realization safety cap wearing knows method for distinguishing, system, terminal and computer readable storage medium
WO2021051601A1 (en) Method and system for selecting detection box using mask r-cnn, and electronic device and storage medium
KR102122859B1 (en) Method for tracking multi target in traffic image-monitoring-system
CN108694399B (en) License plate recognition method, device and system
CN113052107B (en) Method for detecting wearing condition of safety helmet, computer equipment and storage medium
CN112396658A (en) Indoor personnel positioning method and positioning system based on video
CN113012383B (en) Fire detection alarm method, related system, related equipment and storage medium
CN116778673A (en) Water area safety monitoring method, system, terminal and storage medium
CN114782897A (en) Dangerous behavior detection method and system based on machine vision and deep learning
CN114359373B (en) Swimming pool drowning prevention target behavior identification method and device, computer equipment and storage medium
CN112861826B (en) Coal mine supervision method, system, equipment and storage medium based on video image
CN111797726A (en) Flame detection method and device, electronic equipment and storage medium
CN110956156A (en) Deep learning-based red light running detection system
CN115410153A (en) Door opening and closing state judging method and device, electronic terminal and storage medium
CN115909400A (en) Identification method for using mobile phone behaviors in low-resolution monitoring scene
CN115731477A (en) Image recognition method, illicit detection method, terminal device, and storage medium
CN112861711A (en) Regional intrusion detection method and device, electronic equipment and storage medium
CN113420631A (en) Safety alarm method and device based on image recognition
CN116740577B (en) Road ponding identification method, system, terminal and storage medium
US20230316760A1 (en) Methods and apparatuses for early warning of climbing behaviors, electronic devices and storage media
US20230360402A1 (en) Video-based public safety incident prediction system and method therefor
CN115457450A (en) Night video monitoring method and device, electronic equipment and storage medium

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20230919)