CN116973939A - Safety monitoring method and device - Google Patents


Info

Publication number
CN116973939A
CN116973939A
Authority
CN
China
Prior art keywords
point cloud
dimensional
target
dangerous area
determining
Prior art date
Legal status
Granted
Application number
CN202311236395.0A
Other languages
Chinese (zh)
Other versions
CN116973939B (en)
Inventor
刘权
赵朝阳
王金桥
Current Assignee
Objecteye Beijing Technology Co Ltd
Original Assignee
Objecteye Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Objecteye Beijing Technology Co Ltd filed Critical Objecteye Beijing Technology Co Ltd
Priority to CN202311236395.0A priority Critical patent/CN116973939B/en
Publication of CN116973939A publication Critical patent/CN116973939A/en
Application granted granted Critical
Publication of CN116973939B publication Critical patent/CN116973939B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/06: Systems determining position data of a target
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G06T7/70: Image analysis; determining position or orientation of objects or cameras
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V10/762: Image or video recognition using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
    • G06V10/763: Non-hierarchical clustering techniques, e.g. based on statistics of modelling distributions
    • G08B21/22: Status alarms responsive to presence or absence of persons
    • G06T2207/10028: Range image; depth image; 3D point clouds (image acquisition modality)
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Probability & Statistics with Applications (AREA)
  • Alarm Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a safety monitoring method and device, relating to the technical fields of electric data processing and machine vision. The safety monitoring method comprises the following steps: acquiring a two-dimensional image and three-dimensional point cloud data of a target monitoring scene; performing joint positioning on the two-dimensional image and the three-dimensional point cloud data to obtain personnel position information and each dangerous area information in the target monitoring scene; and determining, according to the personnel position information and the dangerous area information, a monitoring result indicating whether any person is in a dangerous area. The invention not only reduces the intensity of manual monitoring, realizes automatic round-the-clock on-duty safety monitoring, and lowers monitoring costs; it also improves the detection accuracy for dangerous events and potentially dangerous behaviors, greatly reducing accident risk and loss and keeping production or engineering operations safe and orderly.

Description

Safety monitoring method and device
Technical Field
The invention relates to the technical field of electric data processing and machine vision, in particular to a safety monitoring method and device.
Background
The building construction and infrastructure construction industry is a pillar of the economy. It is a traditional, labor-intensive industry with a large workforce that operates mainly outdoors, and engineering construction safety is closely tied to the safety of people's lives and property as well as to sustainable economic development and social stability. Engineering operation safety is therefore a general concern, and how to perform effective safety supervision is an important problem currently facing the industry.
Currently, there are two main types of safety supervision technology: one is manual inspection, i.e., irregular on-site inspection by staff; the other is video monitoring, which is divided into manual monitoring and automatic machine monitoring.
However, manual inspection is inefficient, cannot cover the whole construction period, and is prone to missed detections. Manual monitoring requires staff to watch video feeds with high intensity, so labor costs are huge and monitoring personnel fatigue easily, again leading to missed detections. Automatic machine monitoring is generally based on two-dimensional (2D) video; it cannot acquire the depth positions of equipment and people, is prone to false alarms, has low accuracy, and cannot meet practical application requirements. An effective solution to these problems is therefore needed.
Disclosure of Invention
Aiming at the problems existing in the prior art, the embodiment of the invention provides a safety monitoring method and device.
The invention provides a safety monitoring method, which comprises the following steps:
acquiring a two-dimensional image and three-dimensional point cloud data of a target monitoring scene;
performing joint positioning on the two-dimensional image and the three-dimensional point cloud data to obtain personnel position information and each dangerous area information in the target monitoring scene;
and determining, according to the personnel position information and the dangerous area information, a monitoring result indicating whether any person is in a dangerous area.
According to the safety monitoring method provided by the invention, performing joint positioning on the two-dimensional image and the three-dimensional point cloud data to obtain the personnel position information and each dangerous area information in the target monitoring scene comprises the following steps:
performing target detection on the two-dimensional image, and determining each target object in the two-dimensional image and the detection frame corresponding to each target object, wherein the target objects comprise personnel and engineering facilities;
performing matching processing on the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame;
and performing personnel positioning on the point cloud sets corresponding to personnel to determine the personnel position information in the target monitoring scene, and performing dangerous area positioning on the point cloud sets of the engineering facilities to determine each dangerous area information in the target monitoring scene.
According to the safety monitoring method provided by the invention, the matching processing of the three-dimensional point cloud data comprises the following steps:
taking the ground plane as a reference plane, and performing ground plane fitting processing on the three-dimensional point cloud data;
and performing spatial conversion processing on the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame.
According to the safety monitoring method provided by the invention, the two-dimensional image is acquired based on a camera, and the three-dimensional point cloud data is acquired based on a radar;
correspondingly, performing spatial conversion processing on the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame comprises:
projecting the three-dimensional point cloud data to the two-dimensional image according to the transformation matrix corresponding to the radar and the camera, and determining the position of each point cloud in the three-dimensional point cloud data in the two-dimensional image;
and for each detection frame in the two-dimensional image, determining the point clouds whose positions fall within the detection frame as the point cloud set corresponding to that detection frame.
According to the safety monitoring method provided by the invention, the engineering facility comprises a mobile facility and a stationary facility;
and before the personnel positioning is performed on the point cloud set corresponding to the personnel and the personnel position information in the target monitoring scene is determined, the method further comprises the following steps:
for each point cloud set, clustering the point clouds in the set by depth to obtain a plurality of point cloud clusters corresponding to the point cloud set;
correspondingly, performing personnel positioning on the point cloud set corresponding to the person and determining the personnel position information in the target monitoring scene comprises:
the point cloud cluster closest to the camera and/or the radar in the point cloud clusters corresponding to the person is used as a first target point cloud cluster;
determining the position information of the clustering center of the first target point cloud cluster as personnel position information;
correspondingly, performing dangerous area positioning on the point cloud sets of the engineering facilities and determining each dangerous area information in the target monitoring scene comprises the following steps:
for each mobile facility, taking the point cloud cluster closest to the camera and/or the radar in the point cloud clusters corresponding to the mobile facility as a second target point cloud cluster; determining the width of the mobile facility according to the second target point cloud cluster; determining dangerous area information corresponding to the mobile facility in the target monitoring scene according to the width;
for each stationary facility, taking the point cloud cluster containing the most points among the point cloud clusters corresponding to the stationary facility as a third target point cloud cluster; determining the working radius and working height of the stationary facility according to the third target point cloud cluster; and determining the dangerous area information corresponding to the stationary facility in the target monitoring scene according to the working radius and the working height.
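The depth-clustering and cluster-selection steps above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the patent does not name a clustering algorithm, so a simple gap-based split along the depth axis is assumed, and all function names are illustrative.

```python
import numpy as np

def cluster_by_depth(points, gap=0.5):
    """Split a point cloud set into clusters along the depth (z) axis:
    the depth-sorted points are cut wherever two consecutive depths
    differ by more than `gap` metres. Returns a list of point arrays."""
    pts = points[np.argsort(points[:, 2])]
    cuts = np.where(np.diff(pts[:, 2]) > gap)[0] + 1
    return np.split(pts, cuts)

def nearest_cluster_center(clusters):
    """Pick the cluster closest to the sensor (smallest mean depth) and
    return its centroid, used here as the person's position (first target
    point cloud cluster)."""
    nearest = min(clusters, key=lambda c: c[:, 2].mean())
    return nearest.mean(axis=0)

def largest_cluster(clusters):
    """Pick the cluster with the most points (third target point cloud
    cluster for a stationary facility)."""
    return max(clusters, key=len)
```

A real system would cluster in full 3D (e.g. Euclidean clustering), but the depth-gap split is enough to show how the first and third target clusters are selected.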
According to the safety monitoring method provided by the invention, the two-dimensional image is acquired based on a camera, and the three-dimensional point cloud data is acquired based on a radar;
correspondingly, before the two-dimensional image and the three-dimensional point cloud data of the target monitoring scene are acquired, the method further comprises:
dividing the field of view of the camera into a nine-square grid;
collecting a plurality of images of a standard black-and-white checkerboard in different poses across the nine-square grid;
calibrating the camera by using Zhang's calibration method to obtain the intrinsic parameters of the camera;
obtaining a calibration image and calibration point cloud data of a calibration scene;
acquiring two-dimensional coordinates of each calibration point in the calibration scene from the calibration image, and acquiring three-dimensional coordinates of each calibration point from the calibration point cloud data;
and determining the transformation matrix corresponding to the radar and the camera according to the intrinsic parameters of the camera and the two-dimensional and three-dimensional coordinates of each calibration point, wherein the transformation matrix is used for determining the personnel position information and each dangerous area information in the target monitoring scene.
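The extrinsic-calibration step (determining the radar-to-camera transformation matrix from the camera intrinsics and the 2D/3D coordinates of the calibration points) is commonly solved with a direct linear transform (DLT). The sketch below is an illustration of that standard technique under the assumption of noise-free correspondences and known intrinsics, not the patent's specific procedure; function names are illustrative.

```python
import numpy as np

def estimate_projection_matrix(pts3d, pts2d):
    """Direct linear transform: solve the 3x4 projection matrix P with
    s*[u, v, 1]^T = P*[x, y, z, 1]^T as the null space of the stacked
    constraint matrix (needs at least 6 non-coplanar points)."""
    rows = []
    for (x, y, z), (u, v) in zip(pts3d, pts2d):
        X = np.array([x, y, z, 1.0])
        rows.append([*X, 0.0, 0.0, 0.0, 0.0, *(-u * X)])
        rows.append([0.0, 0.0, 0.0, 0.0, *X, *(-v * X)])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)       # smallest-singular-value vector

def extrinsics_from_projection(P, K):
    """Recover the radar-to-camera transform [R|t] from P = K [R|t],
    fixing the arbitrary DLT scale so R's first row has unit norm."""
    Rt = np.linalg.inv(K) @ P
    Rt /= np.linalg.norm(Rt[0, :3])
    if np.linalg.det(Rt[:, :3]) < 0:  # resolve the DLT sign ambiguity
        Rt = -Rt
    return Rt
```

With noisy real calibration data one would instead use a robust PnP solver and refine the rotation onto SO(3), but the algebra of recovering [R|t] from the intrinsics and the point correspondences is as above.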
According to the safety monitoring method provided by the invention, after determining the monitoring result of whether any person is in a dangerous area according to the personnel position information and the dangerous area information, the method further comprises:
storing the monitoring result, and, when the monitoring result indicates that a person is in a dangerous area, issuing a danger alarm and uploading an alarm report to the cloud.
The invention also provides a safety monitoring device, comprising:
the acquisition module is configured to acquire two-dimensional images and three-dimensional point cloud data of the target monitoring scene;
the joint processing module is configured to perform joint positioning on the two-dimensional image and the three-dimensional point cloud data to obtain personnel position information and each dangerous area information in the target monitoring scene;
and the determining module is configured to determine, according to the personnel position information and the dangerous area information, a monitoring result indicating whether any person is in a dangerous area.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the safety monitoring method as described in any of the above when executing the program.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the safety monitoring method as described in any of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the safety monitoring method as described in any of the above.
According to the safety monitoring method and device of the invention, a two-dimensional image and three-dimensional point cloud data of a target monitoring scene are acquired; joint positioning is performed on the two-dimensional image and the three-dimensional point cloud data to obtain personnel position information and each dangerous area information in the target monitoring scene; and a monitoring result indicating whether any person is in a dangerous area is determined according to the personnel position information and the dangerous area information. The invention reduces the intensity of manual monitoring: no 24-hour high-intensity manual watch or inspection is needed, automatic round-the-clock on-duty safety monitoring is realized, an alarm is raised only when the monitoring result is abnormal, and monitoring costs are reduced. It also improves the detection accuracy for dangerous events and potentially dangerous behaviors, with a detection rate above 90%, greatly reducing accident risk and loss and keeping production or engineering operations safe and orderly.
Drawings
In order to more clearly illustrate the technical solutions of the invention or of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of the safety monitoring method provided by the invention;
FIG. 2 is a schematic flow chart of a plane fitting provided by the present invention;
FIG. 3 is a schematic flow chart of calibration provided by the present invention;
FIG. 4 is a physical diagram of the safety monitoring device provided by the invention;
FIG. 5 is a second schematic flow chart of the safety monitoring method provided by the invention;
FIG. 6 is a schematic diagram of a safety monitoring device according to the present invention;
fig. 7 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The security monitoring method and apparatus of the present invention are described below in conjunction with fig. 1-6.
Fig. 1 is a schematic flow chart of a security monitoring method provided by the present invention, and referring to fig. 1, the method includes steps 101 to 103, where:
Step 101: and acquiring a two-dimensional image and three-dimensional point cloud data of the target monitoring scene.
Firstly, it should be noted that the execution body of the present invention may be any electronic device for security monitoring, for example, any one of a smart phone, a smart watch, a desktop computer, a laptop computer, and the like.
Specifically, the target monitoring scene is a scene that needs safety monitoring, and may be an engineering construction scene, such as a building construction site or a river-crossing bridge construction site. The two-dimensional image may be a 2D color image, and the three-dimensional point cloud data is three-dimensional (3D) point cloud data.
In practical application, current two-dimensional images and three-dimensional point cloud data of a target monitoring scene are obtained in real time. There are various methods for acquiring the two-dimensional image and the three-dimensional point cloud data, and the present invention is not limited in this regard.
Illustratively, the two-dimensional image may be acquired by a camera, such as a 2D camera; the three-dimensional point cloud data may be acquired by radar, such as by lidar. Namely, a 2D color image of a frame of monitoring scene is obtained through a high-definition camera, and meanwhile, 3D point cloud data of the frame of monitoring scene is obtained through a laser radar.
Illustratively, the user uploads the two-dimensional image and the three-dimensional point cloud data of the target monitoring scene, and accordingly, the execution subject acquires the two-dimensional image and the three-dimensional point cloud data of the target monitoring scene.
Illustratively, the executing body receives the data acquisition instruction or the security monitoring instruction, and accordingly, the executing body acquires the two-dimensional image and the three-dimensional point cloud data of the target monitoring scene from the storage area to which the data acquisition instruction or the security monitoring instruction points.
Step 102: and carrying out joint positioning on the two-dimensional image and the three-dimensional point cloud data to obtain personnel position information and each dangerous area information in the target monitoring scene.
Specifically, the personnel position information characterizes the positions of the workers, the patrolling personnel, the supervision personnel and other personnel in the target monitoring scene, and refers to the three-dimensional positions in the space. The dangerous area information characterizes the position of a dangerous area with potential safety hazards in a target monitoring scene, wherein the dangerous area can be an area within the working radius of the excavator, an area within the working radius of the crane, an area under a suspended object of the crane and the like, and refers to a three-dimensional position in space.
In practical application, the two-dimensional image and the three-dimensional point cloud data can be processed in a combined mode, personnel and dangerous areas in the target monitoring scene are determined, and personnel position information of each personnel and dangerous area information of each dangerous area are determined based on the two-dimensional image and the three-dimensional point cloud data.
For example, the two-dimensional image and the three-dimensional point cloud data can be input into a multi-modal data analysis model for joint processing, and the personnel position information and the danger area information in the target monitoring scene are determined.
Step 103: and determining whether monitoring results of the personnel in the dangerous area exist according to the personnel position information and the dangerous area information.
Having acquired the personnel position information and the dangerous area information, the monitoring result is determined from them, the monitoring result representing whether any person is in a dangerous area.
In practical application, based on the personnel position information and each piece of dangerous area information, it can be judged whether the position represented by the personnel position information lies within the area represented by the dangerous area information. If so, the monitoring result is that a person is in a dangerous area; in this case a warning can be issued promptly so that the person can adjust his or her behavior in time and move to a safe area. If not, it is determined that no person is in a dangerous area, i.e., the monitoring result is that no dangerous behavior exists, and continuous safety monitoring of the target monitoring scene then proceeds.
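As a minimal illustration of this membership test, the sketch below models a danger area as a cylinder defined by a working radius and working height (as the patent later describes for stationary facilities); the class and function names are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class DangerZone:
    """Cylindrical danger area: centre (cx, cy), working radius and
    working height, e.g. the sweep of a crane or excavator arm."""
    cx: float
    cy: float
    radius: float
    height: float

def person_in_zone(person_xyz, zone):
    """Monitoring result for one zone: True when the person's 3D position
    falls inside the cylinder (horizontal distance within the radius and
    height between ground level and the working height)."""
    x, y, z = person_xyz
    dx, dy = x - zone.cx, y - zone.cy
    return dx * dx + dy * dy <= zone.radius ** 2 and 0.0 <= z <= zone.height

def monitor(person_xyz, zones):
    """Check one person's position against every danger area and return
    the zones that should raise an alarm (empty list means safe)."""
    return [zone for zone in zones if person_in_zone(person_xyz, zone)]
```

An empty return value corresponds to the "no dangerous behavior" monitoring result; a non-empty one would trigger the alarm and report upload described later.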
Dangerous behaviors include, but are not limited to: a person standing within the working radius of an excavator, a person standing within the working radius of a crane, a person standing under a load suspended from a crane, and the like.
For example, the multi-modal data analysis model may evaluate the personnel position information and each dangerous area information in the target monitoring scene to determine the monitoring result of whether any person is in a dangerous area.
According to the safety monitoring method provided by the invention, the two-dimensional image and the three-dimensional point cloud data of the target monitoring scene are acquired; joint positioning is performed on the two-dimensional image and the three-dimensional point cloud data to obtain the personnel position information and each dangerous area information in the target monitoring scene; and a monitoring result indicating whether any person is in a dangerous area is determined according to the personnel position information and the dangerous area information. The invention reduces the intensity of manual monitoring: no 24-hour high-intensity manual watch or inspection is needed, automatic round-the-clock on-duty safety monitoring is realized, an alarm is raised only when the monitoring result is abnormal, and monitoring costs are reduced. It also improves the detection accuracy for dangerous events and potentially dangerous behaviors, with a detection rate above 90%, greatly reducing accident risk and loss and keeping production or engineering operations safe and orderly.
In one or more optional embodiments of the invention, joint positioning may be performed on the two-dimensional image and the three-dimensional point cloud data to obtain the personnel position information and each dangerous area information in the target monitoring scene; the specific implementation process may be as follows:
performing target detection on the two-dimensional image, and determining each target object in the two-dimensional image and the detection frame corresponding to each target object, wherein the target objects comprise personnel and engineering facilities;
performing matching processing on the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame;
and performing personnel positioning on the point cloud sets corresponding to personnel to determine the personnel position information in the target monitoring scene, and performing dangerous area positioning on the point cloud sets of the engineering facilities to determine each dangerous area information in the target monitoring scene.
Specifically, a detection frame is a rectangular frame marking a target object in the two-dimensional image and carries the frame's position information. The target object is the detected target. Engineering facilities are the machinery, equipment and suspended objects used in construction, where the machinery includes cranes, excavators, bulldozers and the like. A point cloud set is a set of point clouds.
In practical application, the two-dimensional image can be input into the target detection unit to perform target detection, each target object in the two-dimensional image is identified, and each target object is marked in the two-dimensional image by utilizing the detection frame.
Illustratively, the 2D color image is first input into a deep-learning-based target detection model for target object detection, obtaining the detection frame of each target object on the 2D color image and the frame's position information. The target detection model is trained on a training data set corresponding to the target objects, the training data set consisting of a private data set and/or a network data set; the private data set refers to images containing target objects collected by the applicant, and the network data set refers to images containing target objects obtained from the network.
Further, each point cloud in the 3D point cloud data is matched and mapped with each detection frame in the 2D image, the detection frame corresponding to each point cloud in the 3D point cloud data is determined, and each point cloud corresponding to each detection frame forms a point cloud set corresponding to the detection frame.
For a detection frame labeling a person, the three-dimensional position of the person in space is determined according to the point cloud set corresponding to the detection frame, i.e., the personnel position information in the target monitoring scene is determined; for a detection frame labeling an engineering facility, the dangerous area corresponding to the facility and its three-dimensional position in space are determined according to the point cloud set corresponding to the detection frame, i.e., the dangerous area information in the target monitoring scene is determined.
In this way, target detection, ground plane fitting, positioning and related processing improve the accuracy and reliability of determining the personnel position information and each dangerous area information, which further improves the reliability of safety monitoring and the safety of personnel.
In one or more optional embodiments of the present invention, the matching processing of the three-dimensional point cloud data may be implemented as follows:
taking the ground plane as a reference plane, and performing ground plane fitting processing on the three-dimensional point cloud data;
and performing spatial conversion processing on the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame.
In practical application, the ground plane is used as the spatial reference: ground plane fitting is performed on the 3D point cloud data, the fitted 3D point cloud data is mapped onto the 2D image, the detection frame corresponding to each point cloud in the 3D point cloud data is determined, and the point clouds corresponding to each detection frame form the point cloud set of that frame. Performing the spatial conversion after ground plane fitting thus realizes the determination of the point cloud sets and improves their accuracy.
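The patent does not name the plane-fitting algorithm, but on noisy outdoor point clouds the ground plane is typically fitted with RANSAC. The following is a hedged sketch of that common choice, with illustrative names and parameters:

```python
import numpy as np

def fit_ground_plane(points, iters=200, thresh=0.05, seed=0):
    """RANSAC plane fit: repeatedly fit a plane through 3 random points
    and keep the plane with the most inliers (points within `thresh`
    metres of it). Returns (unit normal n, offset d) with n . p + d = 0."""
    rng = np.random.default_rng(seed)
    best_count, best = 0, None
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(a)
        count = int((np.abs(points @ n + d) < thresh).sum())
        if count > best_count:
            best_count, best = count, (n, d)
    # a least-squares refinement over the winning inliers could follow here
    return best
```

The recovered normal and offset give the spatial reference plane against which the point clouds are subsequently aligned.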
In one or more optional embodiments of the invention, the two-dimensional image is acquired based on a camera, and the three-dimensional point cloud data is acquired based on radar; correspondingly, the spatial conversion processing is performed on the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame, and the specific implementation process may be as follows:
Projecting the three-dimensional point cloud data to the two-dimensional image according to the transformation matrix corresponding to the radar and the camera, and determining the position of each point cloud in the three-dimensional point cloud data in the two-dimensional image;
and for each detection frame in the two-dimensional image, determining the point clouds whose positions fall within the detection frame as the point cloud set corresponding to that detection frame.
In practical application, the camera and the radar are jointly calibrated in advance to obtain the transformation matrix between them, i.e., the transformation matrix from the radar coordinate system to the camera coordinate system. After ground plane fitting, each point in the 3D point cloud data can be projected one by one onto the 2D image according to this transformation matrix, and the points falling within detection frames are bound to the corresponding targets, yielding the point cloud aligned with each target, i.e., the point cloud set corresponding to each detection frame. In this way, the point cloud set corresponding to each target object can be determined quickly and accurately.
In particular, the camera intrinsic matrix M_cam and the transformation matrix M_Rt corresponding to the radar can be used to project each point cloud p(x, y, z) ∈ P_pcl of the 3D point cloud data P_pcl one by one onto the 2D image I_cam according to formula (1), obtaining a projection point p′(u, v) ∈ I_cam; for each projection point p′ falling into a detection frame r(x, y, w, h) ∈ R_rect, the point cloud p is bound to the corresponding target to obtain the point clouds aligned with that target. Here f_u is the equivalent pixel focal length in the horizontal direction of the camera, f_v is the equivalent pixel focal length in the vertical direction of the camera, u_0 is the horizontal center coordinate of the image, and v_0 is the vertical center coordinate of the image; R is the rotation matrix and t is the translation matrix.

s · [u, v, 1]^T = M_cam · M_Rt · [x, y, z, 1]^T,  where  M_cam = [[f_u, 0, u_0], [0, f_v, v_0], [0, 0, 1]],  M_Rt = [R | t]    (1)
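The projection and binding steps of formula (1) can be sketched in NumPy as follows; the function names, image bounds and box format are illustrative assumptions, not part of the patent:

```python
import numpy as np

def project_points(points_xyz, K, R, t, img_w, img_h):
    """Project 3D radar points into the 2D image plane.

    points_xyz: (N, 3) array in the radar coordinate system.
    K: 3x3 intrinsic matrix [[f_u, 0, u0], [0, f_v, v0], [0, 0, 1]].
    R, t: rotation (3x3) and translation (3,) from radar to camera frame.
    Returns (N, 2) pixel coordinates and a mask of points inside the image.
    """
    cam = points_xyz @ R.T + t              # transform into the camera frame
    z = cam[:, 2]
    uv = (cam @ K.T)[:, :2] / z[:, None]    # apply intrinsics, then perspective division
    inside = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < img_w) \
             & (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    return uv, inside

def bind_to_boxes(uv, boxes):
    """Assign each projected point to every detection box (x, y, w, h) it falls in."""
    sets = {i: [] for i in range(len(boxes))}
    for j, (u, v) in enumerate(uv):
        for i, (x, y, w, h) in enumerate(boxes):
            if x <= u <= x + w and y <= v <= y + h:
                sets[i].append(j)
    return sets
```

Points behind the camera (z ≤ 0) are masked out before binding, since the perspective division is only meaningful for points in front of the image plane.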
In one or more optional embodiments of the present invention, the matching processing of the three-dimensional point cloud data to determine a point cloud set corresponding to each detection frame includes:
performing downsampling processing on the three-dimensional point cloud data to obtain the processed three-dimensional point cloud data;
performing point cloud normal estimation on the processed three-dimensional point cloud data, and determining the curvature of each point cloud in the processed three-dimensional point cloud data;
taking the point clouds with the minimum curvature as seeds to perform growth clustering to obtain point cloud blocks with smooth curvature;
and carrying out matching processing on the point cloud blocks to determine point cloud sets corresponding to the detection frames.
In practical application, refer to the schematic flow chart of plane fitting provided by the invention shown in fig. 2: first, the input 3D point cloud data are downsampled, reducing the scale of the point cloud while preserving its overall geometric and topological characteristics, which reduces the computation of subsequent processing. Point cloud normal estimation is then performed on the downsampled data, preparing a general description operator for the following steps. Next, several points with the smallest curvature are taken as seeds for region-growing clustering, i.e., the point cloud is clustered and segmented into point cloud blocks with smooth curvature. Finally, a RANSAC (random sample consensus) iterative algorithm is used to fit the plane, followed by the spatial conversion processing to determine the point cloud set corresponding to each detection frame.
In one or more optional embodiments of the present invention, before the step of locating the person in the point cloud set corresponding to the person and determining the person position information in the target monitoring scene, the method further includes:
and for each point cloud set, clustering the point clouds in the set in depth to obtain a plurality of point cloud clusters corresponding to the point cloud set.
In practical application, the point cloud set corresponding to each target object is preprocessed: the point clouds in the set are clustered on the depth coordinate to obtain a plurality of point cloud clusters. In this way, discrete interference points can be filtered out, improving positioning accuracy.
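A simple sketch of such depth clustering, done by sorting on depth and splitting at gaps; the gap threshold is an assumed illustrative value, since the patent does not specify the clustering algorithm:

```python
import numpy as np

def cluster_by_depth(points, gap=0.5):
    """Split a non-empty (N, 3) point set into clusters along depth (z).

    Points whose depth differs by less than `gap` from their sorted
    neighbour join the same cluster; isolated interference points end
    up in tiny clusters that later steps can ignore.
    """
    order = np.argsort(points[:, 2])
    sorted_pts = points[order]
    clusters, current = [], [sorted_pts[0]]
    for prev, cur in zip(sorted_pts, sorted_pts[1:]):
        if cur[2] - prev[2] < gap:
            current.append(cur)
        else:
            clusters.append(np.array(current))
            current = [cur]
    clusters.append(np.array(current))
    return clusters
```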
In one or more optional embodiments of the present invention, the step of locating the person in the point cloud set corresponding to the person, and determining the person position information in the target monitoring scene may be implemented as follows:
the point cloud cluster closest to the camera and/or the radar in the point cloud clusters corresponding to the person is used as a first target point cloud cluster;
and determining the position information of the clustering center of the first target point cloud cluster as personnel position information.
In practical application, for the point cloud set of each target object, different processing strategies can be adopted according to different types of the target object so as to judge the personnel position and the dangerous area, namely, the personnel position information and the dangerous area information are determined.
For determining the person position information: since a person is a moving target, according to the principle of point cloud formation, the point cloud cluster closest to the camera and/or the radar among the clusters corresponding to the person is taken as the cluster actually corresponding to the person, i.e., the first target point cloud cluster. The cluster center of the first target point cloud cluster is then taken as the person's actual three-dimensional position, i.e., its position information is determined as the person position information. This improves the accuracy of the person position information and, in turn, the efficiency of safety monitoring.
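Assuming depth clusters as produced above, picking the nearest cluster and using its center as the person position can be sketched as follows (names and the use of mean depth as the distance measure are illustrative assumptions):

```python
import numpy as np

def person_position(clusters):
    """Return the centroid of the cluster nearest the sensor.

    `clusters` is a list of (Ni, 3) arrays; the mean depth (z) of each
    cluster stands in for its distance to the camera/radar.
    """
    nearest = min(clusters, key=lambda c: c[:, 2].mean())
    return nearest.mean(axis=0)
```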
In one or more alternative embodiments of the invention, the engineering facilities include mobile facilities and stationary facilities; correspondingly, dangerous area positioning is performed on the point cloud set of each engineering facility to determine each piece of dangerous area information in the target monitoring scene, and the specific implementation process may be as follows:
for each mobile facility, taking the point cloud cluster closest to the camera and/or the radar in the point cloud clusters corresponding to the mobile facility as a second target point cloud cluster; determining the width of the mobile facility according to the second target point cloud cluster; determining dangerous area information corresponding to the mobile facility in the target monitoring scene according to the width;
For each stationary facility, the point cloud cluster containing the largest number of point clouds among the clusters corresponding to the stationary facility is taken as a third target point cloud cluster; the working radius and working height of the stationary facility are determined according to the third target point cloud cluster; and the dangerous area information corresponding to the stationary facility in the target monitoring scene is determined according to the working radius and working height.
Specifically, a mobile facility means an engineering facility whose target type is mobile, such as a crane. A stationary facility means an engineering facility whose target type is stationary, such as a crane, a hoist, or an excavator.
In practical application, according to the principle of point cloud formation, the point cloud cluster closest to the camera and/or radar among the clusters corresponding to the mobile facility is taken as the cluster actually corresponding to it, i.e., the second target point cloud cluster. The width of the mobile facility is then obtained from the second target point cloud cluster, for example by projecting the cluster onto the ground and taking the length of the longest diagonal of the projection as the width. The cylindrical space with this width as its diameter, extending vertically from the ground, is then judged to be the dangerous area, i.e., the dangerous area information corresponding to the mobile facility is determined.
For a stationary facility, the point cloud cluster with the largest number of point clouds among the clusters corresponding to the stationary facility is taken as the cluster actually corresponding to it, i.e., the third target point cloud cluster. The third target point cloud cluster is projected onto the ground and the working radius of the stationary facility is calculated, the working radius being determined as the radius of the minimum circumscribed circle of the cluster's ground projection; the highest height of the stationary facility, i.e., the working height, is determined from the height information of each point cloud in the third target point cloud cluster. The cylindrical space defined by the working radius and the working height is then judged to be the dangerous area, i.e., the dangerous area information corresponding to the stationary facility is determined.
Therefore, different treatment strategies are adopted for engineering facility products of different target types so as to judge dangerous areas, the accuracy of dangerous area information can be improved, and the high efficiency of safety monitoring is further improved.
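The two danger-zone strategies can be sketched as follows. The axis convention (z up), the centroid-based circle approximation (a simpler stand-in for the true minimum circumscribed circle) and the containment test are assumptions for illustration, not the patent's exact construction:

```python
import numpy as np

def mobile_danger_zone(cluster_ground_xy, center):
    """Danger cylinder for a mobile facility: diameter = longest
    diagonal of the (M, 2) ground projection; unbounded in height."""
    diffs = cluster_ground_xy[:, None, :] - cluster_ground_xy[None, :, :]
    width = np.sqrt((diffs ** 2).sum(-1)).max()
    return {"center": np.asarray(center), "radius": width / 2, "height": np.inf}

def stationary_danger_zone(cluster_xyz):
    """Danger cylinder for a stationary facility: radius from the ground
    projection (approximated by the farthest point from the projected
    centroid) and height from the tallest point; assumes z is up."""
    xy = cluster_xyz[:, :2]
    center = xy.mean(axis=0)
    radius = np.linalg.norm(xy - center, axis=1).max()
    return {"center": center, "radius": radius, "height": cluster_xyz[:, 2].max()}

def person_in_zone(person_xyz, zone):
    """True if the person's position lies inside the danger cylinder."""
    horiz = np.linalg.norm(np.asarray(person_xyz[:2]) - zone["center"])
    return horiz <= zone["radius"] and person_xyz[2] <= zone["height"]
```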
It should be noted that the ground projection of a designated target point cloud cluster (any of the first to third target point cloud clusters) may be obtained as follows:
The three-dimensional plane is represented by formula (2), where A, B, C and D are constant coefficients:

A·x + B·y + C·z + D = 0    (2)

For any point (x_0, y_0, z_0) in space, let its projection onto the plane be (x, y, z). The vector formed by the two points is parallel to the plane normal (A, B, C), which gives formula (3):

(x − x_0, y − y_0, z − z_0) = t·(A, B, C)    (3)

Rearranging formula (3) yields formula (4):

x = x_0 + A·t,  y = y_0 + B·t,  z = z_0 + C·t    (4)

Substituting formula (4) into formula (2) yields formula (5):

t = −(A·x_0 + B·y_0 + C·z_0 + D) / (A² + B² + C²)    (5)

Substituting formula (5) back into formula (4) yields the projected coordinates.
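Formulas (2) through (5) amount to the following point-onto-plane projection:

```python
import numpy as np

def project_onto_plane(p, plane):
    """Project point p onto the plane A*x + B*y + C*z + D = 0.

    t solves (A, B, C) · (p + t*(A, B, C)) + D = 0, i.e.
    t = -(A*x0 + B*y0 + C*z0 + D) / (A^2 + B^2 + C^2),
    and the projection is p + t*(A, B, C), per formulas (4) and (5).
    """
    A, B, C, D = plane
    n = np.array([A, B, C], dtype=float)
    t = -(n @ p + D) / (n @ n)
    return p + t * n
```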
In one or more optional embodiments of the invention, the two-dimensional image is acquired based on a camera, and the three-dimensional point cloud data is acquired based on radar; correspondingly, before the two-dimensional image and the three-dimensional point cloud data of the target monitoring scene are acquired, calibration is also needed. The specific process of calibration can be as follows:
dividing the field of view of the camera into a nine-square grid;
collecting a plurality of images of different poses of a standard black-and-white checkerboard in the nine-grid;
calibrating the camera by using a Zhang calibration method to obtain an internal reference of the camera;
obtaining a calibration image and calibration point cloud data of a calibration scene;
acquiring two-dimensional coordinates of each calibration point in the calibration scene from the calibration image, and acquiring three-dimensional coordinates of each calibration point from the calibration point cloud data;
And determining transformation matrixes corresponding to the radar and the camera according to the internal parameters of the camera, the two-dimensional coordinates and the three-dimensional coordinates of each calibration point, wherein the transformation matrixes are used for determining personnel position information and each dangerous area information in the target monitoring scene.
In practical application, referring to fig. 3, fig. 3 is a schematic flow chart of calibration provided by the present invention: before safety monitoring, calibration needs to be performed in advance, including the internal-parameter calibration of the camera and the joint calibration of the camera and radar.
First, the camera is calibrated to obtain its internal parameters. To improve the accuracy of the internal-parameter calibration, a standard black-and-white checkerboard is prepared, the camera field of view is divided into a nine-square grid, and several images of the checkerboard at different poses are collected in each cell of the grid; that is, the checkerboard is placed in each cell at different rotation and tilt angles and images are captured. The camera's internal parameters are then computed from the collected images using Zhang's calibration method.
Zhang's calibration method is a camera calibration method based on a planar calibration target ("A Flexible New Technique for Camera Calibration"). The calibration procedure is as follows: print a calibration pattern and attach it to a plane; capture a group of images at different poses by moving the calibration plate or the camera; detect the calibration points in the images; solve a closed-form solution for the intrinsic and extrinsic matrices; estimate the distortion parameters; and, taking the intrinsic/extrinsic matrices and distortion coefficients as initial values, optimize by maximum likelihood estimation to obtain the final solution, i.e., the camera's internal parameters.
Then the camera and radar are jointly calibrated to obtain the transformation matrix that projects radar coordinates to pixel coordinates: a color image (calibration image) and point cloud data (calibration point cloud data) of the calibration scene are collected, and for each calibration point in the scene its three-dimensional coordinates and the corresponding pixel coordinates (two-dimensional coordinates) in the color image are obtained. The transformation matrix from the radar coordinate system to the camera coordinate system, i.e., the rotation matrix R and translation matrix t in formula (1), is then solved from the camera's internal parameters and the three- and two-dimensional coordinates of each point. This is a typical Perspective-n-Point (PnP) problem, and the invention solves it with an iterative method based on the Levenberg-Marquardt algorithm.
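The patent solves the PnP problem with a Levenberg-Marquardt iteration; as a hedged stand-in, the sketch below shows only a linear DLT (direct linear transform) estimate of [R | t] from noise-free correspondences, which such an iteration could use as its starting point:

```python
import numpy as np

def dlt_pose(pts3d, pts2d, K):
    """Linear (DLT) estimate of the radar-to-camera pose [R | t] from
    n >= 6 non-degenerate 3D-2D correspondences and known intrinsics K.

    Builds the homogeneous system from u*(P3.X) = P1.X, v*(P3.X) = P2.X,
    takes the SVD nullspace as the projection matrix P (up to scale),
    then removes K and the scale/sign ambiguity.
    """
    n = len(pts3d)
    A = np.zeros((2 * n, 12))
    for i, ((X, Y, Z), (u, v)) in enumerate(zip(pts3d, pts2d)):
        Xh = np.array([X, Y, Z, 1.0])
        A[2 * i, 0:4] = Xh
        A[2 * i, 8:12] = -u * Xh
        A[2 * i + 1, 4:8] = Xh
        A[2 * i + 1, 8:12] = -v * Xh
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)          # projection matrix, up to scale
    M = np.linalg.inv(K) @ P          # s * [R | t]
    s = np.linalg.norm(M[0, :3])      # rows of R have unit norm
    M /= s
    if np.linalg.det(M[:, :3]) < 0:   # fix the overall sign so det(R) = +1
        M = -M
    return M[:, :3], M[:, 3]
```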
The calibration scene can be any scene used for calibration; there is no special requirement, and a daily office scene may be selected. The calibration points are preferably corner points in the scene, such as the corners of a display, of tables and chairs, or of cartons.
It should be noted that, because the 3D point cloud carries no semantic information, the three-dimensional coordinates of the calibration points are difficult to obtain directly, so the joint calibration of camera and radar can be carried out in a semi-automatic manner. The two-dimensional coordinates of a calibration point can be read directly from the calibration image, while its three-dimensional coordinates are obtained through semi-automatic mouse interaction: in the program's interactive interface, the point cloud region containing the target point is selected with a mouse box, and the program then fits the cube corner, for example with a 3D Harris operator, and outputs the three-dimensional coordinates.
In one or more optional embodiments of the present invention, after determining whether there is a monitoring result of a person in a dangerous area according to the person position information and each of the dangerous area information, the method further includes:
and storing the monitoring result, and sending out a dangerous alarm and uploading an alarm report to the cloud when the monitoring result indicates that the personnel are in a dangerous area.
Each monitoring result is stored, and when a monitoring result indicates that a person is in a dangerous area, i.e., it is judged that a person has entered a dangerous area or performed a dangerous operation, an alarm is raised in time to remind the target person who entered the dangerous area. The alarm report is uploaded to the cloud for viewing, tracing and statistics. The next two-dimensional image and three-dimensional point cloud data are then acquired and processed according to the same flow.
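The store-then-alarm flow can be sketched as below; `notify` and `upload` are hypothetical stand-ins for the device's alarm output and the cloud API, neither of which the patent names concretely:

```python
import json
import time

def handle_result(person_in_danger, report_store, notify, upload):
    """Store every monitoring result; alarm and upload only on danger.

    report_store: list collecting every result record.
    notify/upload: callables standing in for the siren and cloud API.
    """
    record = {"ts": time.time(), "danger": person_in_danger}
    report_store.append(record)          # every result is stored
    if person_in_danger:
        notify(record)                   # raise the on-site alarm
        upload(json.dumps(record))       # upload the alarm report
    return record
```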
In addition, referring to fig. 4, fig. 4 is a physical diagram of the safety monitoring device provided by the present invention. The execution subject of the present invention may also be a safety monitoring device, which includes: a protective shell and an internal support system for protecting and fixing the internal modules; a camera for acquiring two-dimensional images; a radar for acquiring 3D point cloud data; and a controller for controlling the camera and the radar and jointly processing the two-dimensional image and the 3D point cloud data to obtain the monitoring result. The safety monitoring device is a portable intelligent hardware product that is light and easy to deploy: on the application site, one only needs to erect a tripod, fix the product, aim it at the monitoring target and power it on.
The safety monitoring device is a portable intelligent hardware device, a portable analysis device integrating multiple sensors with software and hardware. It is designed with a high-strength solid light aluminum alloy, is compact in size and novel in appearance, is waterproof and dustproof, is highly adaptable to the environment, and is easy to deploy and maintain. Within its compact volume it integrates multiple hardware components, including a computing unit, a lidar and a high-definition camera, combining data acquisition, analysis and platform display capabilities with strong performance.
The security detection method provided by the present invention is further described below with reference to fig. 5. FIG. 5 is a second flow chart of the security detection method according to the present invention, in which security detection is performed using the device shown in fig. 4. First, a 2D color image and 3D point cloud data are acquired. The 2D color image is input into a deep-learning-based target detection model for target detection, obtaining the rectangular-frame position information of each target object in the image, i.e., the prediction frames. Ground plane fitting, point cloud interception and preprocessing (clustering) are performed on the 3D point cloud data. The dangerous areas are then judged from the prediction frames and the clustered 3D point cloud data, i.e., the person position information and the dangerous area information are determined. Whether there is a monitoring result of a person being in a dangerous area is then determined from the person position information and the dangerous area information, i.e., the alarm judgment. The next 2D color image and 3D point cloud data are then acquired and processed according to the same flow.
With the portable intelligent hardware device, i.e., the safety detection device, supported by this embodiment, engineering construction operations are monitored continuously: 2D color camera data and 3D point cloud data are obtained in real time, and with the above analysis and processing method, dangerous behaviors during engineering construction operations, such as a person standing within the operating radius of an excavator or under the suspended load of a crane, can be detected in real time, alarmed promptly, and evidence uploaded to the cloud. This reduces the intensity of manual monitoring: no 24-hour high-intensity manual guarding or inspection is needed, automatic round-the-clock safety monitoring is realized, attention is required only when a monitoring result is abnormal, and monitoring cost is reduced. The detection accuracy for dangerous events and potentially dangerous behaviors can also be improved, with a detection rate above 90%, greatly reducing accident risks and losses and keeping production or engineering operations safe and orderly.
The safety monitoring device provided by the invention is described below, and the safety monitoring device described below and the safety monitoring method described above can be referred to correspondingly.
Fig. 6 is a schematic structural diagram of a safety monitoring device according to the present invention, and as shown in fig. 6, the safety monitoring device 600 includes: an acquisition module 601, a joint processing module 602, and a determination module 603, wherein:
An acquisition module 601 configured to acquire two-dimensional images and three-dimensional point cloud data of a target monitoring scene;
the joint processing module 602 is configured to perform joint positioning on the two-dimensional image and the three-dimensional point cloud data to obtain personnel position information and each dangerous area information in the target monitoring scene;
a determining module 603 is configured to determine whether there is a monitoring result that a person is in a dangerous area according to the person position information and each dangerous area information.
With the above safety monitoring device, two-dimensional images and three-dimensional point cloud data of a target monitoring scene are obtained; the two-dimensional image and the three-dimensional point cloud data are jointly positioned to obtain the person position information and each piece of dangerous area information in the target monitoring scene; and whether there is a monitoring result of a person being in a dangerous area is determined from the person position information and the dangerous area information. The invention reduces the intensity of manual monitoring: no 24-hour high-intensity manual guarding or inspection is needed, automatic round-the-clock safety monitoring is realized, attention is required only when a monitoring result is abnormal, and monitoring cost is reduced. The detection accuracy for dangerous events and potentially dangerous behaviors can be improved, with a detection rate above 90%, greatly reducing accident risks and losses and keeping production or engineering operations safe and orderly.
In one or more alternative embodiments of the invention, the joint processing module 602 is further configured to:
performing target detection on the two-dimensional image, and determining each target object in the two-dimensional image and the detection frame corresponding to each target object, wherein the target objects include persons and engineering facilities;

matching the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame;

and performing person positioning on the point cloud set corresponding to the person to determine the person position information in the target monitoring scene, and performing dangerous area positioning on the point cloud set of each engineering facility to determine each piece of dangerous area information in the target monitoring scene.
In one or more alternative embodiments of the invention, the joint processing module 602 is further configured to:
taking the ground plane as a reference plane, and performing ground plane fitting processing on the three-dimensional point cloud data;
and carrying out space conversion processing on the three-dimensional point cloud data, and determining point clouds corresponding to each detection frame.
In one or more optional embodiments of the invention, the two-dimensional image is acquired based on a camera, and the three-dimensional point cloud data is acquired based on radar;
Accordingly, the joint processing module 602 is further configured to:
projecting the three-dimensional point cloud data to the two-dimensional image according to the transformation matrix corresponding to the radar and the camera, and determining the position of each point cloud in the three-dimensional point cloud data in the two-dimensional image;
and determining each point cloud of the position in the detection frame as a point cloud set corresponding to the detection frame aiming at each detection frame in the two-dimensional image.
In one or more alternative embodiments of the invention, the engineering facilities include mobile facilities and stationary facilities;
the joint processing module 602 is further configured to:
clustering each point cloud in the point cloud set in depth aiming at each point cloud set to obtain a plurality of point cloud clusters corresponding to the point cloud sets;
the point cloud cluster closest to the camera and/or the radar in the point cloud clusters corresponding to the person is used as a first target point cloud cluster;
determining the position information of the clustering center of the first target point cloud cluster as personnel position information;
for each mobile facility, taking the point cloud cluster closest to the camera and/or the radar in the point cloud clusters corresponding to the mobile facility as a second target point cloud cluster; determining the width of the mobile facility according to the second target point cloud cluster; determining dangerous area information corresponding to the mobile facility in the target monitoring scene according to the width;
For each stationary facility, the point cloud cluster containing the largest number of point clouds among the clusters corresponding to the stationary facility is taken as a third target point cloud cluster; the working radius and working height of the stationary facility are determined according to the third target point cloud cluster; and the dangerous area information corresponding to the stationary facility in the target monitoring scene is determined according to the working radius and working height.
In one or more optional embodiments of the invention, the two-dimensional image is acquired based on a camera, and the three-dimensional point cloud data is acquired based on radar;
accordingly, the safety monitoring device 600 further comprises a calibration module configured to:
dividing the field of view of the camera into a nine-square grid;
collecting a plurality of images of different poses of a standard black-and-white checkerboard in the nine-grid;
calibrating the camera by using a Zhang calibration method to obtain an internal reference of the camera;
obtaining a calibration image and calibration point cloud data of a calibration scene;
acquiring two-dimensional coordinates of each calibration point in the calibration scene from the calibration image, and acquiring three-dimensional coordinates of each calibration point from the calibration point cloud data;
And determining transformation matrixes corresponding to the radar and the camera according to the internal parameters of the camera, the two-dimensional coordinates and the three-dimensional coordinates of each calibration point, wherein the transformation matrixes are used for determining personnel position information and each dangerous area information in the target monitoring scene.
In one or more alternative embodiments of the invention, the security monitoring device 600 further includes a storage module configured to:
and storing the monitoring result, and sending out a dangerous alarm and uploading an alarm report to the cloud when the monitoring result indicates that the personnel are in a dangerous area.
Fig. 7 illustrates a physical schematic diagram of an electronic device, as shown in fig. 7, which may include: processor 710, communication interface (Communications Interface) 720, memory 730, and communication bus 740, wherein processor 710, communication interface 720, memory 730 communicate with each other via communication bus 740. Processor 710 may invoke logic instructions in memory 730 to perform a security monitoring method comprising: acquiring a two-dimensional image and three-dimensional point cloud data of a target monitoring scene; the two-dimensional image and the three-dimensional point cloud data are subjected to joint positioning to obtain personnel position information and each dangerous area information in the target monitoring scene; and determining whether monitoring results of the personnel in the dangerous area exist according to the personnel position information and the dangerous area information.
Further, the logic instructions in the memory 730 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of performing the security monitoring method provided by the methods described above, the method comprising: acquiring a two-dimensional image and three-dimensional point cloud data of a target monitoring scene; the two-dimensional image and the three-dimensional point cloud data are subjected to joint positioning to obtain personnel position information and each dangerous area information in the target monitoring scene; and determining whether monitoring results of the personnel in the dangerous area exist according to the personnel position information and the dangerous area information.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the security monitoring method provided by the above methods, the method comprising: acquiring a two-dimensional image and three-dimensional point cloud data of a target monitoring scene; the two-dimensional image and the three-dimensional point cloud data are subjected to joint positioning to obtain personnel position information and each dangerous area information in the target monitoring scene; and determining whether monitoring results of the personnel in the dangerous area exist according to the personnel position information and the dangerous area information.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or, of course, by hardware alone. Based on this understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied as a software product stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and comprising several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in each embodiment or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A safety monitoring method, comprising:
acquiring a two-dimensional image and three-dimensional point cloud data of a target monitoring scene;
performing joint positioning on the two-dimensional image and the three-dimensional point cloud data to obtain personnel position information and each piece of dangerous area information in the target monitoring scene;
and determining, according to the personnel position information and each piece of dangerous area information, a monitoring result indicating whether any person is in a dangerous area.
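The decision in the last step reduces to a point-in-region test. A minimal sketch of that check, assuming circular danger areas on the ground plane (the patent does not fix the region shape, and all names here are illustrative, not from the claims):

```python
import math

def person_in_danger_zone(person_xy, zones):
    """Return True if the person's ground-plane position falls in any zone.

    person_xy: (x, y) ground-plane position of the person.
    zones: list of (center_x, center_y, radius) circular danger regions,
           an assumed representation of the "dangerous area information".
    """
    px, py = person_xy
    return any(math.hypot(px - cx, py - cy) <= r for cx, cy, r in zones)

# A person 1 m from a zone centred at the origin with a 2 m radius is inside.
inside = person_in_danger_zone((1.0, 0.0), [(0.0, 0.0, 2.0)])
outside = person_in_danger_zone((5.0, 0.0), [(0.0, 0.0, 2.0)])
```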
2. The safety monitoring method of claim 1, wherein the performing joint positioning on the two-dimensional image and the three-dimensional point cloud data to obtain the personnel position information and each piece of dangerous area information in the target monitoring scene comprises:
performing target detection on the two-dimensional image, and determining each target object in the two-dimensional image and the detection frame corresponding to each target object, wherein the target objects comprise personnel and engineering facilities;
performing matching processing on the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame;
and performing personnel positioning on the point cloud sets corresponding to personnel to determine the personnel position information in the target monitoring scene, and performing dangerous area positioning on the point cloud sets of the engineering facilities to determine each piece of dangerous area information in the target monitoring scene.
3. The safety monitoring method of claim 2, wherein the performing matching processing on the three-dimensional point cloud data comprises:
performing ground plane fitting processing on the three-dimensional point cloud data, taking the ground plane as a reference plane;
and performing spatial transformation processing on the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame.
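The claims do not detail the ground plane fitting step; a common choice for point clouds is a RANSAC-style plane fit. A minimal numpy sketch under that assumption (the threshold, iteration count, and function name are illustrative):

```python
import random
import numpy as np

def fit_ground_plane(points, n_iters=200, threshold=0.05, seed=0):
    """RANSAC-style ground-plane fit.

    points: (N, 3) array of 3D points in metres.
    Returns (unit_normal, d) of the plane n.p + d = 0 supported by the
    largest number of inliers within `threshold` metres.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    best_inliers, best_plane = 0, None
    n = len(points)
    for _ in range(n_iters):
        i, j, k = rng.sample(range(n), 3)
        normal = np.cross(points[j] - points[i], points[k] - points[i])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal @ points[i]
        # Count points within `threshold` of the candidate plane.
        inliers = int(np.sum(np.abs(points @ normal + d) < threshold))
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane
```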
4. The safety monitoring method of claim 3, wherein the two-dimensional image is acquired by a camera and the three-dimensional point cloud data is acquired by a radar;
correspondingly, the performing spatial transformation processing on the three-dimensional point cloud data to determine the point cloud set corresponding to each detection frame comprises:
projecting the three-dimensional point cloud data onto the two-dimensional image according to the transformation matrix between the radar and the camera, and determining the position in the two-dimensional image of each point in the three-dimensional point cloud data;
and, for each detection frame in the two-dimensional image, determining the points whose positions fall within the detection frame as the point cloud set corresponding to that detection frame.
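The projection-and-grouping step of this claim can be sketched as follows, assuming the camera intrinsics and the radar-to-camera transform have been folded into a single 3x4 projection matrix P (a common convention; the function and parameter names are our own, not from the patent):

```python
import numpy as np

def assign_points_to_boxes(points_radar, P, boxes):
    """Project radar points into the image and group them by detection box.

    points_radar: (N, 3) points in the radar frame.
    P: 3x4 radar-to-image projection matrix (intrinsics x extrinsics).
    boxes: list of (x1, y1, x2, y2) detection boxes in pixel coordinates.
    Returns one list of point indices per box (the box's point cloud set).
    """
    pts_h = np.hstack([points_radar, np.ones((len(points_radar), 1))])
    uvw = pts_h @ P.T
    in_front = uvw[:, 2] > 0            # keep only points in front of the camera
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective divide -> pixel positions
    sets = []
    for x1, y1, x2, y2 in boxes:
        mask = (in_front
                & (uv[:, 0] >= x1) & (uv[:, 0] <= x2)
                & (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
        sets.append(np.flatnonzero(mask).tolist())
    return sets
```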
5. The safety monitoring method of claim 4, wherein the engineering facilities comprise mobile facilities and stationary facilities;
and before the performing personnel positioning on the point cloud sets corresponding to personnel to determine the personnel position information in the target monitoring scene, the method further comprises:
for each point cloud set, clustering the points in the point cloud set by depth to obtain a plurality of point cloud clusters corresponding to the point cloud set;
correspondingly, the performing personnel positioning on the point cloud sets corresponding to personnel to determine the personnel position information in the target monitoring scene comprises:
taking, as a first target point cloud cluster, the point cloud cluster closest to the camera and/or the radar among the point cloud clusters corresponding to a person;
and determining the position information of the cluster center of the first target point cloud cluster as the personnel position information;
correspondingly, the performing dangerous area positioning on the point cloud sets of the engineering facilities to determine each piece of dangerous area information in the target monitoring scene comprises:
for each mobile facility, taking, as a second target point cloud cluster, the point cloud cluster closest to the camera and/or the radar among the point cloud clusters corresponding to the mobile facility; determining the width of the mobile facility according to the second target point cloud cluster; and determining, according to the width, the dangerous area information corresponding to the mobile facility in the target monitoring scene;
and, for each stationary facility, taking, as a third target point cloud cluster, the point cloud cluster containing the most points among the point cloud clusters corresponding to the stationary facility; determining the working radius and working height of the stationary facility according to the third target point cloud cluster; and determining, according to the working radius and the working height, the dangerous area information corresponding to the stationary facility in the target monitoring scene.
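The depth clustering and nearest-cluster selection described above can be sketched with a simple 1-D gap clustering along the depth axis. The patent does not name a specific clustering algorithm, so this is one plausible interpretation; the gap threshold and function names are illustrative:

```python
import numpy as np

def depth_clusters(points, gap=0.5):
    """Split a detection box's point cloud into clusters along depth.

    points: (N, 3) array with depth along the last axis (z).
    Points sorted by depth are cut wherever consecutive depths differ by
    more than `gap` metres. Returns a list of (M_i, 3) arrays.
    """
    order = np.argsort(points[:, 2])
    sorted_pts = points[order]
    cuts = np.flatnonzero(np.diff(sorted_pts[:, 2]) > gap) + 1
    return np.split(sorted_pts, cuts)

def nearest_cluster_center(points, gap=0.5):
    """Centroid of the cluster nearest the sensor (smallest mean depth),
    used here as the position estimate for the detected person."""
    clusters = depth_clusters(points, gap)
    nearest = min(clusters, key=lambda c: c[:, 2].mean())
    return nearest.mean(axis=0)
```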
6. The safety monitoring method according to any one of claims 1 to 5, wherein the two-dimensional image is acquired by a camera and the three-dimensional point cloud data is acquired by a radar;
correspondingly, before the acquiring the two-dimensional image and the three-dimensional point cloud data of the target monitoring scene, the method further comprises:
dividing the field of view of the camera into a nine-square grid;
collecting a plurality of images of a standard black-and-white checkerboard in different poses within the nine-square grid;
calibrating the camera by Zhang's calibration method to obtain the intrinsic parameters of the camera;
acquiring a calibration image and calibration point cloud data of a calibration scene;
acquiring the two-dimensional coordinates of each calibration point in the calibration scene from the calibration image, and acquiring the three-dimensional coordinates of each calibration point from the calibration point cloud data;
and determining, according to the intrinsic parameters of the camera and the two-dimensional and three-dimensional coordinates of each calibration point, the transformation matrix between the radar and the camera, the transformation matrix being used to determine the personnel position information and each piece of dangerous area information in the target monitoring scene.
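One standard way to obtain a radar-to-image mapping from 2D-3D calibration point pairs is to estimate a 3x4 projection matrix via the Direct Linear Transform (DLT). The sketch below shows that idea in plain numpy; a production pipeline would more likely use OpenCV's calibrateCamera (Zhang's method) for the intrinsics and solvePnP for the extrinsics, and the names here are our own, not from the patent:

```python
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Estimate the 3x4 projection matrix P (intrinsics times the
    radar-to-camera transform) from 3D-2D calibration pairs via DLT.

    points_3d: (N, 3) calibration point coordinates in the radar frame.
    points_2d: (N, 2) corresponding pixel coordinates, N >= 6,
               points not all coplanar.
    """
    rows = []
    for (x, y, z), (u, v) in zip(points_3d, points_2d):
        xh = [x, y, z, 1.0]             # homogeneous 3D point
        rows.append(xh + [0.0] * 4 + [-u * c for c in xh])
        rows.append([0.0] * 4 + xh + [-v * c for c in xh])
    a = np.asarray(rows)
    # Solution: right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(a)
    return vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Project radar-frame points into the image; returns (N, 2) pixels."""
    pts_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    uvw = pts_h @ P.T
    return uvw[:, :2] / uvw[:, 2:3]
```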
7. The safety monitoring method according to any one of claims 1 to 5, wherein, after the determining of the monitoring result according to the personnel position information and each piece of dangerous area information, the method further comprises:
storing the monitoring result, and, when the monitoring result indicates that a person is in a dangerous area, issuing a danger alarm and uploading an alarm report to the cloud.
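The store-and-upload step might assemble an alarm report like the following before serialization. Every field name and the scene label are illustrative assumptions, not specified by the patent:

```python
import json
import time

def build_alarm_report(person_xy, zone_id, scene="workshop-1"):
    """Assemble the alarm report to be stored locally and uploaded to the
    cloud when a person is detected inside a danger zone. The schema and
    the `scene` label are illustrative, not from the patent."""
    return {
        "scene": scene,
        "zone_id": zone_id,
        "person_position": {"x": person_xy[0], "y": person_xy[1]},
        "timestamp": time.time(),        # epoch seconds at detection time
        "event": "PERSON_IN_DANGER_ZONE",
    }

report = build_alarm_report((1.0, 0.0), zone_id=3)
payload = json.dumps(report)             # serialized body for the upload
```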
8. A safety monitoring device, comprising:
an acquisition module configured to acquire a two-dimensional image and three-dimensional point cloud data of a target monitoring scene;
a joint processing module configured to perform joint positioning on the two-dimensional image and the three-dimensional point cloud data to obtain personnel position information and each piece of dangerous area information in the target monitoring scene;
and a determining module configured to determine, according to the personnel position information and each piece of dangerous area information, a monitoring result indicating whether any person is in a dangerous area.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the safety monitoring method of any one of claims 1 to 7 when executing the program.
10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the safety monitoring method of any one of claims 1 to 7.
CN202311236395.0A 2023-09-25 2023-09-25 Safety monitoring method and device Active CN116973939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311236395.0A CN116973939B (en) 2023-09-25 2023-09-25 Safety monitoring method and device


Publications (2)

Publication Number Publication Date
CN116973939A true CN116973939A (en) 2023-10-31
CN116973939B CN116973939B (en) 2024-02-06

Family

ID=88479989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311236395.0A Active CN116973939B (en) 2023-09-25 2023-09-25 Safety monitoring method and device

Country Status (1)

Country Link
CN (1) CN116973939B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020243962A1 (en) * 2019-06-06 2020-12-10 深圳市大疆创新科技有限公司 Object detection method, electronic device and mobile platform
CN112630792A (en) * 2020-11-30 2021-04-09 深圳供电局有限公司 Power grid transmission line working condition simulation and dangerous point detection method and detection system
CN114296101A (en) * 2018-08-30 2022-04-08 韦奥机器人股份有限公司 Depth sensing computer vision system
CN114444158A (en) * 2020-11-04 2022-05-06 北京瓦特曼科技有限公司 Underground roadway deformation early warning method and system based on three-dimensional reconstruction
KR102405647B1 (en) * 2022-03-15 2022-06-08 헬리오센 주식회사 Space function system using 3-dimensional point cloud data and mesh data
CN115597659A (en) * 2022-09-21 2023-01-13 山东锐翊电力工程有限公司(Cn) Intelligent safety management and control method for transformer substation
CN115661337A (en) * 2022-09-23 2023-01-31 安徽南瑞继远电网技术有限公司 Binocular vision-based three-dimensional reconstruction method for transformer substation operating personnel
CN115760976A (en) * 2022-11-08 2023-03-07 国网江西省电力有限公司超高压分公司 Transformer substation non-contact non-inductive transformation operation risk identification method
CN115908524A (en) * 2022-11-02 2023-04-04 国网山西省电力公司大同供电公司 Construction operation safety control method based on 3D distance perception
CN116206255A (en) * 2023-01-06 2023-06-02 广州纬纶信息科技有限公司 Dangerous area personnel monitoring method and device based on machine vision


Also Published As

Publication number Publication date
CN116973939B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN113345019B (en) Method, equipment and medium for measuring potential hazards of transmission line channel target
CN110674746B (en) Method and device for realizing high-precision cross-mirror tracking by using video spatial relationship assistance, computer equipment and storage medium
CN109978755B (en) Panoramic image synthesis method, device, equipment and storage medium
WO2018028103A1 (en) Unmanned aerial vehicle power line inspection method based on characteristics of human vision
CN110850723B (en) Fault diagnosis and positioning method based on transformer substation inspection robot system
CN112418103B (en) Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision
CN112085003B (en) Automatic recognition method and device for abnormal behaviors in public places and camera equipment
CN109448326B (en) Geological disaster intelligent group defense monitoring system based on rapid image recognition
CN114241298A (en) Tower crane environment target detection method and system based on laser radar and image fusion
CN102622767A (en) Method for positioning binocular non-calibrated space
WO2020135187A1 (en) Unmanned aerial vehicle recognition and positioning system and method based on rgb_d and deep convolutional network
CN113452912A (en) Pan-tilt camera control method, device, equipment and medium for inspection robot
CN105516661B (en) Principal and subordinate's target monitoring method that fisheye camera is combined with ptz camera
CN115019254A (en) Method, device, terminal and storage medium for detecting foreign matter invasion in power transmission area
CN112102395A (en) Autonomous inspection method based on machine vision
CN115760976A (en) Transformer substation non-contact non-inductive transformation operation risk identification method
CN102291568A (en) Accelerated processing method of large-view-field intelligent video monitoring system
CN115880231A (en) Power transmission line hidden danger detection method and system based on deep learning
CN109360269B (en) Ground three-dimensional plane reconstruction method based on computer vision
CN113965733A (en) Binocular video monitoring method, system, computer equipment and storage medium
CN116973939B (en) Safety monitoring method and device
CN110992291B (en) Ranging method, system and storage medium based on three-eye vision
CN112802100A (en) Intrusion detection method, device, equipment and computer readable storage medium
CN115861407A (en) Safe distance detection method and system based on deep learning
CN104618686A (en) Outdoor billboard use condition real-time monitoring method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant