CN114973140A - Dangerous area personnel intrusion monitoring method and system based on machine vision

Info

Publication number: CN114973140A
Application number: CN202210658001.XA
Authority: CN (China)
Prior art keywords: dangerous, person, monitoring image, target, area
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 陈钊, 王斌, 张云, 石志海, 朱旗, 盛津芳, 杨明, 罗婷倚, 韦才超, 刘庆忠, 杜鑫, 廖峰
Current assignee: Guangxi Beitou Highway Construction Investment Group Co ltd; Central South University
Original assignee: Guangxi Beitou Highway Construction Investment Group Co ltd; Central South University
Application filed by: Guangxi Beitou Highway Construction Investment Group Co ltd and Central South University

Classifications

    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects (scenes; scene-specific elements; context or environment of the image)
    • G06N3/04 Neural networks: architecture, e.g. interconnection topology
    • G06N3/08 Neural networks: learning methods
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V2201/07 Target detection (indexing scheme)


Abstract

The invention discloses a machine-vision-based dangerous area personnel intrusion monitoring method and system. A monitoring image of a target construction area is acquired and input into a preset dangerous target detection model to judge whether a dangerous target exists in the target construction area. If a dangerous target exists, its coordinates are acquired and the coordinates of the dangerous area are determined from them; the personnel coordinates in the monitoring image are then identified and compared with the coordinates of the dangerous area to judge whether a person is inside the dangerous area, and if so, an alarm signal is sent to the user. The invention can automatically identify dangerous areas and judge whether a person is inside them, reducing the labor cost of safety supervision and improving monitoring efficiency.

Description

Dangerous area personnel intrusion monitoring method and system based on machine vision
Technical Field
The invention relates to the field of construction site production safety, in particular to a dangerous area personnel intrusion monitoring method and system based on machine vision.
Background
Detection of scene targets in images and video has become a research hotspot in artificial intelligence and computer vision. Production safety has always been a problem of extremely high social concern: nearly a million safety accidents each year place enormous pressure on society and on families. On construction sites, many safety accidents are caused by workers violating regulations. Pits and holes are among the places in industrial construction most likely to injure personnel, and they matter greatly to workers' life safety. Under complex site-construction conditions, with lax management and constructors lacking safety awareness, a pit is extremely dangerous: the slightest inattention near a drop of ten metres or more can easily cause a falling accident. If not properly managed, this brings great losses to construction personnel and construction units.
In conventional site production safety, dangerous areas are typically handled by hanging hazard signs, erecting fences, on-site supervisor oversight, and the like, and each of these methods has its own drawbacks and deficiencies. A hanging hazard sign can easily go unnoticed, so a careless person may mistakenly enter the dangerous area; with no timely reminding and early-warning mechanism in place, a moment's carelessness can very likely cause a great loss. A fence may be deliberately and illegally broken into, damaging the construction area, and if managers do not discover this in time the construction site suffers huge property losses. Stationing supervisors on site wastes manpower, and the supervisors themselves are subject to laxity and carelessness. In general, these methods cannot meet the application and demands of the intelligent construction site in the current environment of high-speed development of artificial intelligence, so their optimization is urgently needed.
Disclosure of Invention
The invention provides a machine-vision-based dangerous area personnel intrusion monitoring method and system to solve the technical problems of low efficiency and high labor cost in existing construction-site safety supervision methods.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a dangerous area personnel intrusion monitoring method based on machine vision comprises the following steps:
acquiring a monitoring image of a target construction area, inputting the monitoring image of the target construction area into a preset dangerous target detection model, and judging whether a dangerous target exists in the target construction area: if the dangerous target exists in the target construction area, acquiring coordinates of the dangerous target, and determining the coordinates of the dangerous area according to the coordinates of the dangerous target;
identifying the personnel coordinates in the monitoring image, comparing the personnel coordinates with the coordinates of the dangerous area, judging whether personnel exist in the dangerous area, and if so, sending an alarm signal to a user.
Preferably, the dangerous target detection model takes the YOLOv4 network as its basic framework; the training samples are monitoring images annotated with dangerous-target classes and detection boxes, the input is a monitoring image, and the output is the monitoring image annotated with the dangerous-target classes and their prediction boxes.
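As an illustration of the training-sample layout, the sketch below parses one annotation line in the normalized `class cx cy w h` format commonly used with YOLO-family detectors; the format and the helper name are assumptions for illustration, not details taken from the patent.

```python
def parse_label_line(line: str, img_w: int, img_h: int):
    """Parse one 'class_id cx cy w h' annotation (coordinates normalised
    to [0, 1]) into a class id and an absolute (x1, y1, x2, y2) box."""
    cls, cx, cy, w, h = line.split()
    cx, cy = float(cx) * img_w, float(cy) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# Example: a hypothetical 'foundation pit' box centred in a 1280x720 frame.
print(parse_label_line("2 0.5 0.5 0.25 0.3", 1280, 720))
```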
Preferably, the monitoring image of the target construction area is input into a preset dangerous target detection model, and whether a dangerous target exists in the target construction area is judged, specifically:
The first step: adjust the image size of the monitoring image to p × p, where p is an integer multiple of 32;

The second step: divide the resized monitoring image into an s × s grid, assign each grid cell B prediction bounding boxes to predict, and train the model through YOLOv4 to obtain the position, the class information c and the confidence corresponding to each bounding box;
the position of each prediction bounding box is recorded as (x, y, w, h), where x and y are the coordinates of the box centre and w and h are its width and height; the confidence is defined as

$$C_i^j = \Pr(\text{object}) \times \mathrm{IOU}_{\text{pred}}^{\text{truth}}$$

where $C_i^j$ is the confidence of the jth prediction box of the ith grid cell, $\Pr(\text{object})$ is the probability that the current prediction box contains a dangerous object, and $\mathrm{IOU}_{\text{pred}}^{\text{truth}}$ is the IOU between the real detection box and the predicted detection box; each grid cell also predicts C conditional class probabilities $\Pr(\text{Class}_i \mid \text{object})$;

the probability of a given class appearing in the prediction box, weighted by how well the prediction box fits the target, is expressed as

$$\Pr(\text{Class}_i \mid \text{object}) \times \Pr(\text{object}) \times \mathrm{IOU}_{\text{pred}}^{\text{truth}} = \Pr(\text{Class}_i) \times \mathrm{IOU}_{\text{pred}}^{\text{truth}}$$

where $\text{Class}_i$ is the ith class;
The third step: normalize the prediction-box position coordinates (x, y, w, h) obtained in the second step to obtain normalized position coordinates (X, Y, W, H);

The fourth step: apply non-maximum suppression to the prediction boxes whose confidence meets the threshold, and annotate the monitoring image with the dangerous-target classes and their prediction boxes.
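A minimal sketch of the fourth step's post-processing, assuming prediction boxes are given in (x1, y1, x2, y2) form; the confidence and IOU thresholds are illustrative values, not figures from the patent.

```python
import numpy as np

def iou(a, b):
    """IOU of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    """Keep boxes whose confidence meets the threshold, then greedily
    suppress lower-scoring boxes that overlap a kept box too much."""
    idx = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thresh]
    keep = []
    while idx:
        best = idx.pop(0)
        keep.append(best)
        idx = [i for i in idx if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep  # indices of the surviving prediction boxes
```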
Preferably, each dangerous target in the target construction area is provided with a corresponding warning mark; the dangerous target detection model identifies each dangerous target by extracting the features of the dangerous target itself and of its corresponding warning mark.
Preferably, the dangerous target includes any one or a combination of the following: high-voltage electrical equipment, articles or places; flammable and explosive articles, equipment or places; and dangerous work areas. Acquiring the coordinates of the dangerous target and determining the coordinates of the dangerous area from them specifically comprises:

extracting the coordinates of the dangerous-target prediction box, determining a safe distance according to the class of the dangerous target, and demarcating a dangerous area centred on the prediction-box coordinates with the safe distance as the radius.
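A sketch of how a dangerous area could be demarcated from a prediction box as just described; the per-class safe distances are placeholder values chosen for illustration.

```python
# Hypothetical safe distances per hazard class, in image pixels.
SAFE_RADIUS = {"high_voltage": 150, "flammable": 200, "work_area": 100}

def danger_zone(pred_box, hazard_class):
    """Return (cx, cy, r): a circular dangerous area centred on the
    dangerous target's prediction box, with the class's safe distance
    as the radius."""
    x1, y1, x2, y2 = pred_box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0, SAFE_RADIUS[hazard_class]

print(danger_zone((400, 300, 520, 460), "high_voltage"))  # (460.0, 380.0, 150)
```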
Preferably, comparing the personnel coordinates with the coordinates of the dangerous area to determine whether a person is within the dangerous area comprises the following steps:

calculating the degree of coincidence between the person and the dangerous area from their coordinates:

$$J_{area} = \frac{|R_{person} \cap R_{riskarea}|}{|R_{person} \cup R_{riskarea}|}$$

where $R_{person}$ is the coordinate range of the person detected in the image by the object detector, $R_{riskarea}$ is the coordinate range of the dangerous area automatically demarcated in the image by the object detector, and $J_{area}$ is their degree of coincidence;

judging, based on this degree of coincidence, whether the person is in the dangerous area through a threshold function:

$$F_{area} = \mathbb{1}[J_{area} \ge t]$$

where $F_{area}$ indicates whether the person is judged to be in the dangerous area, and t is the threshold on the degree of coincidence between the dangerous area and the person in the image.
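A sketch of the coincidence test, under the assumption that both the person and the dangerous area are represented as axis-aligned boxes and that J_area is an intersection-over-union ratio; the threshold t is an illustrative value.

```python
def coincidence(person_box, zone_box):
    """J_area: intersection over union of R_person and R_riskarea,
    both given as (x1, y1, x2, y2)."""
    x1, y1 = max(person_box[0], zone_box[0]), max(person_box[1], zone_box[1])
    x2, y2 = min(person_box[2], zone_box[2]), min(person_box[3], zone_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (person_box[2] - person_box[0]) * (person_box[3] - person_box[1])
    area_z = (zone_box[2] - zone_box[0]) * (zone_box[3] - zone_box[1])
    return inter / (area_p + area_z - inter)

def in_danger_area(person_box, zone_box, t=0.05):
    """F_area = 1[J_area >= t]: 1 if the person is judged to be inside."""
    return int(coincidence(person_box, zone_box) >= t)
```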
Preferably, the method further comprises the following steps:
after judging that a person is in the dangerous area, tracking the person with the DeepSORT target tracking algorithm:

Step 1: allocate the tracking index set Track indexes T = {1, ..., N} and the detection index set Detection indexes D = {1, ..., M}, and initialize the maximum cycle detection frame number A_max; here 1, ..., N index the features of the 1st, ..., Nth person in the previous monitoring image, and 1, ..., M index the features of the 1st, ..., Mth person in the next monitoring image;

Step 2: calculate the cost matrix C = [c_{i,j}] between the features of the ith person in the previous monitoring image and the features of the jth person in the next monitoring image, where i = 1, ..., N and j = 1, ..., M;

Step 3: calculate the cost matrix B = [b_{i,j}] of squared Mahalanobis distances between the position of the tracking box (track) corresponding to the ith person's features in the previous monitoring image, as predicted by the Kalman filter, and the actual detection box (bounding box) corresponding to the jth person's features in the next monitoring image;

Step 4: apply two threshold checks: entries of the cosine cost matrix whose squared Mahalanobis distance between tracking box and detection box exceeds the threshold t^(1) are set to infinity, and entries whose cosine distance exceeds the threshold t^(2) are likewise set to a large value;

Step 5: match tracking boxes to detection boxes with the Hungarian algorithm and return the matching result;

Step 6: screen the matching result and delete matched pairs whose cosine distance is too large;

Step 7: if the current number of cycle detection frames exceeds the maximum A_max, obtain the preliminary matching result; otherwise execute step 2.
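A sketch of steps 4-6: gating the cosine cost matrix with the Mahalanobis matrix and solving the assignment with the Hungarian algorithm via SciPy. The threshold values are assumptions (9.4877 is the 95% chi-square quantile for four degrees of freedom used by the original DeepSORT paper, not a figure given in this patent).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

INF_COST = 1e5  # stands in for "infinity" so the assignment stays solvable

def gated_match(cos_cost, maha_cost, t1=9.4877, t2=0.2):
    """One matching round: rows are the N tracks, columns the M detections.
    cos_cost  -- cosine cost matrix C[i, j]
    maha_cost -- squared-Mahalanobis cost matrix B[i, j]"""
    cost = np.array(cos_cost, dtype=float)
    cost[np.array(maha_cost) > t1] = INF_COST   # motion gate (step 4)
    cost[cost > t2] = INF_COST                  # appearance gate (step 4)
    rows, cols = linear_sum_assignment(cost)    # Hungarian algorithm (step 5)
    # Step 6: drop pairs whose gated cost is still too large.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < INF_COST]
```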
Preferably, the weights of the Hungarian algorithm are formed from the motion matching degree and the appearance matching degree:

calculate the motion matching degree $d^{(1)}(i,j)$ between the motion features of the ith person in the previous monitoring image and those of the jth person in the next monitoring image, computed as

$$d^{(1)}(i,j) = (d_j - y_i)^{\top} S_i^{-1} (d_j - y_i)$$

whose value expresses the motion matching degree between the jth detection box and the ith track; $S_i^{-1}$ is the inverse of the covariance matrix of the track's observation space at the current moment, the track being predicted by the Kalman filter; $d_j$ is the bounding box of the jth detection box; $y_i$ is the track's predicted bounding box at the current moment;

input the motion matching degree $d^{(1)}(i,j)$ into a preset motion-matching-degree threshold function and judge whether the motion features of the ith person in the previous monitoring image are successfully associated with those of the jth person in the next monitoring image;

the motion-matching-degree threshold function is:

$$b^{(1)}_{i,j} = \mathbb{1}[d^{(1)}(i,j) \le t^{(1)}]$$

where $b^{(1)}_{i,j}$ determines the initial matching connection (1 indicates the pair may be associated, 0 that it may not) and $t^{(1)}$ is the threshold set for the motion matching degree; $d^{(1)}(i,j) \le t^{(1)}$ indicates that the motion features of the ith person in the previous monitoring image are successfully associated with those of the jth person in the next monitoring image;
calculate the appearance matching degree $d^{(2)}(i,j)$ between the appearance features of the ith person in the previous monitoring image and those of the jth person in the next monitoring image, computed as

$$d^{(2)}(i,j) = \min\{\, 1 - r_j^{\top} r_k^{(i)} \mid r_k^{(i)} \in \mathcal{R}_i \,\}$$

where $r_j$ is the appearance description factor of the jth detection, $\mathcal{R}_i$ stores the most recent $L_k$ description factors of the ith track, and $r_k^{(i)}$ is the kth appearance description factor of the ith track; the formula gives the minimum cosine distance between the ith track and the jth detection;

input the appearance matching degree $d^{(2)}(i,j)$ into a preset appearance-matching-degree threshold function and judge whether the appearance features of the ith person in the previous monitoring image are successfully associated with those of the jth person in the next monitoring image; the appearance-matching-degree threshold function is:

$$b^{(2)}_{i,j} = \mathbb{1}[d^{(2)}(i,j) \le t^{(2)}]$$

where $t^{(2)}$ is the threshold set for the appearance matching degree; $d^{(2)}(i,j) \le t^{(2)}$ indicates that the appearance features of the ith person in the previous monitoring image are successfully associated with those of the jth person in the next monitoring image;
when both the motion matching degree and the appearance matching degree of the ith person in the previous monitoring image and the jth person in the next monitoring image are successfully associated, calculate their comprehensive matching degree $c_{i,j}$ from the two:

$$c_{i,j} = \lambda d^{(1)}(i,j) + (1-\lambda) d^{(2)}(i,j)$$

where $c_{i,j}$ is the comprehensive matching degree between the ith person in the previous monitoring image and the jth person in the next monitoring image, $\lambda$ is a preset hyper-parameter set according to practical experience, $d^{(1)}(i,j)$ is the motion matching degree and $d^{(2)}(i,j)$ is the appearance matching degree;

from the motion-matching-degree and appearance-matching-degree threshold functions, calculate the comprehensive matching-degree threshold function value $b_{i,j}$ between the ith person in the previous monitoring image and the jth person in the next monitoring image, and judge from it whether the two are successfully associated; if so, the ith person in the previous monitoring image is judged successfully matched to the jth person in the next monitoring image. The comprehensive threshold function is:

$$b_{i,j} = \prod_{m=1}^{2} b^{(m)}_{i,j}$$

and only when $b_{i,j} = 1$ is the initial matching considered successful.
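A small sketch of the combined weighting and the double gate just described; λ and both thresholds are placeholders to be tuned from experience, as the text notes.

```python
def combined_cost(d1, d2, lam=0.5):
    """c_ij = lam * d1 + (1 - lam) * d2, the comprehensive matching degree."""
    return lam * d1 + (1 - lam) * d2

def match_admissible(d1, d2, t1=9.4877, t2=0.2):
    """b_ij = b1 * b2: a pairing is a candidate only if both the motion
    gate and the appearance gate pass."""
    return int(d1 <= t1) * int(d2 <= t2)

# Example: a pair passing both gates.
print(combined_cost(3.2, 0.12), match_admissible(3.2, 0.12))  # 1.66 1
```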
Preferably, when it is judged that there is a person in the dangerous area, the method further includes the steps of:
pushing the intrusion picture to a manager and archiving it for later review; and a warning device arranged at the dangerous-area site issues a warning prompt to drive the intruder away, preventing the intruder from going deeper.
A computer system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method when executing the computer program.
The invention has the following beneficial effects:
1. According to the machine-vision-based dangerous area personnel intrusion monitoring method and system, a monitoring image of a target construction area is acquired and input into a preset dangerous target detection model to judge whether a dangerous target exists in the area; if so, the coordinates of the dangerous target are acquired and the coordinates of the dangerous area determined from them; the personnel coordinates in the monitoring image are then identified and compared with the dangerous-area coordinates to judge whether a person is inside the dangerous area, and if so an alarm signal is sent to the user. The invention can automatically identify dangerous areas and judge whether a person is inside them, reducing the labor cost of safety supervision and improving monitoring efficiency.
2. In the preferred scheme, when a person breaks into a dangerous area the system records the intrusion, making it convenient for managers to review the information later; it also sends timely warnings of the intrusion to the corresponding managers, who can respond and deploy work promptly. An alarm device monitoring the dangerous area sounds a buzzer or produces another response when a person breaks in, deterring the person from going deeper. The method and system apply computer vision to current production and life, freeing manpower, improving efficiency and reducing safety accidents, so producers can carry out their work with more confidence and managers can better control the overall situation of the construction site, thereby better guaranteeing production safety.
3. In the preferred scheme, a single-stage target detection algorithm is adopted: the first-stage region-proposal step is skipped, the class probabilities and position coordinates of objects are generated directly, and the final detection result is obtained in a single detection pass.
4. In the preferred scheme, a deep real-time multi-target tracking method is adopted, so the same target is counted only once, and counting errors under occlusion are well avoided.
In addition to the objects, features and advantages described above, other objects, features and advantages of the present invention are also provided. The present invention will be described in further detail below with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a method for monitoring human intrusion into a dangerous area based on machine vision according to the present invention;
FIG. 2 is a system hardware network architecture in a preferred embodiment of the present invention;
FIG. 3 is a diagram of the Wi-Fi version of the monitoring data transmission hardware in a preferred embodiment of the present invention;
FIG. 4 is a flow chart of IOU matching in a preferred embodiment of the present invention;
FIG. 5 is a flow chart of cascaded matching in a preferred embodiment of the present invention;
fig. 6 is a flow chart of the server obtaining and processing data in the preferred embodiment of the present invention.
Detailed Description
The embodiments of the invention will be described in detail below with reference to the drawings, but the invention can be implemented in many different ways as defined and covered by the claims.
Embodiment one:
as shown in FIG. 1, the invention discloses a dangerous area personnel intrusion monitoring method based on machine vision, which comprises the following steps:
acquiring a monitoring image of a target construction area, inputting the monitoring image of the target construction area into a preset dangerous target detection model, and judging whether a dangerous target exists in the target construction area: if the dangerous target exists in the target construction area, acquiring coordinates of the dangerous target, and determining the coordinates of the dangerous area according to the coordinates of the dangerous target;
identifying the personnel coordinates in the monitoring image, comparing the personnel coordinates with the coordinates of the dangerous area, judging whether personnel exist in the dangerous area, and if so, sending an alarm signal to a user.
In addition, in this embodiment, the present invention further discloses a computer system, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the method are implemented.
The invention can automatically identify the dangerous area and judge whether a person exists in the dangerous area, thereby reducing the labor cost of safety supervision and improving the monitoring efficiency.
Embodiment two:
the second embodiment is an extended embodiment of the first embodiment, and is different from the first embodiment in that specific steps of the dangerous area personnel intrusion monitoring method based on machine vision are refined.
In this embodiment, as shown in fig. 6, a method for automatically generating monitoring of intrusion of people into construction dangerous area based on computer vision is disclosed, which is applied to a system for monitoring intrusion of people into dangerous area based on machine vision as shown in fig. 2 and 3, and the specific implementation steps are as follows:
Step 1: transmit the high-definition camera video stream of each key construction unit of the construction site to a server;

Step 2: from the video stream data obtained in step 1, identify the danger sources of the key construction units in the video with a target detection algorithm, and then generate the corresponding danger areas. Only the video needs to be supplied to the model: the model automatically identifies the danger sources and demarcates the corresponding danger areas from them.
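A minimal sketch of the step 1/step 2 plumbing: pulling frames from a camera stream with OpenCV and handing them to a detector. The stream URL, the frame stride, and the `detect` callable are assumptions for illustration, not part of the patent.

```python
import cv2

def stream_frames(stream_url: str, stride: int = 5):
    """Yield every `stride`-th frame of an RTSP/HTTP camera stream."""
    cap = cv2.VideoCapture(stream_url)
    n = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if n % stride == 0:
            yield frame
        n += 1
    cap.release()

# for frame in stream_frames("rtsp://site-camera-01/stream"):
#     hazards = detect(frame)  # hypothetical YOLOv4 inference call
```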
For the object detector to identify dangerous areas automatically, it must first be determined which areas belong to the dangerous category. Common dangerous areas on construction sites include: high-voltage electrical equipment, articles or places, the high-voltage equipment including transformers and high-voltage boxes; flammable and explosive articles, equipment or places, including oil depots and other storage places for flammable objects; and dangerous operation areas, including foundation pits;
after the dangerous area categories suitable for the construction site are determined, in order to enable a computer to automatically identify dangerous areas, effective data sets of the dangerous areas need to be constructed, and the quality of the data sets determines the accuracy of automatic generation of the dangerous areas by a subsequent computer vision target detection model.
In order to better identify the dangerous area, a specific warning mark is set in the specified dangerous area, and in images extracted by a subsequent camera, the occurrence of the warning mark can also assist the target detection model to perform more accurate and automatic dangerous area division.
After the data set is collected, the model needs to be constructed. For automatic identification of dangerous areas, the YOLOv4 model framework is adopted, specifically as follows:
The first step: acquire each frame of image information transmitted by the camera and adjust the image size to p × p, where p is an integer multiple of 32;

The second step: divide the image obtained in the first step into an s × s grid, assign each grid cell B prediction bounding boxes to predict, and train the model through YOLOv4 to obtain the position, the class information c and the confidence corresponding to each bounding box;
where the position of each bounding box is recorded as (x, y, w, h), x and y being the coordinates of the box centre and w and h its width and height;

the confidence is defined as

$$C_i^j = \Pr(\text{object}) \times \mathrm{IOU}_{\text{pred}}^{\text{truth}}$$

where $C_i^j$ is the confidence of the jth bounding box of the ith grid cell, $\Pr(\text{object})$ is the probability that the current box contains an object, and $\mathrm{IOU}_{\text{pred}}^{\text{truth}}$ is the IOU between the real detection box and the predicted detection box; the IOU matching process is shown in FIG. 4;

each grid cell also predicts C conditional class probabilities $\Pr(\text{Class}_i \mid \text{object})$.

The probability of a given class appearing in the box, weighted by how well the prediction box fits the target, is expressed as

$$\Pr(\text{Class}_i \mid \text{object}) \times \Pr(\text{object}) \times \mathrm{IOU}_{\text{pred}}^{\text{truth}} = \Pr(\text{Class}_i) \times \mathrm{IOU}_{\text{pred}}^{\text{truth}}$$

where $\text{Class}_i$ is the ith class;
The third step: normalize the prediction-box position coordinates (x, y, w, h) obtained in the second step to obtain normalized position coordinates (X, Y, W, H);

The fourth step: apply non-maximum suppression (NMS) to the prediction boxes whose confidence meets the threshold;

The fifth step: after the above processing, the target detection algorithm has identified the coordinates and class of each dangerous target; from the obtained target position, a larger dangerous area containing the target is generated and displayed in the image.
Step 3: bind the dangerous-area information identified in step 2 to the corresponding cameras, and transmit the position information of the dangerous areas to the person target detection model.

Step 4: the person target detection model judges each received picture, determining whether a person is present in it; if so, it judges whether the person's position lies within the dangerous area.

To determine whether the person is inside the dangerous area, a coincidence judgment between the two objects, person and dangerous area, is required.
The coincidence of the two objects is computed as

$$J_{area} = \frac{|R_{person} \cap R_{riskarea}|}{|R_{person} \cup R_{riskarea}|}$$

where $R_{person}$ is the range of the person detected in the image by the object detector, $R_{riskarea}$ is the dangerous area automatically demarcated in the image, and $J_{area}$ is their degree of coincidence;

for this degree of coincidence a threshold function is set:

$$F_{area} = \mathbb{1}[J_{area} \ge t]$$

where $F_{area}$ indicates whether the person is judged to be in the dangerous area, and t is the threshold on the degree of coincidence between the dangerous area and the person in the image.
Step 5: when a person is found in the dangerous area, the system automatically pushes information to the construction-site managers so the event can be handled in time; it records the information of the person intruding into the dangerous area, including the time, the place and a short video clip of the intrusion; and a response mechanism set up near the camera of the dangerous area is triggered when a person breaks in, dissuading the person promptly.

Step 6: after the preceding steps have identified a personnel intrusion at a key construction unit, the intruding persons must be tracked as multiple targets so that each target raises only one alarm per video stream and construction is not disturbed by repeated alarms. To ensure that a target in the same video stream triggers the alarm only once, the invention tracks multiple targets with the DeepSORT algorithm: when a target enters the recognition range of the video, it receives a unique identifier.

In the upper half of the multi-target tracking algorithm, the similarity matrix is computed with an appearance model (ReID) and a motion model (Mahalanobis distance) to obtain a cost matrix, alongside a gating matrix used to limit over-large values in the cost matrix; in the lower half, the cascade-matching data association shown in FIG. 5 is used, so an occluded target can be recovered and the number of ID switches for targets that reappear after occlusion is reduced.
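A sketch of the cascade-matching idea of FIG. 5: tracks are matched in order of how recently they were updated, so briefly occluded targets are recovered before stale tracks compete for detections. The `time_since_update` field, the `cost_matrices` callback and the reuse of `gated_match` from the earlier sketch are assumptions for illustration.

```python
def matching_cascade(tracks, detections, cost_matrices, a_max=30):
    """tracks: objects carrying a `time_since_update` age (in frames);
    cost_matrices: callable (track_ids, det_ids) -> (cosine submatrix,
    squared-Mahalanobis submatrix) for just those tracks/detections."""
    unmatched = list(range(len(detections)))
    matches = []
    for age in range(1, a_max + 1):  # most recently updated tracks match first
        tids = [i for i, trk in enumerate(tracks)
                if trk.time_since_update == age]
        if not tids or not unmatched:
            continue
        cos_sub, maha_sub = cost_matrices(tids, unmatched)
        new = [(tids[r], unmatched[c]) for r, c in gated_match(cos_sub, maha_sub)]
        matches.extend(new)
        taken = {d for _, d in new}
        unmatched = [d for d in unmatched if d not in taken]
    return matches, unmatched  # matched (track, detection) pairs + leftovers
```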
Specifically, the DeepSORT algorithm comprises the following steps:
S61: allocate the tracking index set Track indexes T = {1, ..., N} and the detection index set Detection indexes D = {1, ..., M}, and initialize the maximum cycle detection frame number A_max; here 1, ..., N index the features of the 1st, ..., Nth person in the previous monitoring image, and 1, ..., M index the features of the 1st, ..., Mth person in the next monitoring image;

S62: calculate the cost matrix C = [c_{i,j}] between the features of the ith person in the previous monitoring image and the features of the jth person in the next monitoring image, where i = 1, ..., N and j = 1, ..., M;

S63: calculate the cost matrix B = [b_{i,j}] of squared Mahalanobis distances between the position of the tracking box (track) corresponding to the ith person's features in the previous monitoring image, as predicted by the Kalman filter, and the actual detection box (bounding box) corresponding to the jth person's features in the next monitoring image;

S64: apply two threshold checks: entries of the cosine cost matrix whose squared Mahalanobis distance between tracking box and detection box exceeds the threshold t^(1) are set to infinity, and entries whose cosine distance exceeds the threshold t^(2) are likewise set to a large value;

S65: match tracking boxes to detection boxes with the Hungarian algorithm and return the matching result;
specifically, the weight of the Hungarian algorithm is weighted according to the motion matching degree and the appearance matching degree:
calculate the motion matching degree $d^{(1)}(i,j)$ between the motion features of the ith person in the previous monitoring image and those of the jth person in the next monitoring image, computed as

$$d^{(1)}(i,j) = (d_j - y_i)^{\top} S_i^{-1} (d_j - y_i)$$

whose value expresses the motion matching degree between the jth detection box and the ith track; $S_i^{-1}$ is the inverse of the covariance matrix of the track's observation space at the current moment, the track being predicted by the Kalman filter; $d_j$ is the bounding box of the jth detection box; $y_i$ is the track's predicted bounding box at the current moment;

input the motion matching degree $d^{(1)}(i,j)$ into a preset motion-matching-degree threshold function and judge whether the motion features of the ith person in the previous monitoring image are successfully associated with those of the jth person in the next monitoring image;

the motion-matching-degree threshold function is:

$$b^{(1)}_{i,j} = \mathbb{1}[d^{(1)}(i,j) \le t^{(1)}]$$

where $b^{(1)}_{i,j}$ determines the initial matching connection (1 indicates the pair may be associated, 0 that it may not) and $t^{(1)}$ is the threshold set for the motion matching degree; $d^{(1)}(i,j) \le t^{(1)}$ indicates that the motion features of the ith person in the previous monitoring image are successfully associated with those of the jth person in the next monitoring image;
calculate the appearance matching degree $d^{(2)}(i,j)$ between the appearance features of the ith person in the previous monitoring image and those of the jth person in the next monitoring image, computed as

$$d^{(2)}(i,j) = \min\{\, 1 - r_j^{\top} r_k^{(i)} \mid r_k^{(i)} \in \mathcal{R}_i \,\}$$

where $r_j$ is the appearance description factor of the jth detection, $\mathcal{R}_i$ stores the most recent $L_k$ description factors of the ith track, and $r_k^{(i)}$ is the kth appearance description factor of the ith track; the formula gives the minimum cosine distance between the ith track and the jth detection;

input the appearance matching degree $d^{(2)}(i,j)$ into a preset appearance-matching-degree threshold function and judge whether the appearance features of the ith person in the previous monitoring image are successfully associated with those of the jth person in the next monitoring image; the appearance-matching-degree threshold function is:

$$b^{(2)}_{i,j} = \mathbb{1}[d^{(2)}(i,j) \le t^{(2)}]$$

where $t^{(2)}$ is the threshold set for the appearance matching degree; $d^{(2)}(i,j) \le t^{(2)}$ indicates that the appearance features of the ith person in the previous monitoring image are successfully associated with those of the jth person in the next monitoring image;
when both the motion matching degree and the appearance matching degree of the ith person in the previous monitoring image and the jth person in the next monitoring image are successfully associated, calculate their comprehensive matching degree $c_{i,j}$ from the two:

$$c_{i,j} = \lambda d^{(1)}(i,j) + (1-\lambda) d^{(2)}(i,j)$$

where $c_{i,j}$ is the comprehensive matching degree between the ith person in the previous monitoring image and the jth person in the next monitoring image, $\lambda$ is a preset hyper-parameter set according to practical experience, $d^{(1)}(i,j)$ is the motion matching degree and $d^{(2)}(i,j)$ is the appearance matching degree;

from the motion-matching-degree and appearance-matching-degree threshold functions, calculate the comprehensive matching-degree threshold function value $b_{i,j}$ between the ith person in the previous monitoring image and the jth person in the next monitoring image, and judge from it whether the two are successfully associated; if so, the ith person in the previous monitoring image is judged successfully matched to the jth person in the next monitoring image. The comprehensive threshold function is:

$$b_{i,j} = \prod_{m=1}^{2} b^{(m)}_{i,j}$$

and only when $b_{i,j} = 1$ is the initial matching considered successful.

S66: screen the matching result and delete matched pairs whose cosine distance is too large;

S67: if the current number of cycle detection frames exceeds the maximum A_max, obtain the preliminary matching result; otherwise execute S62.
Step 7: at intervals, re-identify the danger sources and re-demarcate the dangerous areas; if this is required, return to step 4; otherwise end the dangerous-area personnel intrusion detection work.

As shown in FIG. 1, the method first acquires the monitoring video stream information of the key construction units and transmits the acquired video streams to the server. Because the construction-site environment is complex, with construction materials, pedestrians, automobiles, construction tools and other objects present, and because construction safety problems demand real-time performance, the server detects pedestrians in the video with a single-stage target detection technique, identifying them and marking them specially. So that each target is uniquely identified in the video stream and the alarm is not triggered repeatedly, the identified targets must also be tracked. Through these techniques it is finally judged whether workers have entered a dangerous area, and if so the response processing operations are carried out.

In summary, the computer-vision-based method and system for automatically generated construction dangerous-area intrusion monitoring supervise safe construction on the site: the dangerous areas of the construction site are demarcated by computer vision technology, and when the person target detection algorithm finds that someone has entered one of those areas, corresponding measures are taken. In this scheme, computer vision technology is combined with site safety to form a feedback system: intrusions recognised by the server drive the management and control of the key construction units of the whole site, so constructors can carry out safe operations with more peace of mind, the construction safety of the whole project is correspondingly guaranteed, and the operational efficiency of construction is improved.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A dangerous area personnel intrusion monitoring method based on machine vision is characterized by comprising the following steps:
acquiring a monitoring image of a target construction area, inputting the monitoring image of the target construction area into a preset dangerous target detection model, and judging whether a dangerous target exists in the target construction area: if the dangerous target exists in the target construction area, acquiring coordinates of the dangerous target, and determining the coordinates of the dangerous area according to the coordinates of the dangerous target;
identifying the personnel coordinates in the monitoring image, comparing the personnel coordinates with the coordinates of the dangerous area, judging whether personnel exist in the dangerous area, and if so, sending an alarm signal to a user.
2. The dangerous area personnel intrusion monitoring method based on machine vision according to claim 1, wherein the dangerous target detection model takes the YOLOv4 network as its basic framework, the training samples are monitoring images annotated with dangerous-target classes and detection boxes, the input is a monitoring image, and the output is the monitoring image annotated with the dangerous-target classes and their prediction boxes.
3. The machine vision-based dangerous area personnel intrusion monitoring method according to claim 2, wherein inputting the monitoring image of the target construction area into the preset dangerous target detection model and judging whether a dangerous target exists in the target construction area specifically comprises:

The first step: adjust the image size of the monitoring image to p × p, where p is an integer multiple of 32;

The second step: divide the resized monitoring image into an s × s grid, assign each grid cell B prediction bounding boxes to predict, and train the model through YOLOv4 to obtain the position, the class information c and the confidence corresponding to each bounding box;

the position of each prediction bounding box is recorded as (x, y, w, h), where x and y are the coordinates of the box centre and w and h are its width and height; the confidence is defined as

$$C_i^j = \Pr(\text{object}) \times \mathrm{IOU}_{\text{pred}}^{\text{truth}}$$

where $C_i^j$ is the confidence of the jth prediction bounding box of the ith grid cell, $\Pr(\text{object})$ is the probability that the current prediction bounding box contains a dangerous object, and $\mathrm{IOU}_{\text{pred}}^{\text{truth}}$ is the IOU between the real detection box and the predicted detection box; each grid cell also predicts C conditional class probabilities $\Pr(\text{Class}_i \mid \text{object})$;

the probability of a given class appearing in the prediction box, weighted by how well the prediction box fits the target, is expressed as

$$\Pr(\text{Class}_i \mid \text{object}) \times \Pr(\text{object}) \times \mathrm{IOU}_{\text{pred}}^{\text{truth}} = \Pr(\text{Class}_i) \times \mathrm{IOU}_{\text{pred}}^{\text{truth}}$$

where $\text{Class}_i$ is the ith class;

The third step: normalize the prediction-box position coordinates (x, y, w, h) obtained in the second step to obtain normalized position coordinates (X, Y, W, H);

The fourth step: apply non-maximum suppression to the prediction boxes whose confidence meets the threshold, and annotate the monitoring image with the dangerous-target classes and their prediction boxes.
4. The dangerous area personnel intrusion monitoring method based on machine vision according to claim 3, wherein each dangerous target in the target construction area is provided with a corresponding warning mark; the dangerous target detection model identifies each dangerous target by extracting the features of the dangerous target itself and of its corresponding warning mark.

5. The dangerous area personnel intrusion monitoring method based on machine vision according to claim 4, wherein the dangerous target comprises any one or a combination of the following: high-voltage electrical equipment, articles or places; flammable and explosive articles, equipment or places; and dangerous work areas; and wherein acquiring the coordinates of the dangerous target and determining the coordinates of the dangerous area from them specifically comprises:

extracting the coordinates of the dangerous-target prediction box, determining a safe distance according to the class of the dangerous target, and demarcating a dangerous area centred on the prediction-box coordinates with the safe distance as the radius.
6. The machine vision-based dangerous area personnel intrusion monitoring method according to claim 5, wherein comparing the personnel coordinates with the coordinates of the dangerous area to determine whether a person is within the dangerous area comprises the following steps:

calculating the degree of coincidence between the person and the dangerous area from their coordinates:

$$J_{area} = \frac{|R_{person} \cap R_{riskarea}|}{|R_{person} \cup R_{riskarea}|}$$

where $R_{person}$ is the coordinate range of the person detected in the image by the object detector, $R_{riskarea}$ is the coordinate range of the dangerous area automatically demarcated in the image by the object detector, and $J_{area}$ is their degree of coincidence;

judging, based on this degree of coincidence, whether the person is in the dangerous area through a threshold function:

$$F_{area} = \mathbb{1}[J_{area} \ge t]$$

where $F_{area}$ indicates whether the person is judged to be in the dangerous area, and t is the threshold on the degree of coincidence between the dangerous area and the person in the image.
7. The machine vision-based dangerous area personnel intrusion monitoring method according to claim 6, further comprising the following steps:

after judging that a person is in the dangerous area, tracking the person with the DeepSORT target tracking algorithm:

Step 1: allocate the tracking index set Track indexes T = {1, ..., N} and the detection index set Detection indexes D = {1, ..., M}, and initialize the maximum cycle detection frame number A_max; here 1, ..., N index the features of the 1st, ..., Nth person in the previous monitoring image, and 1, ..., M index the features of the 1st, ..., Mth person in the next monitoring image;

Step 2: calculate the cost matrix C = [c_{i,j}] between the features of the ith person in the previous monitoring image and the features of the jth person in the next monitoring image, where i = 1, ..., N and j = 1, ..., M;

Step 3: calculate the cost matrix B = [b_{i,j}] of squared Mahalanobis distances between the position of the tracking box (track) corresponding to the ith person's features in the previous monitoring image, as predicted by the Kalman filter, and the actual detection box (bounding box) corresponding to the jth person's features in the next monitoring image;

Step 4: apply two threshold checks: entries of the cosine cost matrix whose squared Mahalanobis distance between tracking box and detection box exceeds the threshold t^(1) are set to infinity, and entries whose cosine distance exceeds the threshold t^(2) are likewise set to a large value;

Step 5: match tracking boxes to detection boxes with the Hungarian algorithm and return the matching result;

Step 6: screen the matching result and delete matched pairs whose cosine distance is too large;

Step 7: if the current number of cycle detection frames exceeds the maximum A_max, obtain the preliminary matching result; otherwise execute step 2.
8. The dangerous area personnel intrusion monitoring method based on machine vision as claimed in claim 7, wherein the weight of Hungarian algorithm is weighted according to the degree of motion matching and the degree of appearance matching:
calculating the motion matching degree between the motion characteristics of the ith person in the previous monitoring image and the motion characteristics of the jth person in the next monitoring image
Figure FDA0003689147130000033
Wherein the degree of motion matching d (1) The calculation formula is as follows:
Figure FDA0003689147130000034
wherein d is (1) (i, j) 1 represents that the ith person in the previous monitoring image and the jth person in the next monitoring image are wired, 0 represents wireless, and the expression value represents the motion matching degree between the jth detection frame and the ith track;
Figure FDA0003689147130000035
the method is characterized in that the method is an inverse matrix of a covariance matrix of an observation space at the current moment, wherein the track is obtained by prediction of a Kalman filter; d j Is the bounding box of the jth detection box; y is i Is the predicted bounding box of the track at the current time;
inputting the motion matching degree d^(1)(i,j) into a preset motion matching degree threshold function, and judging whether the association between the motion features of the ith person in the previous frame of monitoring image and the motion features of the jth person in the next frame of monitoring image is successful;

wherein the motion matching degree threshold function is:

b^(1)_{i,j} = 1[d^(1)(i,j) ≤ t^(1)]

wherein b^(1)_{i,j} is used to determine the initial matching connection, 1[·] is the indicator function, and t^(1) is the threshold set for the motion matching degree; when d^(1)(i,j) ≤ t^(1), the motion feature of the ith person in the previous frame of monitoring image is successfully associated with the motion feature of the jth person in the next frame of monitoring image;
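As an illustrative aside: a minimal sketch of the motion matching degree and its gate, assuming the box state is the usual 4-dimensional measurement (center x, center y, aspect ratio, height) and that S_i comes from the Kalman filter's predicted observation covariance; the threshold 9.4877 (the 0.95 chi-square quantile with 4 degrees of freedom) is a common choice, not one fixed by the claim.

```python
import numpy as np

def motion_matching_degree(d_j, y_i, S_i):
    # d^(1)(i,j) = (d_j - y_i)^T S_i^(-1) (d_j - y_i): squared Mahalanobis
    # distance between the j-th detection box and the i-th predicted track box.
    diff = np.asarray(d_j, dtype=float) - np.asarray(y_i, dtype=float)
    return float(diff @ np.linalg.solve(S_i, diff))  # solve avoids an explicit inverse

def motion_gate(d1, t1=9.4877):
    # Threshold function b^(1)_{i,j} = 1[d^(1)(i,j) <= t^(1)].
    return 1 if d1 <= t1 else 0
```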
calculating the appearance matching degree d^(2)(i,j) between the appearance features of the ith person in the previous frame of monitoring image and the appearance features of the jth person in the next frame of monitoring image, wherein the appearance matching degree d^(2)(i,j) is calculated as:

d^(2)(i,j) = min{ 1 − r_j^T r_i^(k) | r_i^(k) ∈ R_i }

wherein r_j is the appearance description factor of the jth detection frame; R_i = { r_i^(k), k = 1, ..., L_k } is the gallery storing the latest L_k description factors of the ith track; r_i^(k) is the kth appearance description factor of the ith track; the above formula represents the minimum cosine distance between the ith track and the jth detection frame;
inputting the appearance matching degree d^(2)(i,j) into a preset appearance matching degree threshold function, and judging whether the appearance features of the ith person in the previous frame of monitoring image and the appearance features of the jth person in the next frame of monitoring image are successfully associated, wherein the appearance matching degree threshold function is:

b^(2)_{i,j} = 1[d^(2)(i,j) ≤ t^(2)]

wherein t^(2) is the threshold set for the appearance matching degree; when d^(2)(i,j) ≤ t^(2), the appearance feature of the ith person in the previous frame of monitoring image is successfully associated with the appearance feature of the jth person in the next frame of monitoring image;
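Another illustrative aside: a minimal sketch of the appearance matching degree against a track's descriptor gallery, assuming the re-identification descriptors are unit-normalized so that 1 − r_j·r_i^(k) is the cosine distance; t2 = 0.2 is a placeholder value, not one fixed by the claim.

```python
import numpy as np

def appearance_matching_degree(r_j, gallery_R_i):
    # d^(2)(i,j) = min over the gallery of 1 - r_j^T r_i^(k), i.e. the smallest
    # cosine distance between the detection descriptor and the track's last
    # L_k stored descriptors (rows of gallery_R_i, assumed unit-normalized).
    gallery = np.asarray(gallery_R_i, dtype=float)   # shape (L_k, feature_dim)
    return float(np.min(1.0 - gallery @ np.asarray(r_j, dtype=float)))

def appearance_gate(d2, t2=0.2):
    # Threshold function b^(2)_{i,j} = 1[d^(2)(i,j) <= t^(2)].
    return 1 if d2 <= t2 else 0
```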
when both the motion matching degree and the appearance matching degree between the ith person in the previous frame of monitoring image and the jth person in the next frame of monitoring image are successfully associated, calculating the comprehensive matching degree c_{i,j} between them from the motion matching degree and the appearance matching degree, wherein the comprehensive matching degree c_{i,j} is calculated as:

c_{i,j} = λ·d^(1)(i,j) + (1 − λ)·d^(2)(i,j)

wherein c_{i,j} is the comprehensive matching degree between the features of the ith person in the previous frame of monitoring image and the jth person in the next frame of monitoring image; λ is a preset hyper-parameter set according to practical experience; d^(1)(i,j) is the motion matching degree; and d^(2)(i,j) is the appearance matching degree;
calculating, from the motion matching degree threshold function and the appearance matching degree threshold function, the comprehensive matching degree threshold function value b_{i,j} between the features of the ith person in the previous frame of monitoring image and the jth person in the next frame of monitoring image, and judging from this value whether the two are successfully associated; if so, the ith person in the previous frame of monitoring image is preliminarily matched with the jth person in the next frame of monitoring image, wherein the comprehensive matching degree threshold function is:

b_{i,j} = Π_{m=1}^{2} b^(m)_{i,j}

wherein the initial matching is considered successful only when b_{i,j} = 1.
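A short illustrative sketch tying the two gates together: the comprehensive matching degree c_{i,j} and the combined gate b_{i,j}, with λ and the thresholds as placeholder hyper-parameters (the claim leaves their values to practical experience).

```python
def combined_matching(d1, d2, lam=0.5, t1=9.4877, t2=0.2):
    # c_{i,j} = lam * d^(1)(i,j) + (1 - lam) * d^(2)(i,j)
    c_ij = lam * d1 + (1.0 - lam) * d2
    # b_{i,j} = b^(1)_{i,j} * b^(2)_{i,j}: the match is admissible only if
    # both the motion gate and the appearance gate pass (b_ij == 1).
    b_ij = (1 if d1 <= t1 else 0) * (1 if d2 <= t2 else 0)
    return c_ij, b_ij
```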
9. The dangerous area personnel intrusion monitoring method based on machine vision as claimed in claim 1, wherein, when it is determined that a person is within the dangerous area, the method further comprises the steps of:
pushing the intrusion picture to a manager and archiving it for later viewing; and causing a warning device arranged at the dangerous area site to issue a warning prompt to drive away the intruder, thereby preventing the intruder from advancing further.
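Purely as an illustration of claim 9's alert flow (the patent prescribes no implementation): a sketch that archives the intrusion picture and triggers push and alarm hooks; notify_manager and sound_alarm are hypothetical stand-ins, not APIs named by the source.

```python
import datetime
import pathlib

def notify_manager(picture: bytes) -> None:
    # Hypothetical push hook; a real system might send an app or SMS notification.
    print(f"[push] intrusion picture ({len(picture)} bytes) sent to the manager")

def sound_alarm() -> None:
    # Hypothetical on-site warning device hook (e.g. a relay-driven siren).
    print("[alarm] warning device triggered to drive away the intruder")

def handle_intrusion(picture: bytes, archive_dir: str = "intrusions") -> None:
    # Archive the intrusion picture for later viewing, then push and alarm.
    folder = pathlib.Path(archive_dir)
    folder.mkdir(exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    (folder / f"intrusion_{stamp}.jpg").write_bytes(picture)
    notify_manager(picture)
    sound_alarm()
```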
10. A computer system comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the steps of the method of any one of claims 1 to 9.
CN202210658001.XA 2022-06-10 2022-06-10 Dangerous area personnel intrusion monitoring method and system based on machine vision Pending CN114973140A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210658001.XA CN114973140A (en) 2022-06-10 2022-06-10 Dangerous area personnel intrusion monitoring method and system based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210658001.XA CN114973140A (en) 2022-06-10 2022-06-10 Dangerous area personnel intrusion monitoring method and system based on machine vision

Publications (1)

Publication Number Publication Date
CN114973140A 2022-08-30

Family

ID=82962351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210658001.XA Pending CN114973140A (en) 2022-06-10 2022-06-10 Dangerous area personnel intrusion monitoring method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN114973140A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782675B (en) * 2022-03-31 2022-11-25 江苏预立新能源科技有限公司 Dynamic item pricing method and system in safety technical service field
CN114782675A (en) * 2022-03-31 2022-07-22 江苏预立新能源科技有限公司 Dynamic item pricing method and system in safety technical service field
CN115190277A (en) * 2022-09-08 2022-10-14 中达安股份有限公司 Safety monitoring method, device and equipment for construction area and storage medium
CN116206255B (en) * 2023-01-06 2024-02-20 广州纬纶信息科技有限公司 Dangerous area personnel monitoring method and device based on machine vision
CN116206255A (en) * 2023-01-06 2023-06-02 广州纬纶信息科技有限公司 Dangerous area personnel monitoring method and device based on machine vision
CN116311361A (en) * 2023-03-02 2023-06-23 北京化工大学 Dangerous source indoor staff positioning method based on pixel-level labeling
CN116311361B (en) * 2023-03-02 2023-09-15 北京化工大学 Dangerous source indoor staff positioning method based on pixel-level labeling
CN116977920B (en) * 2023-06-28 2024-04-12 三峡科技有限责任公司 Critical protection method for multi-zone type multi-reasoning early warning mechanism
CN116977920A (en) * 2023-06-28 2023-10-31 三峡科技有限责任公司 Critical protection method for multi-zone type multi-reasoning early warning mechanism
CN116797031A (en) * 2023-08-25 2023-09-22 深圳市易图资讯股份有限公司 Safety production management method and system based on data acquisition
CN116797031B (en) * 2023-08-25 2023-10-31 深圳市易图资讯股份有限公司 Safety production management method and system based on data acquisition
CN117549330A (en) * 2024-01-11 2024-02-13 四川省铁路建设有限公司 Construction safety monitoring robot system and control method
CN117549330B (en) * 2024-01-11 2024-03-22 四川省铁路建设有限公司 Construction safety monitoring robot system and control method
CN117557201A (en) * 2024-01-12 2024-02-13 国网山东省电力公司菏泽供电公司 Intelligent warehouse safety management system and method based on artificial intelligence
CN117557201B (en) * 2024-01-12 2024-04-12 国网山东省电力公司菏泽供电公司 Intelligent warehouse safety management system and method based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN114973140A (en) Dangerous area personnel intrusion monitoring method and system based on machine vision
CN111144291B (en) Video monitoring area personnel intrusion discrimination method and device based on target detection
JP2012518846A (en) System and method for predicting abnormal behavior
CN209543514U (en) Monitoring and alarm system based on recognition of face
CN113743256B (en) Intelligent early warning method and device for site safety
CN109614906A (en) A kind of security system and security alarm method based on deep learning
CN207909318U (en) Article leaves intelligent detecting prewarning system in a kind of high risk zone
CN117035419B (en) Intelligent management system and method for enterprise project implementation
CN110674761A (en) Regional behavior early warning method and system
CN114358980A (en) Intelligent community property management system and method based on Internet of things
CN111553305B (en) System and method for identifying illegal videos
CN112580470A (en) City visual perception method and device, electronic equipment and storage medium
CN113506416A (en) Engineering abnormity early warning method and system based on intelligent visual analysis
CN113191273A (en) Oil field well site video target detection and identification method and system based on neural network
CN210222962U (en) Intelligent electronic fence system
CN115457449A (en) Early warning system based on AI video analysis and monitoring security protection
CN112885013A (en) Monitoring and early warning method and device and readable storage medium
CN111860187A (en) High-precision worn mask identification method and system
CN115567690A (en) Intelligent monitoring system capable of automatically identifying dangerous points of field operation
CN116862244A (en) Industrial field vision AI analysis and safety pre-warning system and method
CN114329106A (en) Big data analysis system and method for community
CN114677640A (en) Intelligent construction site safety monitoring system and method based on machine vision
CN114067396A (en) Vision learning-based digital management system and method for live-in project field test
CN110533889B (en) Sensitive area electronic equipment monitoring and positioning device and method
CN117423049A (en) Method and system for tracking abnormal event by real-time video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination