CN114495191A - Combined safety helmet wearing real-time detection method based on end side - Google Patents

Combined safety helmet wearing real-time detection method based on end side

Info

Publication number
CN114495191A
CN114495191A (application CN202111446012.3A)
Authority
CN
China
Prior art keywords
face
wearing
safety helmet
detection
similarity transformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111446012.3A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Eeasy Electronic Tech Co ltd
Original Assignee
Zhuhai Eeasy Electronic Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Eeasy Electronic Tech Co ltd filed Critical Zhuhai Eeasy Electronic Tech Co ltd
Priority to CN202111446012.3A priority Critical patent/CN114495191A/en
Publication of CN114495191A publication Critical patent/CN114495191A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F18/00 Pattern recognition
                    • G06F18/20 Analysing
                        • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
                        • G06F18/22 Matching criteria, e.g. proximity measures
                        • G06F18/24 Classification techniques
                            • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                            • G06N3/045 Combinations of networks
                        • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an end-side-based combined real-time detection method for safety helmet wearing, which uses a combined convolutional neural network structure to detect and identify helmet wearing in real time through the combination of multiple modules. The combined structure chains several network modules in sequence: a face detection module, an image similarity transformation module and a safety helmet detection module, which together perform face detection, face region calibration and multi-class classification of helmet wearing. The approach achieves high accuracy with a low false detection rate, and the model runs in real time on the end side. Each module can be trained and tested separately, which reduces training complexity and running time; the models are loosely coupled and can be freely replaced as required. The effectiveness and accuracy of the method have been verified on a self-built data set and in practical application.

Description

Combined safety helmet wearing real-time detection method based on end side
Technical Field
The invention relates to the field of computer vision, and in particular to an end-side-based combined real-time detection method for safety helmet wearing.
Background
The role of the safety helmet in the workplace is self-evident. China's Work Safety Law stipulates that "production and business operation entities must provide workers with labor protection articles that meet national or industry standards, and supervise and educate workers to wear and use them according to the rules of use." Industry standards have also been established to safeguard workers, for example: the JGJ59-99 standard for safety inspection of building construction stipulates that a safety helmet must be worn when entering a construction site, and the operating specifications of various industries likewise impose strict requirements on the wearing of safety helmets.
In practice, however, even with national laws and regulations and strengthened safety management within enterprises, not everyone complies voluntarily. Some workers assume that staying on a construction site for only a few minutes without a helmet carries no risk, yet tragedies often happen at exactly such moments, much as with passengers who dislike wearing seat belts. These accidents show that the key to the problem is issuing timely and accurate warnings to workers who are not wearing safety helmets. In current management practice, violations are stopped by supervisors patrolling or watching video feeds, but the monitored sites are numerous and widespread, manpower and material resources are insufficient, and omissions are inevitable.
Most existing approaches to helmet-wearing detection rely on very large detection networks that require specially annotated data. This raises two problems: first, a separate detection network dedicated to helmet wearing demands end-side computing power; second, it requires annotation of a dedicated data set.
Disclosure of Invention
To solve the technical problems described in the background section, the invention provides an end-side-based combined real-time detection method for safety helmet wearing.
To achieve this purpose, the technical solution of the invention is as follows:
An end-side-based combined real-time detection method for safety helmet wearing, in which a combined convolutional neural network structure achieves real-time detection and identification of helmet wearing through the combination of multiple modules;
the combined convolutional neural network structure chains a plurality of network modules in sequence, and comprises a face detection module, an image similarity transformation module and a safety helmet detection module;
the face detection module is used for detecting a face in the input image and outputting face key point information;
the image similarity transformation module takes the face key point information output by the face detection module as input, and obtains a face region picture through similarity transformation as output;
the safety helmet detection module takes the human face region picture output by the image similarity transformation module as input, classifies the human face region picture and outputs a safety helmet wearing detection result.
Furthermore, the training data sets, training procedures and evaluation metrics of the face detection module, the image similarity transformation module and the safety helmet detection module are independent of each other.
Further, the safety helmet detection module adopts a pre-trained ResNet18 model, and the training sample pictures are divided into four categories: 1. safety helmet; 2. other hard helmet; 3. ordinary hat or other partial covering; 4. bare head.
Furthermore, the face detection module adopts the RetinaFace algorithm, and the output face key point information comprises the face bounding box and the coordinates of the two eyes, the nose and the two mouth corners.
Further, the image similarity transformation module performs similarity transformation to obtain 224 × 224 face region pictures.
Compared with the prior art, the invention has the beneficial effects that:
the invention adopts the modularized combination detection network, and the detection and the classification combination are carried out, thereby being beneficial to being embedded into the service logic of the existing face detection network, having small classification network model, high running speed at the end side and small influence and cost on the existing system; the network adopts a combined structure of RetinaFace + SimiarityTransform + ResNet18, realizes the combination interfacing between models through the similarity transformation of key points of human faces to pictures, can be trained respectively in training, even can adopt different models optimized according to specific hardware, accelerates the reasoning speed, and can realize the real-time running speed under the condition of keeping high accuracy; the requirement of model training on hardware is low, and each person can use 1080Ti or 2080TiGPU to train a super-fast and accurate target detection classifier; the method realizes the wearing detection problem of the safety helmet by a multi-classification method, and the AUC index is improved compared with a two-classification method.
Drawings
Fig. 1 is a general framework flowchart of a method for real-time detection of wearing of an end-side-based combined safety helmet according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of the helmet-wearing data set categories used in embodiments of the present invention;
Fig. 3 is a diagram of face region calibration by similarity transformation.
Detailed Description
Example:
the technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Because of end-side computational power limitations, multiple detection networks cannot be run in real time. The present solution therefore detects helmet wearing by combining an existing face detection network with a helmet-wearing classification network.
Referring to Fig. 1, the end-side-based combined real-time detection method for helmet wearing provided in this embodiment uses a combined convolutional neural network structure to detect and identify helmet wearing in real time through the combination of multiple modules. The combined structure chains a plurality of network modules in sequence and comprises a face detection module, an image similarity transformation module and a safety helmet detection module; the training data set, training procedure and metrics of each stage's module are independent of the others.
The face detection module detects faces in the input image and outputs face key point information comprising the face bounding box and the coordinates of the two eyes, the nose and the two mouth corners; it uses a RetinaFace network, which has a high recall rate.
The image similarity transformation module takes the face key point information output by the face detection module as input, and applies a similarity transformation that maps the key points to five reference points of a 224 × 224 image, namely (80.4419, 133.54445), (133.2977, 133.2521), (107.0378, 163.6049), (85.32395, 194.54825) and (129.09485, 194.30615), to obtain the face region picture as output. Calibrating the image with a similarity transformation reduces the influence of different viewing angles and positions on network recognition.
The safety helmet detection module takes the face region picture output by the image similarity transformation module as input, classifies it, and outputs the helmet-wearing detection result. The four classes are: wearing a safety helmet, wearing another hard helmet, wearing a hat or other covering, and an uncovered face. Compared with a simple binary judgment, this gives a lower false recognition rate.
In other words, the combined network method obtains face region and key point information from an existing face detection network, uses this information to apply a similarity transformation to the original picture to produce the classification network's input, and thereby detects whether each person is wearing a safety helmet. The method makes maximum use of the model already deployed on the end side: only an image-similarity module and a small classification network are added, connected in series into a pipelined image-processing flow, which saves computing power without changing the existing architecture.
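The pipelined flow described above can be sketched as follows. This is a minimal illustrative sketch in Python; the functions detect_faces, align_face and classify_head are hypothetical placeholders standing in for the RetinaFace detector, the similarity-transformation step and the ResNet18 classifier, not APIs defined by this patent or by any particular library.

```python
# Four head classes used throughout this description.
CLASSES = ["safety_helmet", "other_hard_helmet", "hat_or_partial_cover", "bare_head"]

def check_helmets(frame, detect_faces, align_face, classify_head):
    """Serial pipeline: face detection -> similarity-transform calibration -> 4-way classification.
    Returns one (box, label, alert) tuple per detected face."""
    results = []
    for box, landmarks in detect_faces(frame):          # stage 1: face boxes + 5 key points
        face_224 = align_face(frame, landmarks)         # stage 2: 224 x 224 calibrated face crop
        label = CLASSES[classify_head(face_224)]        # stage 3: helmet-wearing class index
        results.append((box, label, label != "safety_helmet"))  # alert when no safety helmet
    return results
```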
Specifically, in this implementation, the face detection module adopts the RetinaFace algorithm. RetinaFace consists of four parts: a backbone that extracts image features; an FPN that fuses multi-scale feature maps; SSH modules that improve small-face detection by introducing context modeling on the feature maps; and the prediction heads. This method keeps only two prediction branches, one outputting the face confidence together with the bounding-box regression and the other outputting the face key point information.
L = L_cls(p_i, p_i*) + λ1 p_i* L_box(t_i, t_i*) + λ2 p_i* L_pts(l_i, l_i*) + λ3 p_i* L_pixel
The first term, L_cls, is the classification loss; the second term, L_box, is the face bounding-box regression loss; the third term, L_pts, is the face key point regression loss; the fourth term of the original RetinaFace, the dense regression loss L_pixel, is cut and not used here. Because the original pre-trained model already performs very well on the data set, only fine-tuning for the perspective relationship of the camera's focal length is needed. The model is fine-tuned for 50 epochs on a self-built, labeled library of ten thousand people. For the specific training method, refer to [1] "RetinaFace: Single-stage Dense Face Localisation in the Wild".
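As a rough illustration of how the three retained loss terms could be combined during fine-tuning, the following PyTorch sketch assumes per-anchor predictions and targets that have already been matched by the usual anchor-assignment step; the tensor names and the weights lambda1 and lambda2 are assumptions made for illustration, not values taken from the patent.

```python
import torch
import torch.nn.functional as F

def retinaface_style_loss(cls_logits, box_pred, pts_pred,
                          cls_target, box_target, pts_target,
                          lambda1=0.25, lambda2=0.1):
    """Three-term multi-task loss: L_cls + lambda1*L_box + lambda2*L_pts.
    The dense-regression term of the original RetinaFace is dropped, as described above.
    Regression terms are only evaluated on positive (face) anchors."""
    l_cls = F.cross_entropy(cls_logits, cls_target)      # L_cls: face / background classification

    pos = cls_target > 0                                 # anchors matched to a face
    if pos.any():
        l_box = F.smooth_l1_loss(box_pred[pos], box_target[pos])   # L_box: box regression
        l_pts = F.smooth_l1_loss(pts_pred[pos], pts_target[pos])   # L_pts: 5-landmark regression
    else:
        l_box = box_pred.sum() * 0.0                     # keep the graph, contribute zero
        l_pts = pts_pred.sum() * 0.0

    return l_cls + lambda1 * l_box + lambda2 * l_pts
```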
The similarity transformation module uses a least-squares method, following [2] "Least-squares estimation of transformation parameters between two point patterns", Shinji Umeyama, PAMI 1991, DOI: 10.1109/34.88573. The similarity transformation matrix T consists of a scale c, a rotation R and a translation t:
T = [ cR  t ]
    [ 0   1 ]
where R, t and c are determined by the following formulas:
R = U S V^T (1)
t = μ_y - c R μ_x (2)
c = (1 / σ_x²) tr(D S) (3)
In these formulas, U and V are the singular vector matrices from the singular value decomposition of the covariance matrix between the point sets before and after calibration, D is the diagonal matrix of sorted singular values, S is an identity-like diagonal matrix (with the last diagonal element set to -1 when a reflection must be corrected), μ_x and μ_y are the coordinate means before and after calibration, σ_x² is the variance of the points before calibration, and tr(·) denotes the matrix trace.
Concretely, a similarity matrix T is computed between the face key point set output by the RetinaFace network and the target key point set, and the similarity transformation is then applied to the original picture to obtain the 224 × 224 input picture for the classification network.
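A minimal numpy/OpenCV sketch of this calibration step is given below. It implements formulas (1)-(3) to estimate T from the five detected key points and the five reference points quoted earlier, then warps the original picture to 224 × 224. The function and variable names are illustrative; an equivalent off-the-shelf routine such as skimage.transform.SimilarityTransform could be used instead.

```python
import cv2
import numpy as np

# Five reference landmarks of the 224 x 224 calibrated face (from the description above).
REF_224 = np.array([[80.4419,   133.54445],
                    [133.2977,  133.2521],
                    [107.0378,  163.6049],
                    [85.32395,  194.54825],
                    [129.09485, 194.30615]], dtype=np.float64)

def umeyama_similarity(src, dst):
    """Least-squares similarity transform mapping src -> dst (Umeyama, PAMI 1991)."""
    n, m = src.shape
    mu_x, mu_y = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_x, dst - mu_y
    sigma_x2 = (src_c ** 2).sum() / n              # variance of the source points
    cov = dst_c.T @ src_c / n                      # covariance between the two point sets
    U, d, Vt = np.linalg.svd(cov)
    S = np.eye(m)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # correct an improper (reflected) rotation
        S[-1, -1] = -1
    R = U @ S @ Vt                                 # formula (1)
    c = np.trace(np.diag(d) @ S) / sigma_x2        # formula (3)
    t = mu_y - c * R @ mu_x                        # formula (2)
    T = np.eye(m + 1)                              # 3 x 3 homogeneous similarity matrix [cR t; 0 1]
    T[:m, :m] = c * R
    T[:m, m] = t
    return T

def align_face(image, landmarks_5x2):
    """Warp the original picture so the detected landmarks land on REF_224."""
    T = umeyama_similarity(np.asarray(landmarks_5x2, dtype=np.float64), REF_224)
    return cv2.warpAffine(image, T[:2], (224, 224))
```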
The helmet classification data set of the helmet-wearing classification module is a self-built data set, split into a training set ("train") of 22190 pictures and a validation set ("val") of 2472 pictures. The sample pictures fall into four categories: 1. safety helmet; 2. other hard helmet; 3. ordinary hat or other partial covering; 4. bare head. The benefit of this classification is that the false recognition rate is reduced while accuracy is maintained. For the non-safety-helmet categories, a large number of samples of other hard helmets and of people wearing hats were added.
With a binary scheme, i.e. helmet worn or not worn, false recognition occurs: a covered head or a worn hat may be recognized as a safety helmet. Two categories are therefore added. Category 2, other hard helmets, covers motorcycle helmets, bicycle helmets, sports helmets and the like; category 3, hats or other coverings, adds samples of various hats, sunglasses, heads occluded by objects, and so on. Adding these adversarial counter-examples to the data set greatly improves the robustness of the network model in real application scenarios and reduces the probability of false recognition.
The classification loss is the multi-class cross-entropy loss:
L = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} p_{ic} log(y_{ic})
n is the number of samples, M is the number of categories, y is the network prediction value, and p is the sample label.
Because all inputs to the helmet-wearing classification module have been calibrated by the similarity transformation, the data augmentation strategy uses only mirror flipping and color jitter.
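With torchvision, the corresponding augmentation pipeline could look like the following sketch; the exact ColorJitter strengths are assumptions chosen for illustration.

```python
from torchvision import transforms

# Only appearance-level augmentations: the faces are already aligned by the
# similarity transformation, so mirroring plus color jitter is sufficient.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # assumed strengths
    transforms.ToTensor(),
])
```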
The process is further illustrated below with reference to an experimental example:
(1) data set used
Two data sets were used:
the method comprises the following steps that 1, a face detection data set comprises pictures of a real scene collected by an end-side camera, 10000 pictures and annotation information of a face frame and 5 key points;
2, wearing a detection data set on the safety helmet, wherein the data set covers head pictures of people in various scenes, including various scenes such as the safety helmet, a riding helmet, a baseball cap, a warm-keeping cap, sunglasses, a mask, partial face shielding, a front face, a side face and partial loss;
(2) description of the experiments
The experiment uses a pre-trained ResNet18 model whose output has 4 classes: safety helmet, other hard helmet, hat or other head covering, and bare face. Because all inputs have been calibrated by the similarity transformation, the data augmentation strategy uses only mirror flipping and color jitter.
The PyTorch deep learning framework is used, and the model is trained for 60 epochs with an SGD-with-momentum optimization strategy; the momentum parameter is set to β = 0.9, the initial learning rate is 0.001, and the learning rate is dynamically adjusted, decaying to 1/10 of its value every 5 epochs. The training batch size is 16 and the input picture size is 224 × 224. All experiments were performed on a machine with 2 NVIDIA 2080 Ti GPUs.
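A minimal PyTorch sketch of this training configuration is shown below. It follows the stated hyperparameters (pre-trained ResNet18 with a 4-class head, SGD with momentum 0.9, initial learning rate 0.001, batch size 16, learning rate divided by 10 every 5 epochs, 60 epochs); the dataset path and folder layout are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: train/<class_name>/*.jpg with the four categories described above.
train_dataset = datasets.ImageFolder("train", transform=transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
]))
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)

model = models.resnet18(pretrained=True)            # or weights="IMAGENET1K_V1" on newer torchvision
model.fc = nn.Linear(model.fc.in_features, 4)       # 4-way helmet-wearing head

optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)  # lr /= 10 every 5 epochs
criterion = nn.CrossEntropyLoss()

for epoch in range(60):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```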
The class sizes in the self-built data set differ greatly: the training set contains 7597 samples of category 0, 1740 of category 1, 7498 of category 2 and 5355 of category 3. To balance the classes, a WeightedRandomSampler in PyTorch is used with each sample weighted by the reciprocal of its class size, and sampling is done with replacement so that every class is drawn with equal probability.
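Continuing the sketch above, the class balancing could be set up as follows with the stated class counts; the variable names are illustrative, and train_dataset is the ImageFolder-style dataset from the previous sketch.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

class_counts = torch.tensor([7597.0, 1740.0, 7498.0, 5355.0])   # training samples per category
class_weights = 1.0 / class_counts                               # reciprocal of each class size

# One weight per training sample, looked up from its class label
# (ImageFolder exposes the labels as train_dataset.targets).
sample_weights = class_weights[torch.tensor(train_dataset.targets)]

sampler = WeightedRandomSampler(weights=sample_weights,
                                num_samples=len(sample_weights),
                                replacement=True)                # sampling with replacement
balanced_loader = DataLoader(train_dataset, batch_size=16, sampler=sampler)
```

In a balanced run, balanced_loader simply replaces the shuffled train_loader in the training loop above.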
(3) Results of the experiment
To evaluate the effectiveness of the present invention, tests were performed on the validation set of data sets mentioned above.
For the helmet-wearing detection classification, the test accuracy on the validation set is as follows:
Training for 60 epochs on the original data set took 39 minutes 59 seconds and reached an accuracy of 0.971683.
Training for 60 epochs with sample balancing took 39 minutes 50 seconds and reached an accuracy of 0.973301.
In conclusion, the method allows each module to be trained and tested separately, reducing training complexity and running time; the models are loosely coupled and can be freely replaced as required. Its effectiveness and high accuracy have been verified on the self-built data set and in practical application. Compared with the prior art, the method has the following technical advantages:
(1) The invention adopts a combined detection network, so detection and classification need not be carried out as separate stand-alone steps, which facilitates combining functions on the end side.
(2) The face region picture is calibrated and cropped by the similarity transformation to obtain uniform face region information, which improves the accuracy of the classification network.
(3) Helmet wearing is treated as a multi-class problem, which greatly reduces the probability of false recognition.
(4) The model's hardware requirements are low; anyone can train it with a single 1080 Ti or 2080 Ti GPU, and in particular the classification network is a very small ResNet18, which facilitates end-side inference.
The above embodiments are only for illustrating the technical concept and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the contents of the present invention and implement the present invention accordingly, and not to limit the protection scope of the present invention accordingly. All equivalent changes or modifications made in accordance with the spirit of the present disclosure are intended to be covered by the scope of the present disclosure.

Claims (10)

1. An end-side-based combined real-time detection method for safety helmet wearing, characterized in that a combined convolutional neural network structure achieves real-time detection and identification of helmet wearing through the combination of multiple modules;
the combined convolutional neural network structure chains a plurality of network modules in sequence, and comprises a face detection module, an image similarity transformation module and a safety helmet detection module;
the face detection module is used for detecting a face in the input image and outputting face key point information;
the image similarity transformation module takes the face key point information output by the face detection module as input, and obtains a face region picture through similarity transformation as output;
the safety helmet detection module takes the human face region picture output by the image similarity transformation module as input, classifies the human face region picture and outputs a safety helmet wearing detection result.
2. The end-side based combined real-time headgear wearing detection method of claim 1, wherein the training data sets, training and indicators of the face detection module, image similarity transformation module and headgear detection module are independent of each other.
3. The end-side based combined real-time headgear wear detection method of claim 1, wherein the headgear detection module employs a ResNet18 model.
4. The end-side based combined real-time detection method for wearing safety helmets according to claim 3, wherein the training sample picture classification of the ResNet18 model comprises four categories: 1. safety helmet; 2. other hard helmet; 3. ordinary hat or other partial covering; 4. bare head.
5. The end-side based combined real-time detection method for headgear wear of claim 1, wherein the face detection module employs the RetinaFace algorithm.
6. The end-side-based combined real-time detection method for wearing safety helmets according to claim 5, wherein the face key point information output by the face detection module comprises the face bounding box and the coordinate information of the eyes, the nose and the two mouth corners.
7. The end-side-based combined real-time detection method for wearing safety helmets according to claim 1, wherein the image similarity transformation module performs similarity transformation to obtain 224 x 224 pictures of the face area.
8. The end-side based combined real-time headgear wearing detection method of claim 5, wherein the image similarity transformation module employs a least squares method.
9. The end-side based combined real-time headgear wearing detection method of claim 8, wherein the similarity transformation matrix T of the image similarity transformation module consists of a scale c, a rotation R, and a translation t:
T = [ cR  t ]
    [ 0   1 ]
wherein R, t and c are determined by the following formulas:
R = U S V^T (1)
t = μ_y - c R μ_x (2)
c = (1 / σ_x²) tr(D S) (3)
in these formulas, U and V are the singular vector matrices from the singular value decomposition of the covariance matrix between the point sets before and after calibration, D is the diagonal matrix of sorted singular values, S is an identity-like diagonal matrix (with the last diagonal element set to -1 when a reflection must be corrected), μ_x and μ_y are the coordinate means before and after calibration, σ_x² is the variance of the points before calibration, and tr(·) denotes the matrix trace.
10. The end-side based combined real-time headgear wearing detection method of claim 4, wherein the classification loss of the headgear detection module is the multi-class cross-entropy loss:
L = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} p_{ic} log(y_{ic})
n is the number of samples, M is the number of categories, y is the network predicted value, and p is the sample label.
CN202111446012.3A 2021-11-30 2021-11-30 Combined safety helmet wearing real-time detection method based on end side Pending CN114495191A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111446012.3A CN114495191A (en) 2021-11-30 2021-11-30 Combined safety helmet wearing real-time detection method based on end side


Publications (1)

Publication Number Publication Date
CN114495191A true CN114495191A (en) 2022-05-13

Family

ID=81492977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111446012.3A Pending CN114495191A (en) 2021-11-30 2021-11-30 Combined safety helmet wearing real-time detection method based on end side

Country Status (1)

Country Link
CN (1) CN114495191A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017141223A1 (en) * 2016-02-20 2017-08-24 Vats Nitin Generating a video using a video and user image or video
CN107563281A (en) * 2017-07-24 2018-01-09 南京邮电大学 A kind of construction site personal security hidden danger monitoring method based on deep learning
CN110781833A (en) * 2019-10-28 2020-02-11 杭州宇泛智能科技有限公司 Authentication method and device and electronic equipment
CN110991315A (en) * 2019-11-28 2020-04-10 江苏电力信息技术有限公司 Method for detecting wearing state of safety helmet in real time based on deep learning
CN113516082A (en) * 2021-07-19 2021-10-19 曙光信息产业(北京)有限公司 Detection method and device of safety helmet, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHINJI UMEYAMA: "Least-squares estimation of transformation parameters between two point patterns", PAMI, vol. 13, 30 April 1991 (1991-04-30), pages 376-380, XP002317333, DOI: 10.1109/34.88573 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393905A (en) * 2022-11-01 2022-11-25 合肥中科类脑智能技术有限公司 Helmet wearing detection method based on attitude correction

Similar Documents

Publication Publication Date Title
CN110119686B (en) Safety helmet real-time detection method based on convolutional neural network
CN105868689B (en) A kind of face occlusion detection method based on concatenated convolutional neural network
Chaudhari et al. Face detection using viola jones algorithm and neural networks
CN111539276B (en) Method for detecting safety helmet in real time in power scene
CN113516076A (en) Improved lightweight YOLO v4 safety protection detection method based on attention mechanism
CN104951773A (en) Real-time face recognizing and monitoring system
CN105046219A (en) Face identification system
CN111062303A (en) Image processing method, system and computer storage medium
CN114419659A (en) Method for detecting wearing of safety helmet in complex scene
CN116092115A (en) Real-time lightweight construction personnel safety dressing detection method
CN110210382A (en) A kind of face method for detecting fatigue driving and device based on space-time characteristic identification
CN114495191A (en) Combined safety helmet wearing real-time detection method based on end side
Pramita et al. Mask wearing classification using CNN
Yi et al. Research on Helmet wearing detection in multiple scenarios based on YOLOv5
CN112183532A (en) Safety helmet identification method based on weak supervision collaborative learning algorithm and storage medium
CN111694980A (en) Robust family child learning state visual supervision method and device
CN115829324A (en) Personnel safety risk silent monitoring method
CN116385962A (en) Personnel monitoring system in corridor based on machine vision and method thereof
CN104318267A (en) System for automatically recognizing purity of Tibetan mastiff puppy
CN114997279A (en) Construction worker dangerous area intrusion detection method based on improved Yolov5 model
Deng et al. Multi-view face detection based on adaboost and skin color
CN112651371A (en) Dressing security detection method and device, storage medium and computer equipment
Yang et al. Mask wearing specification detection based on cascaded convolutional neural network
Yang et al. Research on application of object detection based on yolov5 in construction site
Yao et al. Behavior recognition of substation maintenance personnel based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination