CN113139426A - Detection method and device for wearing safety helmet, storage medium and terminal - Google Patents


Info

Publication number
CN113139426A
CN113139426A
Authority
CN
China
Prior art keywords
safety helmet
head
wearing
detection model
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110267693.0A
Other languages
Chinese (zh)
Inventor
张宇廷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Smart Video Security Innovation Center Co Ltd
Original Assignee
Zhejiang Smart Video Security Innovation Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Smart Video Security Innovation Center Co Ltd filed Critical Zhejiang Smart Video Security Innovation Center Co Ltd
Priority to CN202110267693.0A priority Critical patent/CN113139426A/en
Publication of CN113139426A publication Critical patent/CN113139426A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Helmets And Other Head Coverings (AREA)

Abstract

The invention discloses a detection method and device for wearing a safety helmet, a storage medium and a terminal. The method comprises: receiving, in real time, a target image sent by an image acquisition device; inputting the target image into a program running on the current chip, the program comprising a wk file generated from a pre-trained safety helmet detection model; the detection model is trained on a training data set containing three classes of detection targets: the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian; the model is a YOLOv4 model generated by modifying the activation function and pruning channels; outputting the class of each object detected in the target image together with each object's position and size; and determining, based on the class, position and size of each object, whether the pedestrian is wearing a safety helmet. By adopting the embodiments of the present application, the accuracy of identifying whether a worker is wearing a safety helmet can be improved.

Description

Detection method and device for wearing safety helmet, storage medium and terminal
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a detection method and device for wearing a safety helmet, a storage medium and a terminal.
Background
Wearing a safety helmet is critically important for workers on construction sites and in factories. If a worker who is not wearing a helmet is struck during work, a strong impact is applied directly to the head, which can cause a serious accident. At present, awareness of helmet wearing among workers in factories and workshops is low, so workers are often found in actual operation with their helmets removed or never put on, forcing enterprises to assign supervisors to monitor compliance; such manual supervision is inefficient. With the rapid development of artificial intelligence, more and more deep learning algorithms are being applied to fields such as intelligent security and supervision, so safe production can be guaranteed by using deep learning technology to detect safety helmets.
Existing helmet detection algorithms can be broadly divided into single-stage and two-stage algorithms. A single-stage algorithm directly detects heads wearing helmets and heads not wearing helmets in the picture. Because existing helmet data are scarce, the head generally occupies a small proportion of the picture, and head texture and structural information are not distinctive enough, false detections occur easily: the algorithm tends to detect background regions that contain no person as targets. A two-stage algorithm alleviates the false detection problem by first detecting people in the picture and then detecting, within each detected person region, whether a helmet is worn. However, because a two-stage algorithm must load a pedestrian detection model and a helmet detection model separately, it occupies more storage space, and each image must be judged by two algorithms, which hurts the timeliness of the method.
Disclosure of Invention
The embodiments of the present application provide a detection method and device for wearing a safety helmet, a storage medium and a terminal. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended neither to identify key or critical elements nor to delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present application provides a detection method for wearing a safety helmet, including:
receiving a target image sent by image acquisition equipment in real time;
inputting the target image into a program running on the current chip, wherein the program comprises a wk file generated from a pre-trained safety helmet detection model;
wherein the safety helmet detection model is trained on a training data set, and the training data set comprises three classes of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian, respectively; the safety helmet detection model is a YOLOv4 model generated by modifying the activation function and pruning channels;
outputting the class of each object detected in the target image, together with the position and size of each object; wherein the objects comprise the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian;
determining whether the pedestrian is wearing a safety helmet based on the class, position and size of each object.
Optionally, determining whether the pedestrian is wearing a safety helmet based on the class, position and size of each object includes:
judging whether the head of a person wearing a safety helmet, or the head of a person not wearing one, lies within a pedestrian region, and generating a judgment result;
when the judgment result is marked true, determining that the pedestrian is wearing a safety helmet;
or
when the judgment result is marked false, determining that the pedestrian is not wearing a safety helmet.
Optionally, judging whether the head of a person wearing a safety helmet, or the head of a person not wearing one, lies within the pedestrian region, and generating a judgment result, includes:
calculating the area of the helmet-wearing or non-helmet-wearing head region based on its size;
calculating the intersection area between that head region and the pedestrian region;
determining the ratio of the intersection area to the area of the helmet-wearing head region as a first calculation result;
when the first calculation result is greater than a preset threshold, generating a judgment result marked true; otherwise, making no judgment;
alternatively,
determining the ratio of the intersection area to the area of the non-helmet-wearing head region as a second calculation result;
and when the second calculation result is greater than the preset threshold, generating a judgment result marked false; otherwise, making no judgment.
Optionally, the pre-trained safety helmet detection model is generated according to the following steps:
collecting a training data set; wherein the training data set comprises three classes of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian, respectively;
creating a safety helmet detection model from the YOLOv4 model;
searching for the Mish activation function in the safety helmet detection model;
replacing the Mish activation function with a Leaky ReLU activation function to generate the function-replaced safety helmet detection model;
inputting the training data set into the function-replaced safety helmet detection model for training, and counting the number of iterations;
when the number of iterations reaches a preset number, generating the trained safety helmet detection model;
and determining the trained safety helmet detection model as the pre-trained safety helmet detection model.
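The activation replacement in the steps above trades a little accuracy for inference speed: Mish requires an exponential and a tanh per element, while Leaky ReLU is a single comparison, which embedded chips execute far faster. A minimal sketch of the two activations (the 0.1 slope is the value commonly used in Darknet-family YOLO configurations, not a figure taken from this patent):

```python
import math

def mish(x: float) -> float:
    # Mish(x) = x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x):
    # smooth, but needs exp/log/tanh for every element.
    return x * math.tanh(math.log1p(math.exp(x)))

def leaky_relu(x: float, slope: float = 0.1) -> float:
    # One comparison and at most one multiply per element: cheap on embedded chips.
    return x if x >= 0 else slope * x

# Both behave near-linearly for positive inputs; they differ mainly below zero.
print(leaky_relu(-2.0))   # -0.2
print(mish(-2.0))         # a small negative value (Mish is bounded below)
```

Because the two functions agree closely for positive activations, swapping them after-the-fact degrades accuracy only slightly, while removing all transcendental operations from the network's forward pass.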
Optionally, the method further comprises:
performing channel pruning on the pre-trained safety helmet detection model with a compression algorithm to generate a compressed safety helmet detection model;
generating a wk file based on the compressed safety helmet detection model;
and deploying the program containing the wk file onto a chip; wherein the chip is a HiSilicon chip.
Optionally, generating the wk file based on the compressed safety helmet detection model includes:
converting the compressed safety helmet detection model into a file in caffemodel format;
converting the caffemodel file into a wk file;
and quantizing it to generate the final wk file.
Optionally, generating the trained safety helmet detection model when the number of iterations reaches the preset number includes:
when the number of iterations has not reached the preset number, continuing to perform the step of inputting the training data set into the function-replaced safety helmet detection model for training, until the number of iterations reaches the preset number, and then generating the trained safety helmet detection model.
In a second aspect, an embodiment of the present application provides a detection apparatus for wearing a safety helmet, the apparatus including:
an image receiving module, used for receiving, in real time, a target image sent by the image acquisition device;
an image input module, used for inputting the target image into a program running on the current chip, wherein the program comprises a wk file generated from a pre-trained safety helmet detection model;
wherein the safety helmet detection model is trained on a training data set, and the training data set comprises three classes of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian, respectively; the safety helmet detection model is a YOLOv4 model generated by modifying the activation function and pruning channels;
a parameter output module, used for outputting the class of each object detected in the target image and the position and size of each object; wherein the objects comprise the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian;
and a judging module, used for judging whether the pedestrian is wearing a safety helmet based on the class, position and size of each object.
In a third aspect, embodiments of the present application provide a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
In the embodiment of the application, a detection device for wearing a safety helmet first receives, in real time, a target image sent by an image acquisition device, and then inputs the target image into a program running on the current chip, where the program comprises a wk file generated from a pre-trained safety helmet detection model; the model is trained on a training data set containing three classes of detection targets: the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian; the model is a YOLOv4 model generated by modifying the activation function and pruning channels. The device then outputs the class, position and size of each object detected in the target image, where the objects comprise the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian, and finally judges, based on the class, position and size of each object, whether each pedestrian is wearing a safety helmet. Because a pedestrian class is added to the training data set, false detections of helmets not worn by a person are eliminated, and because the activation function is modified and the channels are pruned, the model can detect in real time whether workers are wearing safety helmets while maintaining high recognition accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic flow chart of a detection method for wearing a safety helmet according to an embodiment of the present disclosure;
FIG. 2 is an annotated image displayed by a terminal after detection of wearing a safety helmet according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of training a safety helmet detection model in a detection method for wearing a safety helmet according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a detection device for wearing a safety helmet according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In the description of the present invention, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified. "And/or" describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, A and B together, or B alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
In the technical scheme provided by the application, a pedestrian class is added to the training data set, which eliminates false detections of helmets not worn by a person, and the model is modified by replacing its activation function and pruning its channels, so that whether a worker is wearing a safety helmet can be detected in real time while high-accuracy recognition is guaranteed. This is described in detail below using exemplary embodiments.
The detection method for wearing a safety helmet provided by the embodiments of the present application will be described in detail below with reference to FIGS. 1 to 3. The method may be implemented by a computer program running on a von Neumann architecture detection device for wearing a safety helmet. The computer program may be integrated into an application or may run as a separate tool. The detection device in the embodiments of the present application may be a user terminal, including but not limited to: personal computers, tablet computers, handheld devices, in-vehicle devices, wearable devices, computing devices, other processing devices connected to a wireless modem, and the like. User terminals may be called different names in different networks, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, wireless communication device, user agent, cellular telephone, cordless telephone, Personal Digital Assistant (PDA), or terminal equipment in a 5G network or a future evolved network.
Referring to fig. 1, a schematic flow chart of a detection method for wearing a safety helmet is provided in an embodiment of the present application. As shown in fig. 1, the method of the embodiment of the present application may include the following steps:
S101, receiving a target image sent by an image acquisition device in real time;
the image capturing device is a device which is deployed in a factory or a production workshop and captures an image, such as a camera, an electronic device with a monitoring function, and the like. The target image is an image frame of the current moment acquired by the image acquisition equipment in real time.
Generally, current headgear detection algorithms detect the head of a person wearing a headgear and the head of a person not wearing a headgear in a frame. However, this method has more false detections in complex scenes. According to statistics, a large number of safety helmet wearing heads and safety helmet non-wearing heads which do not intersect with human body regions exist in the false detection pictures.
In order to solve the problem, the application proposes to add the categories of pedestrians, namely three categories of 'head with safety helmet', 'head without safety helmet' and pedestrians, in the data set of model training. By judging whether the head of a person is in the pedestrian detection area, the problem that the safety helmet is worn or not worn by a non-living body can be obviously solved, meanwhile, the occupied computing resources are few, the effect of real-time monitoring can be achieved, and the falling of the safety helmet detection algorithm in an actual scene is facilitated.
In a possible implementation mode, a camera for image acquisition is installed at the entrance of a building site or a workshop, wherein the camera for image acquisition is in communication connection with a background user terminal, after a detection device wearing a safety helmet is started, the camera acquires image frames in real time to generate an image sequence, the camera continuously transmits the image sequence to the user terminal, the user terminal receives the image sequence sent by the camera in real time, a processing chip is installed in the user terminal, and a file corresponding to a trained model is deployed in the chip.
S102, inputting the target image into a program running on the current chip, wherein the program comprises a wk file generated from a pre-trained safety helmet detection model;
the safety helmet detection model is generated based on training of a training data set, and the training data set comprises three types of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet and a pedestrian respectively; the safety helmet detection model is a YOLOv4 model, and the YOLOv4 model is generated by modifying an activation function and cutting a channel;
Generally, when training the model, a training data set is first collected; the training data set comprises three classes of detection targets: the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian. A safety helmet detection model is created from the YOLOv4 model; the Mish activation functions in the model are located and replaced with Leaky ReLU activation functions to generate the function-replaced model; the training data set is input into the function-replaced model for training while the iterations are counted; when the number of iterations reaches the preset number, the trained model is generated; and finally the trained model is determined to be the pre-trained safety helmet detection model.
Further, the method comprises further processing of the pre-trained safety helmet detection model: first, channel pruning is applied to the pre-trained model with a compression algorithm to generate a compressed model; then a wk file is generated from the compressed model and associated with the relevant program; finally, the program containing the wk file is deployed onto a chip, wherein the chip is a HiSilicon chip.
It should be noted that pruning the safety helmet detection model optimizes the whole model: unnecessary convolution layers or channels are deleted, so the whole network can run inference faster and meet the real-time requirement of the algorithm. The final helmet algorithm is then ported to run on a HiSilicon chip; specifically, the pruned network model is converted into a caffemodel file, and the helmet caffemodel file is converted into a quantized wk file that can run on the HiSilicon chip.
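The conversion chain above ends with a quantization pass performed by HiSilicon's toolchain when it produces the wk file; that scheme is internal to the toolchain, but the underlying idea is the standard affine mapping of float weights onto 8-bit integers. A generic illustration of that idea (not the wk format or HiSilicon's actual algorithm):

```python
def quantize_uint8(weights):
    """Affine-quantize a list of float weights to uint8: w ~= scale * (q - zero_point)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0   # guard against a constant tensor (hi == lo)
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized representation."""
    return [scale * (qi - zero_point) for qi in q]

q, s, z = quantize_uint8([-1.0, 0.0, 1.0])
print(q, s, z)                 # three uint8 codes plus one scale and zero point
print(dequantize(q, s, z))     # close to the original weights, within one step
```

Storing 8-bit integers plus a single scale and zero point per tensor cuts weight storage by roughly 4x versus float32 and lets the chip run integer arithmetic, at the cost of a bounded rounding error per weight.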
Further, when the number of iterations has not reached the preset number, the step of inputting the training data set into the function-replaced safety helmet detection model for training continues to be performed, until the number of iterations reaches the preset number, at which point the trained safety helmet detection model is generated.
Specifically, when compressing the YOLOv4 model in the helmet detection model, channel pruning is used: some channels of the feature maps are deleted outright. Cutting redundant channels slims the network structure without affecting its integrity, i.e. the performance of the algorithm is preserved. Channel selection is done through LASSO regression, namely adding an L1-norm term to the loss function to constrain the weights; under optimization of this objective, the L1 norm drives most weight values to 0, making the channel weights sparse, and the sparse channels can then be cut off. Experiments show that without pruning, the inference time of the YOLOv4 model on the HiSilicon chip is 95 ms, versus 32 ms after pruning; the overall inference time is reduced to roughly one third, meeting the real-time requirement.
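The LASSO-style selection described above ultimately amounts to ranking channels by how much L1 weight mass survives the sparsity penalty and deleting the near-zero ones. A simplified sketch of that selection step, treating a convolution layer as one flattened weight list per output channel (the keep ratio is illustrative; the patent does not state one):

```python
def prune_channels(channel_weights, keep_ratio=0.7):
    """Keep the channels with the largest L1 norms; return their indices, sorted.

    channel_weights: list of per-channel weight lists, e.g. one flattened
    kernel per output channel of a convolution layer.
    """
    l1 = [sum(abs(w) for w in ch) for ch in channel_weights]
    n_keep = max(1, round(keep_ratio * len(channel_weights)))
    ranked = sorted(range(len(l1)), key=lambda i: l1[i], reverse=True)
    return sorted(ranked[:n_keep])

# After L1-regularized training, many channels have near-zero weights:
chs = [[0.9, -1.1], [1e-4, -2e-4], [0.5, 0.4], [1e-5, 3e-5]]
print(prune_channels(chs, keep_ratio=0.5))  # → [0, 2]: the high-norm channels survive
```

In a real network the surviving indices are then used to slice both this layer's output channels and the next layer's input channels, so the pruned model stays structurally valid and needs no sparse-tensor support at inference time.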
S103, outputting the class of each object detected in the target image and the position and size of each object;
The objects comprise the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian. The position of each object is the coordinate of its center point, and the size is its width and height.
Generally, the YOLOv4 model is used herein for helmet detection. The YOLOv4 target detection algorithm can detect safety helmets with high accuracy while meeting real-time detection requirements, satisfying the need for practical deployment. The method targets the false detections of existing helmet detection algorithms, in particular the case where a helmet is detected with no person present.
In one possible implementation, after model processing, the pedestrians, helmeted heads and bare heads detected in the image are output, each carrying the coordinates of its center position and its width and height as size parameters.
S104, judging whether the pedestrian is wearing a safety helmet based on the class, position and size of each object.
In a possible implementation, it is first judged whether the head of a person wearing a safety helmet, or the head of a person not wearing one, lies within a pedestrian region, and a judgment result is generated; when the judgment result is marked true, the pedestrian is determined to be wearing a safety helmet; or, when the judgment result is marked false, the pedestrian is determined not to be wearing one.
Further, when judging whether a head lies within the pedestrian region: first the area of the helmet-wearing or non-helmet-wearing head region is calculated from its size; then the intersection area between that head region and the pedestrian region is calculated; the ratio of the intersection area to the area of the helmet-wearing head region is determined as the first calculation result, and when the first calculation result is greater than a preset threshold, a judgment result marked true is generated, otherwise no judgment is made; alternatively, the ratio of the intersection area to the area of the non-helmet-wearing head region is determined as the second calculation result, and when the second calculation result is greater than the preset threshold, a judgment result marked false is generated, otherwise no judgment is made.
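The judgment above is, in effect, "what fraction of the head box overlaps the pedestrian box". A sketch of that geometry, taking boxes in the (center x, center y, width, height) form the model outputs in step S103 (the 0.5 threshold is illustrative; the patent only specifies "a preset threshold"):

```python
def to_corners(box):
    """(cx, cy, w, h) -> (x1, y1, x2, y2)."""
    cx, cy, w, h = box
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def head_in_pedestrian(head_box, ped_box, threshold=0.5):
    """True iff the head/pedestrian intersection area exceeds `threshold` of the head area."""
    hx1, hy1, hx2, hy2 = to_corners(head_box)
    px1, py1, px2, py2 = to_corners(ped_box)
    iw = max(0.0, min(hx2, px2) - max(hx1, px1))   # intersection width
    ih = max(0.0, min(hy2, py2) - max(hy1, py1))   # intersection height
    head_area = (hx2 - hx1) * (hy2 - hy1)
    return head_area > 0 and (iw * ih) / head_area > threshold

# A head box sitting at the top of a pedestrian box:
ped = (50, 100, 40, 160)    # pedestrian region
head = (50, 30, 20, 20)     # helmeted head, fully inside the top of the region
print(head_in_pedestrian(head, ped))  # True -> this pedestrian wears a helmet
```

Normalizing by the head area rather than by the union (plain IoU) is what makes the rule robust here: a head is tiny relative to a full pedestrian box, so IoU would be small even for a head completely inside the person.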
For example, as shown in FIG. 2, FIG. 2 is the annotated image displayed by the terminal after detection as provided by the present application; the helmet detection results include helmet_on and helmet_off (helmet_on indicates that a helmet is worn; helmet_off indicates that no helmet is worn).
In the embodiment of the application, a detection device for wearing a safety helmet first receives, in real time, a target image sent by an image acquisition device, and then inputs the target image into a program running on the current chip, where the program comprises a wk file generated from a pre-trained safety helmet detection model; the model is trained on a training data set containing three classes of detection targets: the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian; the model is a YOLOv4 model generated by modifying the activation function and pruning channels. The device then outputs the class, position and size of each object detected in the target image, where the objects comprise the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet, and a pedestrian, and finally judges, based on the class, position and size of each object, whether each pedestrian is wearing a safety helmet. Because a pedestrian class is added to the training data set, false detections of helmets not worn by a person are eliminated, and because the activation function is modified and the channels are pruned, the model can detect in real time whether workers are wearing safety helmets while maintaining high recognition accuracy.
Referring to fig. 3, a schematic flow chart of a training method for the pre-trained safety helmet detection model is provided for the embodiment of the present application. As shown in fig. 3, the method of the embodiment of the present application may include the following steps:
s201, collecting a training data set; wherein the training data set comprises three types of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet and a pedestrian respectively;
s202, creating a safety helmet detection model by adopting a YOLOv4 model;
s203, searching a Mish activation function in a pre-trained safety helmet detection model;
s204, replacing the Mish activation function with a Leaky ReLU activation function, and generating a safety helmet detection model after replacing the function;
s205, inputting the training data set into the safety helmet detection model after replacing the function for training, and counting the iteration times;
s206, when the iteration times reach the preset iteration times, generating a trained safety helmet detection model;
s207, determining the trained safety helmet detection model as a pre-trained safety helmet detection model;
s208, channel cutting is carried out on the safety helmet detection model trained in advance by adopting a compression algorithm, and a compressed safety helmet detection model is generated;
s209, generating a wk file based on the compressed safety helmet detection model;
s210, deploying a program containing the wk file to a chip; wherein the chip is a HiSilicon chip.
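Steps s203 to s205 swap YOLOv4's default Mish activation for Leaky ReLU, a cheaper piecewise-linear function that is easier to run on embedded inference chips. A minimal numeric sketch of the two activations follows; the 0.01 negative slope is an assumed common default, as the application does not specify it:

```python
import math

def mish(x):
    """Mish: x * tanh(softplus(x)) -- YOLOv4's default activation."""
    return x * math.tanh(math.log1p(math.exp(x)))

def leaky_relu(x, negative_slope=0.01):
    """Leaky ReLU -- the piecewise-linear replacement used in steps s204/s205.
    The negative slope value is an assumption for illustration."""
    return x if x > 0 else negative_slope * x
```

For positive inputs the two functions nearly coincide, while Leaky ReLU avoids the tanh and softplus evaluations, which is the motivation for the replacement before on-chip deployment.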
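Step s208's channel clipping can be illustrated with a minimal sketch. The application names neither the compression algorithm nor the ranking criterion; L1-norm ranking of convolution channels, a common pruning heuristic, is assumed here purely for illustration:

```python
def prune_channels(channel_weights, keep_ratio=0.5):
    """channel_weights: one list of filter weights per channel.
    Rank channels by the L1 norm of their weights and keep the top
    keep_ratio fraction; return the sorted indices of kept channels."""
    norms = [sum(abs(w) for w in ch) for ch in channel_weights]
    keep = max(1, int(len(channel_weights) * keep_ratio))
    ranked = sorted(range(len(channel_weights)),
                    key=lambda i: norms[i], reverse=True)
    return sorted(ranked[:keep])
```

Channels with small weight norms contribute little to the output, so dropping them shrinks the model (and the resulting wk file) with limited accuracy loss, which is what allows real-time inference on the target chip.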
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. For details which are not disclosed in the embodiments of the apparatus of the present invention, reference is made to the embodiments of the method of the present invention.
Referring to fig. 4, a schematic structural diagram of a detection device for wearing a safety helmet according to an exemplary embodiment of the present invention is shown. The detection device for the wearing of the safety helmet can be realized by software, hardware or a combination of the two to form all or part of the terminal. The device 1 comprises an image receiving module 10, an image input module 20, a parameter output module 30 and a judging module 40.
The image receiving module 10 is used for receiving a target image sent by the image acquisition equipment in real time;
the image input module 20 is used for inputting a target image into a program running on a current chip, wherein the program comprises a wk file generated based on a pre-trained helmet detection model;
the safety helmet detection model is generated based on training of a training data set, and the training data set comprises three types of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet and a pedestrian respectively; the safety helmet detection model is a YOLOv4 model, and the YOLOv4 model is generated by modifying an activation function and cutting a channel;
a parameter output module 30 for outputting the type of each object detected in the target image, and the position and size of each object; wherein, each object comprises a head of a person wearing the safety helmet, a head of a person not wearing the safety helmet and a pedestrian;
and the determination module 40 is used for determining whether the pedestrian wears the safety helmet or not based on the type of each object and the position and the size of each object.
It should be noted that, when the detection device for wearing a safety helmet provided in the above embodiment executes the detection method for wearing a safety helmet, the above division of functional modules is merely an example; in practical applications, the above functions may be distributed to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the detection device for wearing a safety helmet and the detection method embodiments provided above belong to the same concept; the detailed implementation process is shown in the method embodiments and is not described here again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The present invention also provides a computer readable medium having stored thereon program instructions which, when executed by a processor, implement the detection method for wearing a safety helmet provided by the various method embodiments described above.
The present invention also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the detection method for wearing a safety helmet of the various method embodiments described above.
Please refer to fig. 5, which provides a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 5, terminal 1000 can include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
Wherein a communication bus 1002 is used to enable connective communication between these components.
The user interface 1003 may include a display screen (Display) and a camera (Camera); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. The processor 1001 connects various components throughout the terminal 1000 using various interfaces and lines, and performs the various functions of the terminal 1000 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1005 and by invoking data stored in the memory 1005. Alternatively, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed by the display screen; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 1001, but may instead be implemented by a separate chip.
The memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the stored data area may store the data referred to in the above method embodiments. The memory 1005 may optionally be at least one storage device located remotely from the processor 1001. As shown in fig. 5, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a detection application for wearing a safety helmet.
In the terminal 1000 shown in fig. 5, the user interface 1003 is mainly used to provide an input interface for the user and acquire data input by the user; and the processor 1001 may be used to invoke the safety helmet detection application stored in the memory 1005 and specifically perform the following operations:
receiving a target image sent by image acquisition equipment in real time;
inputting a target image into a program which runs on a current chip, wherein the program comprises a wk file generated based on a pre-trained helmet detection model;
the safety helmet detection model is generated based on training of a training data set, and the training data set comprises three types of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet and a pedestrian respectively; the safety helmet detection model is a YOLOv4 model, and the YOLOv4 model is generated by modifying an activation function and cutting a channel;
outputting the type of each detected object in the target image and the position and the size of each object; wherein, each object comprises a head of a person wearing the safety helmet, a head of a person not wearing the safety helmet and a pedestrian;
determining whether the pedestrian wears a safety helmet based on the type, position, and size of each object.
In one embodiment, when determining whether the pedestrian wears a safety helmet based on the type, position, and size of each object, the processor 1001 specifically performs the following operations:
judging whether the head of the person wearing the safety helmet or the head of the person not wearing the safety helmet is in a pedestrian area, and generating a judgment result;
when the judgment result is marked as true, determining that the pedestrian wears the safety helmet;
or
when the judgment result is marked as false, determining that the pedestrian does not wear the safety helmet.
In one embodiment, when judging whether the head of the person wearing the safety helmet or the head of the person not wearing the safety helmet is in the pedestrian area and generating the judgment result, the processor 1001 specifically performs the following operations:
calculating the area of the head region, with or without a safety helmet, based on its size;
calculating the intersection area between that head region and the pedestrian area;
determining the ratio of the intersection area to the area of the head region wearing a safety helmet as a first calculation result;
when the first calculation result is larger than a preset threshold value, generating a judgment result marked as true, and otherwise making no judgment;
alternatively,
determining the ratio of the intersection area to the area of the head region not wearing a safety helmet as a second calculation result; and when the second calculation result is larger than a preset threshold value, generating a judgment result marked as false, and otherwise making no judgment.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing associated hardware; the program for detecting the wearing of a safety helmet may be stored in a computer readable storage medium, and when executed, may include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present application and is not to be construed as limiting the scope of the present application, so that the present application is not limited thereto, and all equivalent variations and modifications can be made to the present application.

Claims (10)

1. A detection method for wearing a safety helmet, the method comprising:
receiving a target image sent by image acquisition equipment in real time;
inputting the target image into a program running on a current chip, wherein the program comprises a wk file generated based on a pre-trained helmet detection model;
the safety helmet detection model is generated based on training of a training data set, and the training data set comprises three types of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet and a pedestrian respectively; the safety helmet detection model is a YOLOv4 model, and the YOLOv4 model is generated by modifying an activation function and performing channel clipping;
outputting the type of each detected object in the target image and the position and the size of each detected object; wherein, the objects comprise the head of a person wearing the safety helmet, the head of a person not wearing the safety helmet and pedestrians;
determining whether the pedestrian wears a safety helmet based on the type, position, and size of each object.
2. The method of claim 1, wherein the determining whether the pedestrian wears a safety helmet based on the type, position, and size of the objects comprises:
judging whether the head of the person wearing the safety helmet or the head of the person not wearing the safety helmet is in the pedestrian area or not, and generating a judgment result;
when the judgment result is marked as true, determining that the pedestrian wears the safety helmet;
or
when the judgment result is marked as false, determining that the pedestrian does not wear the safety helmet.
3. The method of claim 2, wherein the judging whether the head of the person wearing the safety helmet or the head of the person not wearing the safety helmet is within the pedestrian area and generating a judgment result comprises:
calculating the area of the head region, with or without a safety helmet, based on its size;
calculating the intersection area between that head region and the pedestrian area;
determining the ratio of the intersection area to the area of the head region wearing a safety helmet as a first calculation result;
when the first calculation result is larger than a preset threshold value, generating a judgment result marked as true, and otherwise making no judgment;
alternatively,
determining the ratio of the intersection area to the area of the head region not wearing a safety helmet as a second calculation result;
and when the second calculation result is larger than a preset threshold value, generating a judgment result marked as false, and otherwise making no judgment.
4. The method of claim 1, wherein generating the pre-trained safety helmet detection model comprises:
collecting a training data set; wherein the training data set comprises three types of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet and a pedestrian respectively;
creating a safety helmet detection model by adopting a YOLOv4 model;
searching a Mish activation function in the pre-trained safety helmet detection model;
replacing the Mish activation function with a Leaky ReLU activation function to generate a safety helmet detection model after replacing the function;
inputting the training data set into the safety helmet detection model after the replacement function for training, and counting iteration times;
when the iteration times reach preset iteration times, generating a trained safety helmet detection model;
and determining the trained safety helmet detection model as a pre-trained safety helmet detection model.
5. The method of claim 1, further comprising:
channel cutting is carried out on the pre-trained safety helmet detection model by adopting a compression algorithm, and a compressed safety helmet detection model is generated;
generating a wk file based on the compressed safety helmet detection model;
deploying a program containing the wk file into a chip; wherein the chip is a HiSilicon chip.
6. The method of claim 5, wherein generating a wk file based on the compressed safety helmet detection model comprises:
converting the pre-trained safety helmet detection model into a file in caffemodel format;
converting the caffemodel file into a wk file;
and performing quantization processing on the wk file to generate the final wk file.
7. The method of claim 4, wherein generating the trained safety helmet detection model when the number of iterations reaches a preset number of iterations comprises:
and when the iteration times do not reach the preset iteration times, continuing to execute the step of inputting the training data set into the safety helmet detection model after the replacement function for training until the iteration times reach the preset iteration times to generate the trained safety helmet detection model.
8. A detection device for wearing a safety helmet, the device comprising:
the image receiving module is used for receiving a target image sent by the image acquisition equipment in real time;
the image input module is used for inputting the target image into a program running on a current chip, and the program comprises a wk file generated based on a pre-trained safety helmet detection model;
the safety helmet detection model is generated based on training of a training data set, and the training data set comprises three types of detection targets; the three detection targets are the head of a person wearing a safety helmet, the head of a person not wearing a safety helmet and a pedestrian respectively; the safety helmet detection model is a YOLOv4 model, and the YOLOv4 model is generated by modifying an activation function and performing channel clipping;
a parameter output module for outputting the type of each object detected in the target image, and the position and size of each object; wherein, the objects comprise the head of a person wearing the safety helmet, the head of a person not wearing the safety helmet and pedestrians;
and the judging module is used for judging whether the pedestrian wears the safety helmet or not based on the type of each object and the position and the size of each object.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to perform the method steps according to any of claims 1-7.
10. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1-7.
CN202110267693.0A 2021-03-12 2021-03-12 Detection method and device for wearing safety helmet, storage medium and terminal Pending CN113139426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110267693.0A CN113139426A (en) 2021-03-12 2021-03-12 Detection method and device for wearing safety helmet, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110267693.0A CN113139426A (en) 2021-03-12 2021-03-12 Detection method and device for wearing safety helmet, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN113139426A true CN113139426A (en) 2021-07-20

Family

ID=76811049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110267693.0A Pending CN113139426A (en) 2021-03-12 2021-03-12 Detection method and device for wearing safety helmet, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN113139426A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372662A (en) * 2016-08-30 2017-02-01 腾讯科技(深圳)有限公司 Helmet wearing detection method and device, camera, and server
CN108875481A (en) * 2017-08-31 2018-11-23 北京旷视科技有限公司 Method, apparatus, system and storage medium for pedestrian detection
CN110070033A (en) * 2019-04-19 2019-07-30 山东大学 Safety cap wearing state detection method in a kind of power domain dangerous work region
CN110119686A (en) * 2019-04-17 2019-08-13 电子科技大学 A kind of safety cap real-time detection method based on convolutional neural networks
CN110263686A (en) * 2019-06-06 2019-09-20 温州大学 A kind of construction site safety of image cap detection method based on deep learning
CN110889376A (en) * 2019-11-28 2020-03-17 创新奇智(南京)科技有限公司 Safety helmet wearing detection system and method based on deep learning
AU2020100711A4 (en) * 2020-05-05 2020-06-11 Chang, Cheng Mr The retrieval system of wearing safety helmet based on deep learning
AU2020100705A4 (en) * 2020-05-05 2020-06-18 Chang, Jiaying Miss A helmet detection method with lightweight backbone based on yolov3 network
CN111401278A (en) * 2020-03-20 2020-07-10 重庆紫光华山智安科技有限公司 Helmet identification method and device, electronic equipment and storage medium
CN111709489A (en) * 2020-06-24 2020-09-25 广西师范大学 Citrus identification method based on improved YOLOv4
CN111815577A (en) * 2020-06-23 2020-10-23 深圳供电局有限公司 Method, device, equipment and storage medium for processing safety helmet wearing detection model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Yan: "Research on a helmet-wearing detection method based on yolov3", Process Automation Instrumentation (《自动化仪表》) *

Similar Documents

Publication Publication Date Title
CN112216049B (en) Construction warning area monitoring and early warning system and method based on image recognition
CN113255606A (en) Behavior recognition method and device, computer equipment and storage medium
CN109409238B (en) Obstacle detection method and device and terminal equipment
CN107341443B (en) Method for processing video frequency, device and storage medium
CN111914819A (en) Multi-camera fusion crowd density prediction method and device, storage medium and terminal
CN110619314A (en) Safety helmet detection method and device and electronic equipment
CN110659391A (en) Video detection method and device
CN116152863B (en) Personnel information identification method and device, electronic equipment and storage medium
CN111191507A (en) Safety early warning analysis method and system for smart community
CN115168024A (en) Early warning method, device and equipment based on edge calculation
CN115294528A (en) Pedestrian safety monitoring method and device
CN111860187A (en) High-precision worn mask identification method and system
CN114241012B (en) High-altitude parabolic determination method and device
CN113963162A (en) Helmet wearing identification method and device, computer equipment and storage medium
CN113627321A (en) Image identification method and device based on artificial intelligence and computer equipment
CN112528825A (en) Station passenger recruitment service method based on image recognition
CN113139426A (en) Detection method and device for wearing safety helmet, storage medium and terminal
CN112541456A (en) Ultraviolet lamp autonomous control method and device, storage medium and ultraviolet disinfection robot
CN114821486B (en) Personnel identification method in power operation scene
CN115190277B (en) Safety monitoring method, device and equipment for construction area and storage medium
CN113469150B (en) Method and system for identifying risk behaviors
CN115953815A (en) Monitoring method and device for infrastructure site
CN112862073B (en) Compressed data analysis method and device, storage medium and terminal
CN115546722A (en) Smoke and fire detection method and system, computer equipment and storage medium
CN115641607A (en) Method, device, equipment and storage medium for detecting wearing behavior of power construction site operator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210720