CN114882552A - Method for checking wearing state of person mask in operation vehicle based on deep learning - Google Patents

Info

Publication number
CN114882552A
Authority
CN
China
Prior art keywords
mask
data set
wearing
face
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210405121.9A
Other languages
Chinese (zh)
Inventor
吴丁泓
江培舟
李旭芳
蔡伟兵
陈泽斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Gnss Development & Application Co ltd
Original Assignee
Xiamen Gnss Development & Application Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Gnss Development & Application Co ltd filed Critical Xiamen Gnss Development & Application Co ltd
Priority to CN202210405121.9A
Publication of CN114882552A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Using classification, e.g. of video objects
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Using neural networks
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based method, and a corresponding medium, for checking the mask-wearing state of personnel in a commercial vehicle. The method comprises the following steps: acquiring an image to be detected inside the commercial vehicle through a shooting device installed in the vehicle, and inputting the image into a pre-trained face detection model; judging, through the face detection model, whether a face is present in the image, and extracting the corresponding face region picture when one is found; inputting the face region picture into a pre-trained binary classification model for mask-wearing detection, judging through this model whether the face is wearing a mask, and raising an alarm when it is not. The method can effectively improve the accuracy of detecting the mask-wearing state of personnel in commercial vehicles.

Description

Method for checking wearing state of person mask in operation vehicle based on deep learning
Technical Field
The invention relates to the technical field of image recognition, and in particular to a deep-learning-based method for checking the mask-wearing state of personnel in a commercial vehicle, and to a computer-readable storage medium.
Background
In the related art, whether a person is wearing a mask is judged by feeding the image to be detected directly into a single unified recognition model. With this approach, detection accuracy is low and missed detections occur easily when the vehicle is crowded or the lighting is poor.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems described above. Therefore, an object of the present invention is to provide a deep-learning-based method for checking the mask-wearing state of personnel in a commercial vehicle, which can effectively improve the accuracy of detecting that state.
A second object of the invention is to propose a computer-readable storage medium.
In order to achieve the above object, a first embodiment of the present invention provides a deep-learning-based method for checking the mask-wearing state of personnel in a commercial vehicle, comprising the following steps: acquiring an image to be detected inside the commercial vehicle through a shooting device installed in the vehicle, and inputting the image into a pre-trained face detection model; judging, through the face detection model, whether a face is present in the image, and extracting the corresponding face region picture when one is found; inputting the face region picture into a pre-trained binary classification model for mask-wearing detection, judging through this model whether the face is wearing a mask, and raising an alarm when it is not.
According to this method, an image to be detected inside the commercial vehicle is first acquired through a shooting device installed in the vehicle and input into a pre-trained face detection model; the model then judges whether a face is present and, if so, the corresponding face region picture is extracted; finally, the face region picture is input into a pre-trained binary classification model for mask-wearing detection, which judges whether the face is wearing a mask and raises an alarm when it is not. The accuracy of detecting the mask-wearing state of personnel in the commercial vehicle is thereby effectively improved.
In addition, the inspection method for the wearing state of the mask for the person in the commercial vehicle based on the deep learning according to the embodiment of the present invention may further have the following additional technical features:
Optionally, obtaining the face detection data set used to train the face detection model comprises: acquiring a mask-occluded face data set, a public data set, and a commercial vehicle historical image data set; labeling the three data sets; performing data cleaning and data enhancement on the labeled data sets; converting the enhanced data sets into a unified format; and dividing the result into a training data set and a test data set.
Optionally, the unified format is a VOC data set format.
Optionally, the data enhancement comprises randomly perturbing, flipping, cropping, and downsampling the pictures in the mask-occluded face data set, the public data set, and the commercial vehicle historical image data set.
Optionally, the face detection model adopts a PyramidBox network structure.
Optionally, obtaining the mask classification data set used to train the binary classification model for mask-wearing detection comprises: extracting, according to the labeling results, the face region pictures of each image in the mask-occluded face data set, the public data set, and the commercial vehicle historical image data set; labeling the extracted face region pictures to generate a mask-worn data set and a mask-not-worn data set; and performing data enhancement on the data in both sets.
Optionally, before the data enhancement, the method further comprises: filtering the mask-worn data set and the mask-not-worn data set to delete any picture in which the pixel count of a single side is below a pixel-count threshold.
Optionally, before the data enhancement, the method further comprises: balancing the mask-worn data set and the mask-not-worn data set so that the ratio of the number of pictures in the two sets is greater than or equal to 0.4.
Optionally, the binary classification model for mask-wearing detection is a lightweight convolutional neural network model that includes an SE (Squeeze-and-Excitation) module.
In order to achieve the above object, a second aspect of the present invention provides a computer-readable storage medium storing a deep-learning-based program for checking the mask-wearing state of personnel in a commercial vehicle which, when executed by a processor, implements the method described above.
According to the computer-readable storage medium of the embodiment of the invention, storing this program enables the processor, when executing it, to carry out the deep-learning-based method described above, so that the accuracy of detecting the mask-wearing state of personnel in the commercial vehicle can be effectively improved.
Drawings
Fig. 1 is a schematic flow chart of a deep-learning-based method for checking the mask-wearing state of personnel in a commercial vehicle according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In the related art, when checking whether personnel are wearing masks, detection accuracy is low and missed detections occur easily when the vehicle is crowded or the lighting is poor. According to the method of the embodiment of the invention, an image to be detected inside the commercial vehicle is first acquired through a shooting device installed in the vehicle and input into a pre-trained face detection model; the model then judges whether a face is present and, if so, the corresponding face region picture is extracted; finally, the face region picture is input into a pre-trained binary classification model for mask-wearing detection, which judges whether the face is wearing a mask and raises an alarm when it is not. The accuracy of detecting the mask-wearing state of personnel in the commercial vehicle is thereby effectively improved.
In order to better understand the above technical solutions, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Fig. 1 is a schematic flow chart of the deep-learning-based method for checking the mask-wearing state of personnel in a commercial vehicle according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
S101, acquiring an image to be detected inside the commercial vehicle through a shooting device installed in the vehicle, and inputting the image into a pre-trained face detection model.
That is, the image to be detected is acquired by a shooting device installed on the commercial vehicle (such as a bus, long-distance coach, or taxi). The image can be obtained in several ways: for example, by presetting a time interval and capturing images periodically, or by capturing an image upon receiving a door-open signal. The acquired image is then input into the pre-trained face detection model.
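The two acquisition triggers above (periodic capture and capture on a door-open signal) can be sketched as follows. `should_capture` and its default 30-second interval are illustrative assumptions, not values from the patent; the real capture and door-signal APIs of the on-board device would replace the plain arguments used here.

```python
def should_capture(now, last_capture, interval_s=30.0, door_opened=False):
    """Return True when a new in-vehicle image should be grabbed.

    now / last_capture: timestamps in seconds; interval_s: periodic trigger
    spacing (assumed value); door_opened: event-driven trigger flag.
    """
    if door_opened:                               # event-driven trigger
        return True
    return (now - last_capture) >= interval_s     # periodic trigger
```

On each loop iteration the device would call `should_capture` and, when it returns True, grab a frame and pass it to the face detection model.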
S102, judging, through the face detection model, whether a face is present in the image to be detected, and extracting the corresponding face region picture when one is found.
S103, inputting the face region picture into the pre-trained binary classification model for mask-wearing detection, judging through this model whether the face is wearing a mask, and raising an alarm when it is not.
That is, after the image to be detected is acquired, the face detection model first judges whether a face is present; if so, the face region picture is extracted. The binary classification model then judges whether the face in that picture is wearing a mask. This two-stage design effectively improves the accuracy of mask-wearing detection.
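The two-stage pipeline of steps S101 to S103 can be sketched as below. The two model calls are passed in as callables and are stand-ins (assumptions) for the trained PyramidBox detector and the MobileNet+SE classifier described later in the text; the cropping step is a placeholder.

```python
def inspect_frame(image, detect_faces, is_wearing_mask):
    """Return a list of (face_box, alarm) pairs for one in-vehicle frame.

    detect_faces(image) -> list of face bounding boxes      (stage 1, S102)
    is_wearing_mask(face_crop) -> bool                      (stage 2, S103)
    """
    results = []
    for box in detect_faces(image):
        face_crop = ("crop", box)            # placeholder for cropping `image` to `box`
        wearing = is_wearing_mask(face_crop) # binary mask-wearing classification
        results.append((box, not wearing))   # alarm is raised when no mask is worn
    return results
```

Any frame with no detected face simply yields an empty result list, so no classification (and no alarm) occurs, matching the conditional flow of S102.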
In some embodiments, obtaining the face detection data set used to train the face detection model comprises: acquiring a mask-occluded face data set, a public data set, and a commercial vehicle historical image data set; labeling the three data sets; performing data cleaning and data enhancement on the labeled data sets; converting the enhanced data sets into a unified format; and dividing the result into a training data set and a test data set.
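The final preparation step, dividing the unified data into training and test sets, can be sketched as follows. The 0.9 split ratio and fixed seed are assumptions for illustration; the patent does not specify either.

```python
import random

def split_dataset(samples, train_frac=0.9, seed=42):
    """Shuffle the merged samples and split them into (train, test) lists."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # deterministic shuffle for reproducibility
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]
```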
As an example, for the public data set, the data annotations are first checked, and the format is then uniformly converted into XML. For the commercial vehicle historical image data set (for example, historical pictures captured by the vehicle-mounted equipment), the labelImg software is used to annotate the face region contained in each picture; after the labeling is completed, the data set is converted into the VOC data set format.
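A Pascal VOC style annotation, as produced by labelImg, can be written with the standard library as sketched below. The field names follow the VOC convention; the filename and coordinates are illustrative only, and a real converter would also emit fields such as `folder` and `depth`.

```python
import xml.etree.ElementTree as ET

def voc_annotation(filename, width, height, boxes, label="face"):
    """Serialize one image's face boxes into a VOC-style XML string."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for (xmin, ymin, xmax, ymax) in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        bb = ET.SubElement(obj, "bndbox")
        for tag, v in zip(("xmin", "ymin", "xmax", "ymax"),
                          (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(v)
    return ET.tostring(root, encoding="unicode")
```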
In some embodiments, the unified format is a VOC data set format.
In some embodiments, the data enhancement includes random perturbation, flipping, cropping, and downsampling of the pictures in the mask-occluded face data set, the public data set, and the commercial vehicle historical image data set.
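The four listed augmentations can be illustrated on an H x W x C image array as below. A real pipeline would use a library such as OpenCV or a framework's transform API, and would apply each operation randomly and independently; the perturbation range, crop margin, and 2x downsample factor here are assumptions.

```python
import numpy as np

def augment(img, rng):
    """Apply perturbation, flip, crop, and downsample to one image array."""
    out = img.astype(np.float32)
    out = out + rng.uniform(-10, 10, size=out.shape)        # random pixel perturbation
    out = out[:, ::-1, :]                                   # horizontal flip
    h, w = out.shape[:2]
    out = out[h // 8 : h - h // 8, w // 8 : w - w // 8, :]  # central crop
    out = out[::2, ::2, :]                                  # naive 2x downsample
    return np.clip(out, 0, 255)                             # keep valid pixel range
```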
In some embodiments, the face detection model employs a PyramidBox network architecture.
As an example, the face detection model adopts the PyramidBox network structure. Because a context-sensitive prediction module is introduced into the model, contextual information such as shoulders and bodies is considered during detection, so its face detection capability is stronger than that of general-purpose object detection algorithms when faces are masked, occluded, or blurred. The model can therefore effectively improve face detection inside the commercial vehicle. The training data are augmented with random perturbation, flipping, cropping, downsampling, and the like; the pictures are then uniformly scaled to a resolution of 640 x 640 and input into the model for training. The trained model outputs predicted face bounding box information, detecting all face positions in a picture.
In some embodiments, obtaining the mask classification data set used to train the binary classification model for mask-wearing detection comprises: extracting, according to the labeling results, the face region pictures of each image in the mask-occluded face data set, the public data set, and the commercial vehicle historical image data set; labeling the extracted face region pictures to generate a mask-worn data set and a mask-not-worn data set; and performing data enhancement on the data in both sets.
In some embodiments, before the data enhancement, the method further comprises: filtering the mask-worn data set and the mask-not-worn data set to delete any picture in which the pixel count of a single side is below a pixel-count threshold.
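The size filter above amounts to dropping any face crop whose shorter side is below the threshold. The value 32 below is an assumption; the patent only states that a threshold is applied.

```python
def filter_small(crops, min_side=32):
    """crops: list of (width, height) pairs; keep only pictures whose
    smallest side meets the pixel-count threshold."""
    return [(w, h) for (w, h) in crops if min(w, h) >= min_side]
```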
In some embodiments, before the data enhancement, the method further comprises: balancing the mask-worn data set and the mask-not-worn data set so that the ratio of the number of pictures in the two sets is greater than or equal to 0.4.
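One way to enforce the 0.4 ratio is to downsample the larger of the two sets, as sketched below. The 0.4 threshold comes from the text; downsampling (rather than, say, oversampling the smaller set) and the fixed seed are assumptions.

```python
import random

def balance(set_a, set_b, min_ratio=0.4, seed=0):
    """Return (smaller, larger) with len(smaller)/len(larger) >= min_ratio,
    shrinking the larger set by random sampling when needed."""
    small, large = sorted([list(set_a), list(set_b)], key=len)
    if large and len(small) / len(large) < min_ratio:
        rng = random.Random(seed)
        large = rng.sample(large, int(len(small) / min_ratio))
    return small, large
```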
In some embodiments, the binary classification model for mask-wearing detection is a lightweight convolutional neural network model that includes an SE (Squeeze-and-Excitation) module.
As an example, the lightweight convolutional neural network MobileNet is used. To improve recognition, an SE (Squeeze-and-Excitation) module is added to the network, introducing an attention mechanism into MobileNet: for an H x W x C input feature map, global pooling and fully connected layers reduce it to 1 x 1 x C, which is then multiplied with the original feature map to assign a weight to each channel. This attention suppresses low-weight noise and improves recognition in the mask-recognition task. A binary mask-wearing recognition model is trained with this MobileNet+SE network, and the detected face region picture is input into the classification network to obtain the result of whether a mask is worn.
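The SE computation described above (global pooling, two fully connected layers with a reduction ratio, sigmoid channel weights, channel-wise rescaling) can be written in NumPy as a minimal sketch. The weight matrices `w1` and `w2` are illustrative; in the real model they are learned during training, and the block sits inside MobileNet rather than operating on a raw image.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over x: (H, W, C), w1: (C, C//r), w2: (C//r, C)."""
    z = x.mean(axis=(0, 1))                  # squeeze: global average pool -> (C,)
    s = sigmoid(np.maximum(z @ w1, 0) @ w2)  # excitation: FC -> ReLU -> FC -> sigmoid
    return x * s                             # rescale each channel by its weight
```

Because `s` lies in (0, 1) per channel, low-weight (noisy) channels are attenuated while informative channels pass through nearly unchanged, which is the attention effect the text describes.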
In summary, according to the deep-learning-based method of the embodiment of the invention for checking the mask-wearing state of personnel in a commercial vehicle, an image to be detected inside the vehicle is first acquired through a shooting device installed in the vehicle and input into a pre-trained face detection model; the model then judges whether a face is present and, if so, the corresponding face region picture is extracted; finally, the face region picture is input into a pre-trained binary classification model for mask-wearing detection, which judges whether the face is wearing a mask and raises an alarm when it is not. The accuracy of detecting the mask-wearing state of personnel in the commercial vehicle is thereby effectively improved.
In order to achieve the above-described embodiments, an embodiment of the present invention proposes a computer-readable storage medium on which a deep-learning-based program for checking the mask-wearing state of personnel in a commercial vehicle is stored, and which, when executed by a processor, implements the method described above.
According to the computer-readable storage medium of the embodiment of the invention, storing this program enables the processor, when executing it, to carry out the deep-learning-based method described above, so that the accuracy of detecting the mask-wearing state of personnel in the commercial vehicle can be effectively improved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, et cetera does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, the first feature "on" or "under" the second feature may be directly contacting the first and second features or indirectly contacting the first and second features through an intermediate. Also, a first feature "on," "over," and "above" a second feature may be directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature may be directly under or obliquely under the first feature, or may simply mean that the first feature is at a lesser elevation than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above should not be understood to necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A deep-learning-based method for checking the mask-wearing state of persons in a commercial vehicle, characterized by comprising the following steps:
acquiring an image to be detected inside the commercial vehicle through a camera arranged in the commercial vehicle, and inputting the image to be detected into a pre-trained face detection model;
judging, through the face detection model, whether a face is present in the image to be detected, and, if so, extracting a face region picture corresponding to the face;
inputting the face region picture into a pre-trained mask-wearing detection binary classification model, judging through the model whether the face in the face region picture is wearing a mask, and issuing an alarm if it is not.
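The detect-crop-classify-alarm pipeline of claim 1 can be sketched as follows. The detector, classifier, and crop functions are hypothetical stand-ins for the pre-trained models the claim assumes; the toy data at the bottom exists only so the sketch runs end to end.

```python
# Sketch of the inspection pipeline in claim 1. detect_faces, is_wearing_mask,
# and crop are placeholders for the pre-trained face detector, the binary
# mask-wearing classifier, and the face-region extraction step.
from typing import Callable, List


def inspect_frame(image,
                  detect_faces: Callable[[object], List],
                  is_wearing_mask: Callable[[object], bool],
                  crop: Callable[[object, object], object]) -> List[bool]:
    """Return one alarm flag per detected face (True = no mask -> alarm)."""
    alarms = []
    for box in detect_faces(image):                     # face detection model
        face_patch = crop(image, box)                   # face region picture
        alarms.append(not is_wearing_mask(face_patch))  # binary classifier
    return alarms


# Toy stand-ins so the sketch is executable: each "face" is an (id, has_mask)
# pair and a "box" is just its index in the frame.
frame = {"faces": [("A", True), ("B", False)]}
boxes = lambda img: list(range(len(img["faces"])))
masked = lambda patch: patch[1]
crop_fn = lambda img, i: img["faces"][i]

print(inspect_frame(frame, boxes, masked, crop_fn))  # [False, True]
```

Only the second face triggers an alarm, since it is the one without a mask.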
2. The deep-learning-based method for checking the mask-wearing state of persons in a commercial vehicle according to claim 1, wherein acquiring the face detection data set used to train the face detection model comprises:
acquiring a mask-occluded face data set, a public data set, and a commercial vehicle historical image data set; labeling the three data sets; and performing data cleaning and data enhancement on the labeled data sets;
converting the data sets into a unified format after data enhancement, and dividing the unified-format data sets into a training data set and a test data set.
3. The deep-learning-based method for checking the mask-wearing state of persons in a commercial vehicle according to claim 2, wherein the unified format is the VOC data set format.
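The unified format named in claim 3 is the standard Pascal VOC XML annotation layout. A minimal sketch of producing one such annotation is below; the file name, label names, and box coordinates are illustrative assumptions, not values from the patent.

```python
# Build a minimal Pascal VOC annotation (folder/filename/size/object/bndbox
# structure) for a single labeled face box.
import xml.etree.ElementTree as ET


def voc_annotation(filename, width, height, label, box):
    xmin, ymin, xmax, ymax = box
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    for tag, val in (("width", width), ("height", height), ("depth", 3)):
        ET.SubElement(size, tag).text = str(val)
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = label        # e.g. "face" / "face_mask"
    bnd = ET.SubElement(obj, "bndbox")
    for tag, val in (("xmin", xmin), ("ymin", ymin),
                     ("xmax", xmax), ("ymax", ymax)):
        ET.SubElement(bnd, tag).text = str(val)
    return ET.tostring(ann, encoding="unicode")


xml_str = voc_annotation("frame_0001.jpg", 1280, 720, "face_mask",
                         (412, 96, 588, 300))
print(xml_str)
```

Each labeled image gets one such XML file, which standard VOC-format loaders can then consume for training and testing.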
4. The method of claim 1, wherein the data enhancement comprises randomly perturbing, flipping, cropping, and downsampling the pictures in the mask-occluded face data set, the public data set, and the commercial vehicle historical image data set.
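The four augmentations listed in claim 4 can be sketched with plain numpy arrays standing in for pictures; a real pipeline would use an image library, and the noise scale, crop size, and downsampling factor here are illustrative assumptions.

```python
# Minimal sketches of the claim-4 augmentations: random pixel perturbation,
# horizontal flip, random crop, and stride-based downsampling.
import numpy as np

rng = np.random.default_rng(0)


def perturb(img, scale=8):
    noise = rng.integers(-scale, scale + 1, img.shape)
    return np.clip(img.astype(int) + noise, 0, 255).astype(np.uint8)


def flip(img):
    return img[:, ::-1]              # horizontal flip


def random_crop(img, h, w):
    y = rng.integers(0, img.shape[0] - h + 1)
    x = rng.integers(0, img.shape[1] - w + 1)
    return img[y:y + h, x:x + w]


def downsample(img, factor=2):
    return img[::factor, ::factor]   # naive every-nth-pixel downsampling


img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
aug = downsample(random_crop(flip(perturb(img)), 48, 48))
print(aug.shape)  # (24, 24)
```

Chaining the four operations turns one 64x64 picture into a 24x24 augmented variant; applying them with fresh randomness yields many variants per source picture.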
5. The deep-learning-based method for checking the mask-wearing state of persons in a commercial vehicle according to claim 1, wherein the face detection model adopts the PyramidBox network structure.
6. The deep-learning-based method for checking the mask-wearing state of persons in a commercial vehicle according to claim 2, wherein acquiring the mask classification data set used to train the mask-wearing detection binary classification model comprises:
extracting, according to the labeling results, the face region pictures of each image in the mask-occluded face data set, the public data set, and the commercial vehicle historical image data set; labeling the extracted face region pictures to generate a mask-wearing data set and a mask-not-wearing data set; and performing data enhancement on the data in the two data sets.
7. The deep-learning-based method for checking the mask-wearing state of persons in a commercial vehicle according to claim 6, further comprising, before the data enhancement of the data in the mask-wearing data set and the mask-not-wearing data set: filtering the two data sets to delete any picture whose single-side pixel count is below a pixel-count threshold.
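The filtering step of claim 7 amounts to dropping any picture whose shorter side falls below a pixel threshold. A minimal sketch, with sizes as (width, height) pairs and a 32-pixel threshold as an illustrative assumption (the patent does not specify a value):

```python
# Drop pictures whose shorter side is below the pixel-count threshold,
# as in the filtering step of claim 7.
def filter_small(pictures, min_side=32):
    return [(w, h) for (w, h) in pictures if min(w, h) >= min_side]


sizes = [(120, 96), (28, 40), (64, 30), (48, 48)]
print(filter_small(sizes))  # [(120, 96), (48, 48)]
```

Tiny face crops carry too little texture for the classifier, so removing them before augmentation keeps low-information samples out of the training set.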
8. The deep-learning-based method for checking the mask-wearing state of persons in a commercial vehicle according to claim 6, further comprising, before the data enhancement of the data in the mask-wearing data set and the mask-not-wearing data set: balancing the two data sets so that the ratio of the number of pictures in the mask-wearing data set to the number of pictures in the mask-not-wearing data set is greater than or equal to 0.4.
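One simple way to satisfy the 0.4 ratio of claim 8 is to undersample the larger class; the claim does not say how the balance is achieved, so the undersampling strategy below is an assumption, with plain lists standing in for picture collections.

```python
# Undersample the larger of the two sets until the smaller-to-larger
# picture-count ratio is at least min_ratio (0.4 per claim 8).
def balance(masked, unmasked, min_ratio=0.4):
    small, large = sorted((masked, unmasked), key=len)
    cap = int(len(small) / min_ratio)   # largest size keeping ratio >= 0.4
    return small, large[:cap]


masked = list(range(100))      # 100 mask-wearing pictures
unmasked = list(range(1000))   # 1000 mask-not-wearing pictures
small, large = balance(masked, unmasked)
print(len(small), len(large))  # 100 250
```

Here the 1000-picture majority class is cut to 250, giving exactly the 0.4 minimum ratio; any larger cut would also satisfy the claim.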
9. The deep-learning-based method for checking the mask-wearing state of persons in a commercial vehicle according to claim 1, wherein the mask-wearing detection binary classification model is a lightweight convolutional neural network model comprising an SE (Squeeze-and-Excitation) module.
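The SE module of claim 9 is the standard Squeeze-and-Excitation block: global average pooling per channel, a two-layer bottleneck with ReLU and sigmoid, then channel-wise rescaling. A numpy sketch with random placeholder weights (the patent does not give channel counts or the reduction ratio, so those are assumptions):

```python
# Squeeze-and-Excitation block: squeeze (global average pool), excitation
# (FC -> ReLU -> FC -> sigmoid), then channel-wise feature reweighting.
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def se_block(x, w1, w2):
    # x: feature map of shape (C, H, W); w1: (C//r, C); w2: (C, C//r)
    s = x.mean(axis=(1, 2))           # squeeze: one statistic per channel
    e = np.maximum(w1 @ s, 0)         # bottleneck FC + ReLU
    scale = sigmoid(w2 @ e)           # FC + sigmoid -> per-channel weights
    return x * scale[:, None, None]   # reweight each channel

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 4               # illustrative sizes, reduction ratio 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = se_block(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because the sigmoid output lies in (0, 1), the block can only attenuate channels, letting a lightweight network emphasize the channels most informative for the mask/no-mask decision at negligible extra cost.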
10. A computer-readable storage medium on which a deep-learning-based program for checking the mask-wearing state of persons in a commercial vehicle is stored, the program, when executed by a processor, implementing the method according to any one of claims 1 to 9.
CN202210405121.9A 2022-04-18 2022-04-18 Method for checking wearing state of person mask in operation vehicle based on deep learning Pending CN114882552A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210405121.9A CN114882552A (en) 2022-04-18 2022-04-18 Method for checking wearing state of person mask in operation vehicle based on deep learning

Publications (1)

Publication Number Publication Date
CN114882552A true CN114882552A (en) 2022-08-09

Family

ID=82670285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210405121.9A Pending CN114882552A (en) 2022-04-18 2022-04-18 Method for checking wearing state of person mask in operation vehicle based on deep learning

Country Status (1)

Country Link
CN (1) CN114882552A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524573A (en) * 2023-05-19 2023-08-01 北京弘治锐龙教育科技有限公司 Abnormal article and mask detection system

Similar Documents

Publication Publication Date Title
CN106314438B (en) The detection method and system of abnormal track in a kind of driver driving track
WO2020042984A1 (en) Vehicle behavior detection method and apparatus
JP6757758B2 (en) How to split images of inspection equipment and vehicles
CN109800682B (en) Driver attribute identification method and related product
CN109460787B (en) Intrusion detection model establishing method and device and data processing equipment
CN110766039B (en) Muck truck transportation state identification method, medium, equipment and muck truck
CN104239847B (en) Driving warning method and electronic device for vehicle
CN115223130A (en) Multi-task panoramic driving perception method and system based on improved YOLOv5
CN112967252A (en) Rail vehicle machine sense hanger assembly bolt loss detection method
CN114882552A (en) Method for checking wearing state of person mask in operation vehicle based on deep learning
CN111310650A (en) Vehicle riding object classification method and device, computer equipment and storage medium
CN116935361A (en) Deep learning-based driver distraction behavior detection method
CN111563468A (en) Driver abnormal behavior detection method based on attention of neural network
Doycheva et al. Computer vision and deep learning for real-time pavement distress detection
Jaworek-Korjakowska et al. SafeSO: interpretable and explainable deep learning approach for seat occupancy classification in vehicle interior
US20200143181A1 (en) Automated Vehicle Occupancy Detection
CN114973156B (en) Night muck car detection method based on knowledge distillation
JP6961041B1 (en) Abnormality notification system and abnormality notification method
CN112837326B (en) Method, device and equipment for detecting carryover
Dhoundiyal et al. Deep Learning Framework for Automated Pothole Detection
CN113139473A (en) Safety belt detection method, device, equipment and medium
Wang et al. Enhancing YOLOv7-Based Fatigue Driving Detection through the Integration of Coordinate Attention Mechanism
Maligalig et al. Machine Vision System of Emergency Vehicle Detection System Using Deep Transfer Learning
CN111401104A (en) Training method, classification method, device, equipment and storage medium of classification model
CN113658112B (en) Bow net anomaly detection method based on template matching and neural network algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination