CN113139444A - Space-time attention mask wearing real-time detection method based on MobileNet V2 - Google Patents

Space-time attention mask wearing real-time detection method based on MobileNet V2

Info

Publication number
CN113139444A
CN113139444A (application number CN202110376357.XA)
Authority
CN
China
Prior art keywords
time
space
mask
wearing
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110376357.XA
Other languages
Chinese (zh)
Inventor
赵晓丽
尹明臣
陈正
张佳颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science filed Critical Shanghai University of Engineering Science
Priority to CN202110376357.XA priority Critical patent/CN113139444A/en
Publication of CN113139444A publication Critical patent/CN113139444A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a space-time attention mask wearing real-time detection method based on MobileNetV2, which comprises the following steps: step one, collecting data samples of different persons wearing masks from the Internet, and cleaning the data samples containing noise data so that the data set consists of face images of qualified quality; step two, expanding the data set; step three, dividing the data set; step four, after configuring a TensorFlow environment on the server, deploying the MobileNetV2 target detection algorithm in the deep learning framework TensorFlow; and step five, importing the validation set data, constructing an improved pyramid structure for feature extraction using the bottleneck structure in MobileNetV2, applying spatial attention to shallow features to strengthen the face region and channel attention to high-level features to strengthen the required semantic features and remove redundant ones, thereby obtaining a new feature pyramid whose features are progressively refined by stepwise upsampling, and so on, realizing an efficient face mask detection task.

Description

Space-time attention mask wearing real-time detection method based on MobileNet V2
Technical Field
The invention relates to the field of real-time detection of mask wearing, in particular to a space-time attention mask wearing real-time detection method based on MobileNet V2.
Background
Infectious viruses are mainly transmitted through droplets and contact, and aerosol transmission can occur under certain special conditions; wearing a mask correctly in ordinary working and living situations meets daily protection requirements. During an epidemic, it is very important that the general public wear masks correctly when going out. To carry out epidemic prevention and control thoroughly, effectively cut off virus propagation paths, firmly curb the spread of the epidemic, and ensure people's life safety and health, large numbers of epidemic-prevention personnel and checkpoints must be set up in public places such as communities, schools, workplaces, canteens and stations to check, one by one, whether people entering and leaving wear masks. This manual approach, however, consumes a large amount of human resources, and missed detections can occur when the flow of people is large. Moreover, because of the high contagiousness of the epidemic and the complexity of public places, manual inspection alone inevitably suffers from high infection risk, heavy workload, narrow coverage and poor real-time performance, so automatic machine detection of mask wearing is urgently needed. Realizing automatic mask-wearing detection based on computer vision can effectively alleviate the series of problems faced by manual detection and has important research significance.
In recent years, research on algorithms dedicated to face mask wearing detection has been scarce. RetinaFace proposed by Deng et al., the natural-scene mask wearing detection algorithm based on improved RetinaFace proposed by Niu et al., and the mask wearing detection algorithm for complex scenes based on improved YoloV3 proposed by Wang et al. are all affected by complex factors present in natural scenes, such as occlusion, crowd density and small scale, so their detection results are not ideal.
At present, some real-time mask wearing detection methods address the complex computation and large memory footprint of conventional improved deep network models, but their overall efficiency is low, and the network models they use cannot be effectively deployed on mobile terminals while maintaining good accuracy.
Disclosure of Invention
Therefore, the invention provides a space-time attention mask wearing real-time detection method based on MobileNet V2, which can effectively solve the technical problems of low detection efficiency caused by complex computation of a depth network model and large memory occupation in the prior art.
In order to achieve the purpose, the invention provides a space-time attention mask wearing real-time detection method based on MobileNet V2, which comprises the following steps:
step one, collecting data samples of different persons wearing masks from the Internet, and cleaning the data samples containing noise data so that the data set consists of face images of qualified quality;
step two, carrying out random cutting, rotation and mirror image operation on the cleaned data set so as to expand the data set;
step three, randomly shuffling the expanded data set with the folder as the basic unit, and then dividing it into a training set, a validation set and a test set in the proportions 85%, 5% and 10%;
step four, configuring a TensorFlow environment on the server and installing the packages required for image processing, so as to deploy the MobileNetV2 target detection algorithm in the deep learning framework TensorFlow;
step five, importing the validation set data, constructing an improved pyramid structure for feature extraction using the bottleneck blocks in MobileNetV2, applying spatial attention to shallow features to strengthen the face region, and applying channel attention to high-level features to strengthen the required semantic features and remove redundant ones, thereby obtaining a new feature pyramid whose features are progressively refined by stepwise upsampling;
step six, importing the processed training set into the algorithm framework for training to obtain the final features, using the trained face detector during training to extract and segment the face frame, and then performing binary classification on the segmented region to judge whether the face wears a mask;
and step seven, testing the pictures, the videos and the real-time monitoring data in the test set respectively.
Further, the tested metrics are precision and recall, whose calculation formulas are as follows:
precision=TP/(TP+FP);
recall=TP/(TP+FN);
in the formulas, TP denotes positive samples correctly predicted as positive, FP denotes negative samples wrongly predicted as positive, TN denotes negative samples correctly predicted as negative, and FN denotes positive samples wrongly predicted as negative;
when this calculation is completed, the average precision AP and the mean average precision mAP are calculated, with the following formulas:
AP=∫_0^1 precision(recall) d(recall);
mAP=(1/N)∑_{i=1}^{N} AP_i, where N is the number of categories;
further, in order to evaluate the robustness of the method, mask detection is carried out on single people, multiple people, dense people, people with shielding in special situations, and people under real-time monitoring in video and natural scenes in a mask detection system.
Furthermore, the single-person detection facilitates observation and analysis so as to improve the precision and operability of the model for face mask detection; the multi-person test verifies the generalization capability of the model; occlusion in the special cases includes covering the mouth and covering the nose; and the test on videos verifies the effect of the algorithm on mask wearing detection in natural scenes.
Further, collecting data using the Internet in step one refers to crawling and downloading publicly available data on the Internet.
Further, in the first step, a bilateral filter function is adopted for cleaning the data sample containing the noise data.
Further, the proportions in step three are divided using the hold-out method.
Further, the feature extraction in step five relies on convolution operations; using the concept of the local receptive field, spatial information and channel information are fused to extract informative features.
Further, in step six, anchor-based segmentation is adopted when the trained face detector extracts and segments the face frame; the two classes are wearing a mask and not wearing a mask.
Further, the test in step seven is carried out by inputting the test samples into the trained network.
Compared with the prior art, the method addresses the complex computation and large memory footprint of conventional improved deep network models. Drawing on recently published lightweight networks and inspired by MobileNet proposed by Google, it takes MobileNetV2 as the backbone network and makes partial improvements on this basis to realize an efficient face mask detection task. The improved network model can be effectively deployed on mobile terminals while maintaining good accuracy.
Further, in view of the poor flexibility of existing monitoring technology, the invention uses a movable camera to detect and classify targets. The movable camera is highly flexible and can monitor targets in real time in large-scale scenes and respond promptly, effectively solving the problems of fixed cameras such as poor flexibility and small monitoring area.
Drawings
FIG. 1 is a schematic flow chart of a space-time attention mask wearing real-time detection method based on MobileNet V2 according to the present invention;
FIG. 2 is a schematic diagram of a real-time detection network for wearing a spatiotemporal attention mask according to the present invention based on MobileNet V2;
FIG. 3 is a diagram of the Inverted Residual Block convolution process of MobileNetV2 used in the invention;
FIG. 4 is a high-definition image after cleaning of a space-time attention mask wearing real-time detection method based on MobileNet V2 according to the present invention;
FIG. 5 is a diagram of a single person mask detection result of the space-time attention mask wearing real-time detection method based on MobileNet V2 according to the present invention;
FIG. 6 is a diagram showing the detection result of a multi-person mask by the space-time attention mask wearing real-time detection method based on MobileNet V2;
FIG. 7 is a mask detection result diagram of a special case of the space-time attention mask wearing real-time detection method based on MobileNet V2 according to the present invention;
FIG. 8 is a video mask detection result diagram in a natural scene of the space-time attention mask wearing real-time detection method based on MobileNet V2 according to the present invention;
fig. 9 is a diagram of mask detection results under real-time monitoring in a natural scene of the space-time attention mask wearing real-time detection method based on MobileNetV2.
Detailed Description
In order that the objects and advantages of the invention will be more clearly understood, the invention is further described below with reference to examples; it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and do not limit the scope of the present invention.
It should be noted that in the description of the present invention, the terms of direction or positional relationship indicated by the terms "upper", "lower", "left", "right", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, which are only for convenience of description, and do not indicate or imply that the device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Referring to fig. 1-9, the present invention provides a space-time attention mask wearing real-time detection method based on MobileNetV2, comprising:
step one, collecting data samples of different persons wearing masks from the Internet, and cleaning the data samples containing noise data so that the data set consists of face images of qualified quality;
step two, carrying out random cutting, rotation and mirror image operation on the cleaned data set so as to expand the data set;
step three, randomly shuffling the expanded data set with the folder as the basic unit, and then dividing it into a training set, a validation set and a test set in the proportions 85%, 5% and 10%;
step four, configuring a TensorFlow environment on the server and installing the packages required for image processing, so as to deploy the MobileNetV2 target detection algorithm in the deep learning framework TensorFlow;
step five, importing the validation set data, constructing an improved pyramid structure for feature extraction using the bottleneck blocks in MobileNetV2, applying spatial attention to shallow features to strengthen the face region, and applying channel attention to high-level features to strengthen the required semantic features and remove redundant ones, thereby obtaining a new feature pyramid whose features are progressively refined by stepwise upsampling;
step six, importing the processed training set into the algorithm framework for training to obtain the final features, using the trained face detector during training to extract and segment the face frame, and then performing binary classification on the segmented region to judge whether the face wears a mask;
and step seven, testing the pictures, the videos and the real-time monitoring data in the test set respectively.
In the embodiment of the invention, expanding the data set enhances sample diversity, so that the model maintains high generalization capability and overfitting during training is prevented; a face of qualified quality means that the proportion of the face in the image is appropriate. The random cropping, rotation and mirroring of the cleaned data set are performed in code, which is a conventional technique in the field; expanding the data set refers to data augmentation; the purpose of random shuffling is to give the data a degree of randomness. The training set is used for network learning, the validation set for finding the best model parameters, and the test set for checking the performance of the algorithm. The channel attention applied to high-level features is an attention mechanism; removing redundant semantic features refers to increasing the weights of useful features and decreasing the weights of useless ones within the attention mechanism. Whether a face wears a mask can be judged directly by the algorithm of the invention.
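As a concrete illustration of the data expansion described above, the random cropping, rotation and mirroring can be sketched as follows. This is a minimal numpy sketch under stated assumptions: the 80% crop ratio, the restriction of rotation to 90-degree steps, and the function names are illustrative choices, not the patent's implementation; a production pipeline would typically use tf.image operations.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Return a randomly cropped, rotated, and mirrored copy of `image`.

    A minimal numpy sketch of the expansion step (random crop, rotation,
    mirror). Rotation is limited to 90-degree steps for simplicity; the
    crop ratio of 0.8 is an illustrative assumption.
    """
    h, w = image.shape[:2]
    # random crop to 80% of each side
    ch, cw = int(h * 0.8), int(w * 0.8)
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    out = image[y:y + ch, x:x + cw]
    # random rotation by a multiple of 90 degrees
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    # random horizontal mirror
    if rng.random() < 0.5:
        out = np.fliplr(out)
    return out

sample = np.zeros((100, 100, 3), dtype=np.uint8)
augmented = [augment(sample) for _ in range(5)]
```

Applying `augment` several times to each cleaned image yields the expanded data set.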
Specifically, the tested metrics are precision and recall, whose calculation formulas are as follows:
precision=TP/(TP+FP);
recall=TP/(TP+FN);
in the formulas, TP denotes positive samples correctly predicted as positive, FP denotes negative samples wrongly predicted as positive, TN denotes negative samples correctly predicted as negative, and FN denotes positive samples wrongly predicted as negative;
when this calculation is completed, the average precision AP and the mean average precision mAP are calculated, with the following formulas:
AP=∫_0^1 precision(recall) d(recall);
mAP=(1/N)∑_{i=1}^{N} AP_i, where N is the number of categories;
the accuracy rate in the embodiment of the present invention indicates a ratio of a true positive sample in a sample predicted as a positive sample, the recall rate indicates a ratio of a positive sample successfully predicted as a positive sample, and the Average accuracy ap (Average precision), and the Average accuracy Average map (mean Average precision) indicates an Average of Average accuracies of all categories, which reflects an overall target detection effect.
Specifically, in order to evaluate the robustness of the method, mask detection is carried out in the mask detection system on single persons, multiple persons, dense crowds, persons occluded in special cases, and persons under real-time monitoring in videos and natural scenes.
Specifically, the single-person detection facilitates observation and analysis so as to improve the precision and operability of the model for face mask detection; the multi-person test verifies the generalization capability of the model; occlusion in the special cases includes covering the mouth and covering the nose; and the test on videos verifies the effect of the algorithm on mask wearing detection in natural scenes.
Specifically, collecting data using the Internet in step one refers to crawling and downloading publicly available data on the Internet.
Specifically, in the first step, a bilateral filter function is adopted to clean the data samples containing the noise data.
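To illustrate the bilateral filtering used for cleaning, here is a tiny pure-numpy sketch for grayscale images. It is illustrative only: in practice OpenCV's cv2.bilateralFilter (or an equivalent library routine) would be used, and the kernel radius and sigma values below are assumed.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Naive bilateral filter for a 2-D grayscale image.

    Each output pixel is a weighted mean of its neighborhood, where the
    weight combines a spatial Gaussian (distance) and a range Gaussian
    (intensity difference), so noise is smoothed while edges are kept.
    Edge pixels are handled by reflection padding. Parameters are assumed.
    """
    img = img.astype(np.float64)
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # spatial kernel
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-((window - img[i, j])**2) / (2 * sigma_r**2))
            weights = spatial * rangew
            out[i, j] = (weights * window).sum() / weights.sum()
    return out

noisy = np.full((8, 8), 100.0)
noisy[4, 4] = 160.0  # a single noise spike
clean = bilateral_filter(noisy)
```

The spike at (4, 4) is pulled toward its neighborhood while uniform regions are left essentially unchanged, which is the behavior the cleaning step relies on.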
Specifically, the proportions in step three are divided using the hold-out method.
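The hold-out division into 85%/5%/10% can be sketched as follows (the function name and the fixed seed are illustrative assumptions; any reproducible shuffle would do):

```python
import random

def holdout_split(samples, train=0.85, val=0.05, seed=0):
    """Hold-out split into train/validation/test at 85%/5%/10%.

    The samples are shuffled first so the partition is random, as in
    step three; the remainder after train and validation is the test set.
    """
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train = int(n * train)
    n_val = int(n * val)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

train_set, val_set, test_set = holdout_split(range(1000))
```

In the patent's pipeline the shuffled units are folders rather than individual files, but the partitioning logic is the same.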
Specifically, the feature extraction in step five is a convolution operation; using the concept of the local receptive field, spatial information and channel information are fused to extract informative features.
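The spatial attention applied to shallow features and the channel attention applied to high-level features can be illustrated with a simplified numpy sketch. The exact layer layout of the patented network is not disclosed, so the CBAM-style spatial gate and SE-style channel gate below, including their fusion and excitation formulas, are assumptions chosen only to show the mechanism.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Spatial attention for shallow features: a per-pixel gate built from
    the channel-wise mean and max, emphasising face regions. The additive
    fusion here stands in for the learned convolution of a real network."""
    avg = feat.mean(axis=-1, keepdims=True)  # H x W x 1
    mx = feat.max(axis=-1, keepdims=True)    # H x W x 1
    gate = sigmoid(avg + mx)                 # hypothetical fusion
    return feat * gate

def channel_attention(feat):
    """Channel attention for high-level features: squeeze (global average
    pool) then a sigmoid gate that reweights channels, boosting useful
    semantics and suppressing redundant ones (SE-style sketch; a real
    network would learn this excitation with fully connected layers)."""
    squeeze = feat.mean(axis=(0, 1))             # one value per channel
    gate = sigmoid(squeeze - squeeze.mean())     # hypothetical excitation
    return feat * gate

shallow = np.random.default_rng(0).random((8, 8, 16))
refined = channel_attention(spatial_attention(shallow))
```

Both gates lie in (0, 1), so the attention reweights features without changing the tensor shape, which is what allows them to be inserted between pyramid levels.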
Specifically, in step six, anchor-based segmentation is adopted when the trained face detector extracts and segments the face frame; the two classes are wearing a mask and not wearing a mask.
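The detect-then-classify flow of step six can be sketched as follows. The `detector` and `classifier` arguments are toy stand-ins for the trained face detector and the binary mask classifier; the function name, box format and threshold are assumptions for illustration.

```python
def detect_and_classify(image, detector, classifier, threshold=0.5):
    """Sketch of step six: the face detector returns boxes (x, y, w, h),
    each crop is cut from the image, and a binary classifier labels it
    mask / no_mask. `detector` and `classifier` are stand-ins for the
    trained models."""
    results = []
    for (x, y, w, h) in detector(image):
        crop = [row[x:x + w] for row in image[y:y + h]]
        score = classifier(crop)  # interpreted as P(mask)
        label = "mask" if score >= threshold else "no_mask"
        results.append(((x, y, w, h), label))
    return results

# toy stand-ins for the trained models
image = [[0] * 10 for _ in range(10)]
fake_detector = lambda img: [(1, 1, 4, 4), (5, 5, 3, 3)]
fake_classifier = lambda crop: 0.9 if len(crop) == 4 else 0.2
labels = [lab for _, lab in detect_and_classify(image, fake_detector, fake_classifier)]
```

In the actual system the detector is the anchor-based face detector and the classifier is the trained binary head, but the control flow is the same.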
Specifically, the test in step seven is carried out by inputting the test samples into the trained network.
Example 1
In order to verify real-time detection, the invention adopts an external camera to carry out real-time test, and obtains very good detection effect (as shown in figure 9).
Example 2
In order to verify the generalization of the model, the method is compared with existing face mask detection algorithms. As can be seen from Table 1, for face mask target detection the algorithm of the invention achieves a clear improvement: compared with the RetinaFace algorithm and Attention-RetinaFace, the AP values increase by 21.3% and 13.1% respectively, and the mAP values by 12.3% and 16.5% respectively; compared with YoloV3 and improved YoloV3, the AP values increase by 17.3% and 2.4% respectively, while the mAP values increase by 15.1% and 4% respectively. For plain face target detection the algorithm also obtains better results.
Table 1 experimental comparison data
(Table 1 is reproduced as an image in the original publication and is not available in this text.)
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A space-time attention mask wearing real-time detection method based on MobileNet V2 is characterized by comprising the following steps:
step one, collecting data samples of different persons wearing masks from the Internet, and cleaning the data samples containing noise data so that the data set consists of face images of qualified quality;
step two, carrying out random cutting, rotation and mirror image operation on the cleaned data set so as to expand the data set;
step three, randomly shuffling the expanded data set with the folder as the basic unit, and then dividing it into a training set, a validation set and a test set in the proportions 85%, 5% and 10%;
step four, configuring a TensorFlow environment on the server and installing the packages required for image processing, so as to deploy the MobileNetV2 target detection algorithm in the deep learning framework TensorFlow;
step five, importing the validation set data, constructing an improved pyramid structure for feature extraction using the bottleneck blocks in MobileNetV2, applying spatial attention to shallow features to strengthen the face region, and applying channel attention to high-level features to strengthen the required semantic features and remove redundant ones, thereby obtaining a new feature pyramid whose features are progressively refined by stepwise upsampling;
step six, importing the processed training set into the algorithm framework for training to obtain the final features, using the trained face detector during training to extract and segment the face frame, and then performing binary classification on the segmented region to judge whether the face wears a mask;
and step seven, testing the pictures, the videos and the real-time monitoring data in the test set respectively.
2. The space-time attention mask wearing real-time detection method based on MobileNetV2 as claimed in claim 1, wherein the tested metrics are precision and recall, calculated as follows:
precision=TP/(TP+FP);
recall=TP/(TP+FN);
in the formulas, TP denotes positive samples correctly predicted as positive, FP denotes negative samples wrongly predicted as positive, TN denotes negative samples correctly predicted as negative, and FN denotes positive samples wrongly predicted as negative;
when this calculation is completed, the average precision AP and the mean average precision mAP are calculated, with the following formulas:
AP=∫_0^1 precision(recall) d(recall);
mAP=(1/N)∑_{i=1}^{N} AP_i, where N is the number of categories.
3. The space-time attention mask wearing real-time detection method based on MobileNetV2 as claimed in claim 1, wherein, in order to evaluate the robustness of the method, mask detection is performed in the mask detection system on single persons, multiple persons, dense crowds, persons occluded in special cases, and persons under real-time monitoring in videos and natural scenes.
4. The space-time attention mask wearing real-time detection method based on MobileNetV2 as claimed in claim 3, wherein the single-person detection facilitates observation and analysis so as to improve the precision and operability of the model for face mask detection; the multi-person test verifies the generalization capability of the model; occlusion in the special cases includes covering the mouth and covering the nose; and the test on videos verifies the effect of the algorithm on mask wearing detection in natural scenes.
5. The space-time attention mask wearing real-time detection method based on MobileNetV2 as claimed in claim 1, wherein step one comprises crawling and downloading publicly available data on the Internet.
6. The space-time attention mask wearing real-time detection method based on the MobileNetV2 as claimed in claim 1, wherein the first step is to apply a bilateral filter function to clean data samples containing noise data.
7. The space-time attention mask wearing real-time detection method based on MobileNetV2 as claimed in claim 1, wherein the proportions in step three are divided using the hold-out method.
8. The space-time attention mask wearing real-time detection method based on MobileNetV2 as claimed in claim 1, wherein the feature extraction in step five is a convolution operation; using the concept of the local receptive field, spatial information and channel information are fused to extract informative features.
9. The space-time attention mask wearing real-time detection method based on MobileNetV2 as claimed in claim 1, wherein, in step six, anchor-based segmentation is adopted when the trained face detector extracts and segments the face frame; the two classes are wearing a mask and not wearing a mask.
10. The space-time attention mask wearing real-time detection method based on MobileNetV2 as claimed in claim 1, wherein the test in step seven is performed by inputting the test samples into the trained network.
CN202110376357.XA 2021-04-06 2021-04-06 Space-time attention mask wearing real-time detection method based on MobileNet V2 Pending CN113139444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110376357.XA CN113139444A (en) 2021-04-06 2021-04-06 Space-time attention mask wearing real-time detection method based on MobileNet V2

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110376357.XA CN113139444A (en) 2021-04-06 2021-04-06 Space-time attention mask wearing real-time detection method based on MobileNet V2

Publications (1)

Publication Number Publication Date
CN113139444A true CN113139444A (en) 2021-07-20

Family

ID=76810610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110376357.XA Pending CN113139444A (en) 2021-04-06 2021-04-06 Space-time attention mask wearing real-time detection method based on MobileNet V2

Country Status (1)

Country Link
CN (1) CN113139444A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663912A (en) * 2022-02-25 2022-06-24 青岛图灵科技有限公司 Method and device for intelligently detecting whether dressing of police is standard, electronic equipment and storage medium
CN114782931A (en) * 2022-04-22 2022-07-22 电子科技大学 Driving behavior classification method for improved MobileNetv2 network
CN115600643A * 2022-10-17 2023-01-13 University of Science and Technology of China Method and system for rapidly predicting toxic gas

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364023A (en) * 2018-02-11 2018-08-03 北京达佳互联信息技术有限公司 Attention model-based image recognition method and system
CN110188817A (en) * 2019-05-28 2019-08-30 厦门大学 Real-time high-performance semantic segmentation method for street view images based on deep learning
CN110363137A (en) * 2019-07-12 2019-10-22 创新奇智(广州)科技有限公司 Face detection optimization model, method, system and electronic equipment
CN110516529A (en) * 2019-07-09 2019-11-29 杭州电子科技大学 Feeding detection method and system based on deep-learning image processing
CN111680637A (en) * 2020-06-10 2020-09-18 深延科技(北京)有限公司 Mask detection method and detection system based on deep learning and image recognition technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NIU Zuodong, QIN Tao, LI Handong, CHEN Jinjun: "Mask-wearing detection algorithm for natural scenes based on improved RetinaFace", Computer Engineering and Applications *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663912A (en) * 2022-02-25 2022-06-24 青岛图灵科技有限公司 Method, device, electronic equipment and storage medium for intelligently detecting whether police officers' dress meets the standard
CN114782931A (en) * 2022-04-22 2022-07-22 电子科技大学 Driving behavior classification method based on an improved MobileNetV2 network
CN114782931B (en) * 2022-04-22 2023-09-29 电子科技大学 Driving behavior classification method based on an improved MobileNetV2 network
CN115600643A (en) * 2022-10-17 2023-01-13 中国科学技术大学 Method and system for rapidly predicting toxic gas

Similar Documents

Publication Publication Date Title
CN113139444A (en) Space-time attention mask wearing real-time detection method based on MobileNet V2
CN109819208B (en) Intensive population security monitoring management method based on artificial intelligence dynamic monitoring
CN108090458B (en) Human body falling detection method and device
CN111881726B (en) Living body detection method and device and storage medium
CN112085010A (en) Mask detection and deployment system and method based on image recognition
CN109858371A (en) Face recognition method and device
CN111091110A (en) Wearing identification method of reflective vest based on artificial intelligence
CN105022999A (en) Man code company real-time acquisition system
CN112115775A (en) Smoking behavior detection method based on computer vision in monitoring scene
CN109829997A (en) Staff attendance method and system
RU2713876C1 (en) Method and system for detecting alarm events when interacting with self-service device
CN113657150A (en) Fall detection method and device and computer readable storage medium
CN112257643A (en) Smoking behavior and calling behavior identification method based on video streaming
Nadeem et al. A survey of artificial intelligence and internet of things (IoT) based approaches against COVID-19
Zhu et al. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks
CN113887318A (en) Embedded power violation detection method and system based on edge computing
Chachere et al. Real Time Face Mask Detection by using CNN
CN113506274B (en) Detection system for human cognitive condition based on visual saliency difference map
CN103986882A (en) Method for image classification, transmission and processing in real-time monitoring system
Gupta et al. Multilevel Face Mask Detection System using Ensemble based Convolution Neural Network
CN109684990A (en) Video-based phone-call behavior detection method
CN113052139A (en) Climbing behavior detection method and system based on a deep-learning two-stream network
Limbasiya et al. COVID-19 face mask and social distancing detector using machine learning
CN115049875A (en) Detection method for wearing insulating gloves in transformer substation based on deep learning
CN106846527B (en) Face recognition-based attendance system
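The core application and several of the documents above build on MobileNetV2, whose real-time performance comes from replacing standard convolutions with depthwise separable convolutions (a depthwise convolution followed by a 1x1 pointwise convolution). As an illustrative sketch only, not the patent's implementation, the parameter savings of that substitution can be computed directly; the function names here are our own:

```python
# Illustrative comparison of parameter counts for a standard convolution
# versus the depthwise separable convolution used in MobileNetV2.
# This is a sketch for intuition, not code from the patent.

def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1x1 pointwise convolution (no bias)."""
    depthwise = c_in * k * k       # one k x k filter per input channel
    pointwise = c_in * c_out       # 1x1 convolution mixing channels
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 32, 64, 3
    std = standard_conv_params(c_in, c_out, k)
    sep = separable_conv_params(c_in, c_out, k)
    print(f"standard: {std}, separable: {sep}, savings: {std / sep:.1f}x")
```

For a typical 3x3 layer with 32 input and 64 output channels, the separable form needs roughly an eighth of the parameters, which is what makes a MobileNetV2 backbone practical for real-time mask-wearing detection on modest hardware.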

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210720