CN113591717A - Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm - Google Patents

Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm

Info

Publication number
CN113591717A
CN113591717A (application CN202110876557.1A)
Authority
CN
China
Prior art keywords
motor vehicle
improved yolov3
helmet
image
model
Prior art date
Legal status
Withdrawn
Application number
CN202110876557.1A
Other languages
Chinese (zh)
Inventor
郑水华
徐逸伦
孙泽楠
林伟
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202110876557.1A
Publication of CN113591717A
Legal status: Withdrawn

Classifications

    • G06F 18/2415 — Pattern recognition; classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06F 18/253 — Pattern recognition; fusion techniques of extracted features
    • G06N 3/045 — Neural networks; architecture; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G08G 1/0125 — Traffic control systems for road vehicles; traffic data processing
    • G08G 1/0137 — Measuring and analyzing parameters relative to traffic conditions for specific applications

Abstract

The invention discloses a non-motor vehicle helmet wearing detection method based on an improved YOLOv3 algorithm, comprising the following steps: acquiring a traffic monitoring video stream, processing it, applying data enhancement, and building a training data set; constructing an improved YOLOv3 target detection model; training the improved YOLOv3 model on the training data set and loading the best trained weight file into the model to obtain a non-motor vehicle helmet detection network; and reading the traffic monitoring video frame images to be detected and outputting the corresponding helmet detection results with the non-motor vehicle helmet detection network. The invention overcomes the low efficiency of traditional manual inspection and improves both the speed and the accuracy of non-motor vehicle helmet wearing detection.

Description

Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm
Technical Field
The invention relates to the technical field of target detection of computer vision, in particular to a non-motor vehicle helmet wearing detection method based on an improved YOLOv3 algorithm.
Background
With the continuous development of computer vision, target detection has been widely applied in industry, and road traffic monitoring is an important application field. In recent years, collisions between non-motor vehicles and motor vehicles have occurred frequently. The helmet, as the only safety device available to a non-motor vehicle rider, can effectively reduce the injuries a rider suffers in a traffic accident, so it is necessary to detect whether non-motor vehicle riders on the road are wearing helmets. However, the rapid growth in the number of vehicles on the road poses new challenges: traditional manual inspection is slow and inefficient. YOLOv3, a recent network model in the target detection field, treats detection as a simple regression problem and predicts the class and position of objects simultaneously in real time. A non-motor vehicle helmet wearing detection method based on an improved YOLOv3 algorithm is therefore of great significance.
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide a non-motor vehicle helmet wearing detection method based on an improved YOLOv3 algorithm, which detects the helmet wearing condition of non-motor vehicle riders and improves both the speed and the accuracy of the detection.
The technical scheme of the invention is as follows:
a non-motor vehicle helmet wearing detection method based on an improved YOLOv3 algorithm comprises the following steps:
1) establishing a training data set by a data enhancement method on the basis of a video stream provided by a traffic monitoring device;
2) constructing an improved YOLOv3 target detection algorithm model;
3) sending the training data set into an improved YOLOv3 algorithm model for training until the model converges, namely the loss function value of the model is lower than a preset threshold value;
4) inputting a real-time traffic monitoring video stream into the trained improved YOLOv3 model, detecting the positions of non-motor vehicles, riders and non-motor vehicle helmets in the stream to determine whether each non-motor vehicle rider is wearing a helmet, and marking any rider who is not; an overload flag is applied when more than two riders are detected on the same non-motor vehicle.
Further, in step 1), road traffic video recorded by traffic monitoring equipment, at a resolution of 1920×1080, is used as sample data. The sample data is converted into an image sequence at 25 frames per second; one video image is captured every 10 frames as the image data set, and images containing no non-motor vehicle are removed. The resulting images are expanded by data enhancement, specifically image rotation, target occlusion, and added simulated noise: image rotation rotates an original image clockwise by 90°, 180° and 270° to obtain new images; target occlusion covers different parts of a target object in the original image with black rectangular blocks to obtain new images; and added simulated noise applies rain-and-fog noise to the original image using an existing rain-fog noise algorithm. The training data set obtained by data enhancement is annotated with multiple labels in LabelImg, which automatically generates a corresponding annotation file in xml format containing the object name and bounding-box coordinates; the annotation categories are non-motor vehicle, rider, and non-motor vehicle helmet.
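As an illustrative sketch (not the patent's implementation), the rotation and occlusion enhancements described above might look like the following in Python with NumPy; function names are assumptions, and the rain-fog noise step is omitted because the patent defers it to an existing algorithm.

```python
import numpy as np

def rotate_copies(img):
    """Return the three clockwise rotations (90, 180, 270 degrees) of an image."""
    # np.rot90 rotates counter-clockwise, so k=3 gives one 90-degree clockwise turn
    return [np.rot90(img, k) for k in (3, 2, 1)]

def occlude(img, top, left, height, width):
    """Return a copy with a black rectangle covering part of the target object."""
    out = img.copy()
    out[top:top + height, left:left + width] = 0
    return out

# toy 4x4 single-channel "image"
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
rots = rotate_copies(img)          # three rotated copies
occ = occlude(img, 1, 1, 2, 2)     # central 2x2 block blacked out
```

Each source image thus yields up to four new training images before noise simulation is applied.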
Further, the improved YOLOv3 model constructed in step 2) comprises a feature extraction network module, a spatial pyramid module, a feature fusion module, and a multi-classifier module. The feature extraction network module adopts the existing Darknet53 network, which takes a three-channel 256×256 image as input and outputs three feature maps of sizes 32×32, 16×16 and 8×8. The spatial pyramid module applies maximum pooling operations to the 8×8 feature map output by the feature extraction network and splices the results to obtain a pooled feature map. The feature fusion module performs a concat operation on the pooled feature map and the feature maps from the feature extraction network to complete the fusion. The multi-classifier module classifies the fused feature map with a logistic function to obtain the final target detection result.
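A minimal NumPy sketch of the spatial-pyramid pooling and channel-wise concat described above; the kernel sizes (5, 9, 13) are an assumption borrowed from common SPP variants and are not stated in the patent.

```python
import numpy as np

def maxpool_same(x, k):
    """Stride-1 max pooling with 'same' padding on an (H, W, C) feature map."""
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)), constant_values=-np.inf)
    # sliding (k, k) windows over the two spatial axes -> (H, W, C, k, k)
    win = np.lib.stride_tricks.sliding_window_view(xp, (k, k), axis=(0, 1))
    return win.max(axis=(-2, -1))

def spp(x, kernels=(5, 9, 13)):
    """Splice the input with its pooled copies along the channel axis."""
    return np.concatenate([x] + [maxpool_same(x, k) for k in kernels], axis=-1)

feat = np.random.rand(8, 8, 4).astype(np.float32)   # stand-in for the 8x8 map
out = spp(feat)                                     # channels: 4 * (1 + 3) = 16
```

Because the pooling uses stride 1 with same padding, the spatial size is preserved and only the channel count grows, which is what allows the later concat with the backbone feature maps.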
Further, the loss function $J(\theta, X, Y)$ of the improved YOLOv3 target detection model in step 2) is replaced by the loss function $loss$ given by:

$$
\begin{aligned}
loss ={}& \lambda_{coord}\sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{coord}\sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj}\left[\left(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2
 + \lambda_{noobj}\sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{s^2-1} \mathbb{1}_{i}^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
$$

where $\lambda_{coord}$ is the weight coefficient of the coordinate loss and $\lambda_{noobj}$ the weight coefficient of the confidence loss for boxes containing no object; $s^2$ is the number of grid cells the image is divided into, and $B$ is the number of prediction boxes per cell. When the $j$-th prediction box of the $i$-th cell is the correct prediction box, $\mathbb{1}_{ij}^{obj}$ is 1 and $\mathbb{1}_{ij}^{noobj}$ is 0; otherwise $\mathbb{1}_{ij}^{obj}$ is 0 and $\mathbb{1}_{ij}^{noobj}$ is 1. $x_i, y_i$ are the ground-truth centre coordinates of the target object assigned to the $i$-th cell, and $\hat{x}_i, \hat{y}_i$ the predicted centre coordinates. $\omega_i, h_i$ are the ground-truth width and height of that object, and $\hat{\omega}_i, \hat{h}_i$ the width and height of its prediction box. $C_i$ is the true confidence of the target object and $\hat{C}_i$ the predicted confidence. $p_i(c)$ is the true probability that the target object belongs to class $c$, $\hat{p}_i(c)$ the predicted probability, and $classes$ is the set of class labels.
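The loss above can be sketched in NumPy over a flattened grid; the weight values 5.0 and 0.5 are the conventional YOLO defaults and, like all names here, are assumptions rather than values stated in the patent.

```python
import numpy as np

def yolo_loss(obj, pred, true, lam_coord=5.0, lam_noobj=0.5):
    """Sum-of-squares YOLO loss.
    obj:  (N, B) indicator, 1 where box j of cell i is the correct predictor.
    pred/true: dicts with 'xy' (N, B, 2), 'wh' (N, B, 2), 'conf' (N, B),
               'prob' (N, C) entries."""
    noobj = 1.0 - obj
    # centre-coordinate error, only for responsible boxes
    coord = lam_coord * (obj[..., None] * (true['xy'] - pred['xy']) ** 2).sum()
    # square roots of width/height damp the penalty on large boxes
    size = lam_coord * (obj[..., None]
                        * (np.sqrt(true['wh']) - np.sqrt(pred['wh'])) ** 2).sum()
    conf = (obj * (true['conf'] - pred['conf']) ** 2).sum() \
         + lam_noobj * (noobj * (true['conf'] - pred['conf']) ** 2).sum()
    cell_has_obj = obj.max(axis=1)          # per-cell object indicator
    cls = (cell_has_obj[:, None] * (true['prob'] - pred['prob']) ** 2).sum()
    return coord + size + conf + cls
```

When predictions equal the ground truth every term vanishes, and perturbing any predicted quantity for a responsible box increases the loss.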
Further, in the step 3), the xml markup file in the training data set is analyzed to obtain a train.txt file and a val.txt file, the improved YOLOv3 algorithm model is iteratively trained by using the training data set, and when the loss function loss value of the model is smaller than a preset threshold value, the training is stopped, and the model parameters are saved.
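The annotation parsing in step 3) can be sketched with Python's standard library; the XML layout follows LabelImg's PascalVOC output, and the helper name is an illustrative assumption.

```python
import xml.etree.ElementTree as ET

def parse_annotation(xml_text):
    """Extract (class name, xmin, ymin, xmax, ymax) tuples from LabelImg XML."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter('object'):
        name = obj.findtext('name')
        bb = obj.find('bndbox')
        coords = tuple(int(bb.findtext(k)) for k in ('xmin', 'ymin', 'xmax', 'ymax'))
        boxes.append((name, *coords))
    return boxes

sample = """<annotation>
  <filename>frame_0010.jpg</filename>
  <object><name>helmet</name>
    <bndbox><xmin>40</xmin><ymin>12</ymin><xmax>88</xmax><ymax>60</ymax></bndbox>
  </object>
</annotation>"""
print(parse_annotation(sample))   # [('helmet', 40, 12, 88, 60)]
```

Tuples like these, one list per image, are what a train.txt/val.txt split file would serialize.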
Further, step 4) comprises the following steps:
4.1) converting the real-time traffic monitoring video stream into a sequence of video frame images at 25 frames per second, capturing one video image every 10 frames and feeding it into the improved YOLOv3 model;
4.2) detecting the non-motor vehicle, rider and helmet areas in the image with the improved YOLOv3 model;
4.3) for a given non-motor vehicle area, if more than two rider areas overlap it, the non-motor vehicle is overloaded; the overloaded non-motor vehicle and rider areas are cropped according to the detected area coordinates, and an overload mark is applied;
4.4) for each non-overloaded non-motor vehicle, counting the helmet areas detected within its image area; if the number of helmet areas does not match the number of rider areas, a rider on that non-motor vehicle is not wearing a helmet, and the vehicle is marked.
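The overload and helmet-count rules of steps 4.3) and 4.4) can be sketched as follows; the box format and function names are assumptions, and a simple rectangle-intersection test stands in for the region-overlap check.

```python
def boxes_overlap(a, b):
    """True if two (xmin, ymin, xmax, ymax) boxes intersect at all."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def flag_vehicle(vehicle_box, rider_boxes, helmet_boxes, max_riders=2):
    """Classify one non-motor vehicle as 'overload', 'no_helmet', or 'ok'."""
    riders = [r for r in rider_boxes if boxes_overlap(vehicle_box, r)]
    if len(riders) > max_riders:        # step 4.3: more than two riders
        return 'overload'
    helmets = [h for h in helmet_boxes if boxes_overlap(vehicle_box, h)]
    if len(helmets) != len(riders):     # step 4.4: helmet count mismatch
        return 'no_helmet'
    return 'ok'
```

A production system would likely use an IoU threshold rather than any-intersection, but the counting logic is the same.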
The invention has the following beneficial effects:
1) The model achieves high detection accuracy and a good detection effect for non-motor vehicle helmets.
2) The model generalizes well: by using data enhancement to simulate noisy video captured in rainy and foggy weather, it learns the regularities behind the training data, retains good detection capability on unseen video images, and effectively avoids over-fitting and under-fitting.
3) The model is widely applicable: it can be used with a wide range of video capture devices and achieves good detection accuracy even on low-definition video captured by older equipment.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
as shown in fig. 1, a non-motor vehicle helmet wearing detection method based on the improved YOLOv3 algorithm comprises the following steps:
1) establishing a training data set by a data enhancement method on the basis of a video stream provided by a traffic monitoring device;
Step 1) uses road traffic video recorded by traffic monitoring equipment, at a resolution of 1920×1080, as sample data. The sample data is converted into an image sequence at 25 frames per second; one video image is captured every 10 frames as the image data set, and images containing no non-motor vehicle are removed. The resulting images are expanded by data enhancement, specifically image rotation, target occlusion, and added simulated noise: image rotation rotates an original image clockwise by 90°, 180° and 270° to obtain new images; target occlusion covers different parts of a target object in the original image with black rectangular blocks to obtain new images; and added simulated noise applies rain-and-fog noise to the original image using an existing rain-fog noise algorithm. The training data set obtained by data enhancement is annotated with multiple labels in LabelImg, which automatically generates a corresponding annotation file in xml format containing the object name and bounding-box coordinates; the annotation categories are non-motor vehicle, rider, and non-motor vehicle helmet.
2) Constructing an improved YOLOv3 target detection algorithm model;
Step 2) constructs the improved YOLOv3 model, comprising a feature extraction network module, a spatial pyramid module, a feature fusion module, and a multi-classifier module. The feature extraction network module adopts the existing Darknet53 network, which takes a three-channel 256×256 image as input and outputs three feature maps of sizes 32×32, 16×16 and 8×8. The spatial pyramid module applies maximum pooling operations to the 8×8 feature map output by the feature extraction network and splices the results to obtain a pooled feature map. The feature fusion module performs a concat operation on the pooled feature map and the feature maps from the feature extraction network to complete the fusion. The multi-classifier module classifies the fused feature map with a logistic function to obtain the final target detection result.
Step 2) replaces the loss function $J(\theta, X, Y)$ of the improved YOLOv3 target detection model with the loss function $loss$ given by:

$$
\begin{aligned}
loss ={}& \lambda_{coord}\sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{coord}\sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj}\left[\left(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2
 + \lambda_{noobj}\sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{s^2-1} \mathbb{1}_{i}^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
$$

where $\lambda_{coord}$ is the weight coefficient of the coordinate loss and $\lambda_{noobj}$ the weight coefficient of the confidence loss for boxes containing no object; $s^2$ is the number of grid cells the image is divided into, and $B$ is the number of prediction boxes per cell. When the $j$-th prediction box of the $i$-th cell is the correct prediction box, $\mathbb{1}_{ij}^{obj}$ is 1 and $\mathbb{1}_{ij}^{noobj}$ is 0; otherwise $\mathbb{1}_{ij}^{obj}$ is 0 and $\mathbb{1}_{ij}^{noobj}$ is 1. $x_i, y_i$ are the ground-truth centre coordinates of the target object assigned to the $i$-th cell, and $\hat{x}_i, \hat{y}_i$ the predicted centre coordinates. $\omega_i, h_i$ are the ground-truth width and height of that object, and $\hat{\omega}_i, \hat{h}_i$ the width and height of its prediction box. $C_i$ is the true confidence of the target object and $\hat{C}_i$ the predicted confidence. $p_i(c)$ is the true probability that the target object belongs to class $c$, $\hat{p}_i(c)$ the predicted probability, and $classes$ is the set of class labels.
3) Sending the training data set into an improved YOLOv3 algorithm model for training until the model converges, namely the loss function value of the model is lower than a preset threshold value;
and 3) analyzing the xml label file in the training data set to obtain a train.txt file and a val.txt file, performing iterative training on the improved YOLOv3 algorithm model by using the training data set, stopping training when the loss function loss value of the model is smaller than a preset threshold value, and storing model parameters.
4) Inputting a real-time traffic monitoring video stream into the trained improved YOLOv3 model, detecting the positions of non-motor vehicles, riders and non-motor vehicle helmets in the stream to determine whether each non-motor vehicle rider is wearing a helmet, and marking any rider who is not; an overload flag is applied when more than two riders are detected on the same non-motor vehicle.
the step 4) comprises the following steps:
4.1) converting the real-time traffic monitoring video stream into a video frame image sequence at 25 frames per second, and intercepting a video image every 10 frames and sending the video image into an improved YOLOv3 algorithm model;
4.2) the improved YOLOv3 algorithm model detects non-motor vehicles, riders and helmet areas in the image;
4.3) for the same non-motor vehicle area, if more than two rider areas are detected to be overlapped with the non-motor vehicle area, the non-motor vehicle is overloaded, the overloaded non-motor vehicle and the rider areas are cut according to the area coordinates obtained by detection, and overload marking is carried out;
4.4) counting the number of helmet areas detected in the image area of the non-overloaded non-motor vehicle, and if the number of the helmet areas is inconsistent with the number of the rider areas, indicating that the non-motor vehicle has a rider without wearing a safety helmet, and marking.
The above embodiments are only preferred embodiments of the present invention and are not intended to limit its technical solutions; any solution that can be realized on the basis of the above embodiments without creative effort shall be considered to fall within the protection scope of this patent.

Claims (6)

1. A non-motor vehicle helmet wearing detection method based on an improved YOLOv3 algorithm is characterized by comprising the following steps:
1) establishing a training data set by a data enhancement method on the basis of a video stream provided by a traffic monitoring device;
2) constructing an improved YOLOv3 target detection algorithm model;
3) sending the training data set into an improved YOLOv3 algorithm model for training until the model converges, namely the loss function value of the model is lower than a preset threshold value;
4) inputting a real-time traffic monitoring video stream into the trained improved YOLOv3 model, detecting the positions of non-motor vehicles, riders and non-motor vehicle helmets in the stream to determine whether each non-motor vehicle rider is wearing a helmet, and marking any rider who is not; an overload flag is applied when more than two riders are detected on the same non-motor vehicle.
2. The method for detecting the wearing of a non-motor vehicle helmet based on the improved YOLOv3 algorithm as claimed in claim 1, wherein: step 1) uses road traffic video recorded by traffic monitoring equipment, at a resolution of 1920×1080, as sample data; the sample data is converted into an image sequence at 25 frames per second, one video image is captured every 10 frames as the image data set, and images containing no non-motor vehicle are removed; the resulting images are expanded by data enhancement, specifically image rotation, target occlusion and added simulated noise, wherein image rotation rotates an original image clockwise by 90°, 180° and 270° to obtain new images, target occlusion covers different parts of a target object in the original image with black rectangular blocks to obtain new images, and added simulated noise applies rain-and-fog noise to the original image using an existing rain-fog noise algorithm; and the training data set obtained by data enhancement is annotated with multiple labels in LabelImg, which automatically generates a corresponding annotation file in xml format containing the object name and bounding-box coordinates, the annotation categories being non-motor vehicle, rider, and non-motor vehicle helmet.
3. The method for detecting the wearing of a non-motor vehicle helmet based on the improved YOLOv3 algorithm as claimed in claim 1, wherein: step 2) constructs an improved YOLOv3 model comprising a feature extraction network module, a spatial pyramid module, a feature fusion module and a multi-classifier module; the feature extraction network module adopts a Darknet53 network, which takes a three-channel 256×256 image as input and outputs three feature maps of sizes 32×32, 16×16 and 8×8; the spatial pyramid module applies maximum pooling operations to the 8×8 feature map output by the feature extraction network and splices the results to obtain a pooled feature map; the feature fusion module performs a concat operation on the pooled feature map and the feature maps from the feature extraction network to complete the fusion; and the multi-classifier module classifies the fused feature map with a logistic function to obtain the final target detection result.
4. The method for detecting the wearing of a non-motor vehicle helmet based on the improved YOLOv3 algorithm as claimed in claim 1, wherein: the loss function $J(\theta, X, Y)$ of the improved YOLOv3 target detection model in step 2) is replaced by the loss function $loss$ given by:

$$
\begin{aligned}
loss ={}& \lambda_{coord}\sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{coord}\sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj}\left[\left(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2
 + \lambda_{noobj}\sum_{i=0}^{s^2-1}\sum_{j=0}^{B-1} \mathbb{1}_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{s^2-1} \mathbb{1}_{i}^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
$$

where $\lambda_{coord}$ is the weight coefficient of the coordinate loss and $\lambda_{noobj}$ the weight coefficient of the confidence loss for boxes containing no object; $s^2$ is the number of grid cells the image is divided into, and $B$ is the number of prediction boxes per cell; when the $j$-th prediction box of the $i$-th cell is the correct prediction box, $\mathbb{1}_{ij}^{obj}$ is 1 and $\mathbb{1}_{ij}^{noobj}$ is 0, and otherwise $\mathbb{1}_{ij}^{obj}$ is 0 and $\mathbb{1}_{ij}^{noobj}$ is 1; $x_i, y_i$ are the ground-truth centre coordinates of the target object assigned to the $i$-th cell and $\hat{x}_i, \hat{y}_i$ the predicted centre coordinates; $\omega_i, h_i$ are the ground-truth width and height and $\hat{\omega}_i, \hat{h}_i$ the width and height of the prediction box; $C_i$ is the true confidence and $\hat{C}_i$ the predicted confidence; $p_i(c)$ is the true probability that the target object belongs to class $c$, $\hat{p}_i(c)$ the predicted probability, and $classes$ is the set of class labels.
5. The method for detecting the wearing of a non-motor vehicle helmet based on the improved YOLOv3 algorithm as claimed in claim 1, wherein: and 3) analyzing the xml label file in the training data set to obtain a train.txt file and a val.txt file, performing iterative training on the improved YOLOv3 algorithm model by using the training data set, stopping training when the loss function loss value of the model is smaller than a preset threshold value, and storing model parameters.
6. The method for detecting the wearing of a non-motor vehicle helmet based on the improved YOLOv3 algorithm as claimed in claim 1, wherein step 4) comprises the following steps:
4.1) converting the real-time traffic monitoring video stream into a sequence of video frame images at 25 frames per second, capturing one video image every 10 frames and feeding it into the improved YOLOv3 model;
4.2) detecting the non-motor vehicle, rider and helmet areas in the image with the improved YOLOv3 model;
4.3) for a given non-motor vehicle area, if more than two rider areas overlap it, the non-motor vehicle is overloaded; the overloaded non-motor vehicle and rider areas are cropped according to the detected area coordinates, and an overload mark is applied;
4.4) for each non-overloaded non-motor vehicle, counting the helmet areas detected within its image area; if the number of helmet areas does not match the number of rider areas, a rider on that non-motor vehicle is not wearing a helmet, and the vehicle is marked.
CN202110876557.1A 2021-07-31 2021-07-31 Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm Withdrawn CN113591717A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110876557.1A CN113591717A (en) 2021-07-31 2021-07-31 Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110876557.1A CN113591717A (en) 2021-07-31 2021-07-31 Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm

Publications (1)

Publication Number Publication Date
CN113591717A 2021-11-02

Family

ID=78253511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110876557.1A Withdrawn CN113591717A (en) 2021-07-31 2021-07-31 Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm

Country Status (1)

Country Link
CN (1) CN113591717A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155428A * 2021-11-26 2022-03-08 中国科学院沈阳自动化研究所 Underwater sonar side-scan image small target detection method based on Yolo-v3 algorithm
CN114120366A * 2021-11-29 2022-03-01 上海应用技术大学 Non-motor vehicle helmet detection method based on generation countermeasure network and yolov5
CN114120366B * 2021-11-29 2023-08-25 上海应用技术大学 Non-motor helmet detection method based on generation of countermeasure network and yolov5
CN113903009A * 2021-12-10 2022-01-07 华东交通大学 Railway foreign matter detection method and system based on improved YOLOv3 network

Similar Documents

Publication Publication Date Title
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN113591717A (en) Non-motor vehicle helmet wearing detection method based on improved YOLOv3 algorithm
CN111886609B (en) System and method for reducing data storage in machine learning
CN107563372B (en) License plate positioning method based on deep learning SSD frame
CN103413444B (en) A kind of traffic flow based on unmanned plane HD video is investigated method
Li et al. Stepwise domain adaptation (SDA) for object detection in autonomous vehicles using an adaptive CenterNet
Lin et al. A Real‐Time Vehicle Counting, Speed Estimation, and Classification System Based on Virtual Detection Zone and YOLO
CN107316016A (en) A kind of track of vehicle statistical method based on Hadoop and monitoring video flow
CN109993138A (en) A kind of car plate detection and recognition methods and device
EP2813973B1 (en) Method and system for processing video image
Kavitha et al. Pothole and object detection for an autonomous vehicle using yolo
CN111091023A (en) Vehicle detection method and device and electronic equipment
CN111209923A (en) Deep learning technology-based muck truck cover or uncover identification method
KC Enhanced pothole detection system using YOLOX algorithm
Yebes et al. Learning to automatically catch potholes in worldwide road scene images
CN116597270A (en) Road damage target detection method based on attention mechanism integrated learning network
Kharel et al. Potholes detection using deep learning and area estimation using image processing
Kausar et al. Two-wheeled vehicle detection using two-step and single-step deep learning models
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
CN115861957B (en) Novel dynamic object segmentation method based on sensor fusion
Chen et al. Lane detection algorithm based on inverse perspective mapping
CN110765900A (en) DSSD-based automatic illegal building detection method and system
Li et al. An improved lightweight network based on yolov5s for object detection in autonomous driving
CN115909241A (en) Lane line detection method, system, electronic device and storage medium
CN113392695B (en) Highway truck and wheel axle identification method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211102