CN114723992A - Improved vehicle detection and identification method based on YOLOv5 - Google Patents

Improved vehicle detection and identification method based on YOLOv5

Info

Publication number
CN114723992A
CN114723992A (application CN202210352291.5A)
Authority
CN
China
Prior art keywords
layer
output
network
vehicle detection
connection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210352291.5A
Other languages
Chinese (zh)
Inventor
张开玉
苏雪梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN202210352291.5A
Publication of CN114723992A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an improved vehicle detection and identification method based on YOLOv5. The scheme is: (1) pre-process the public vehicle detection data set BDD100k; (2) introduce a Dense Block module, which reuses feature information through the dense connections of the DenseNet network and thereby alleviates the loss of feature information; (3) add a Transformer network, which strengthens effective features through the Multi-Head Attention mechanism in the Transformer; (4) train and test the improved network. The invention achieves a good detection effect on the public vehicle detection data set BDD100k, improves vehicle detection precision, and can be used in the field of autonomous driving.

Description

Improved vehicle detection and identification method based on YOLOv5
Technical Field
The invention relates to the field of computer vision and image processing, in particular to an improved vehicle detection and identification method based on YOLOv 5.
Background
In recent years, growing vehicle ownership has aggravated road congestion, and the number of traffic accidents, casualties, and economic losses has risen year by year. Advanced Driver Assistance Systems (ADAS) were developed to reduce traffic accidents. Vehicle target detection is a core ADAS technology and plays a vital role in ensuring traffic safety; because driving safety imposes extremely high demands on precision, speed, and environmental robustness, improving detection performance and ease of terminal deployment is an important premise for advancing autonomous driving technology and advanced assistance equipment.
Current mainstream target detection algorithms divide into two-stage detectors, represented by Fast R-CNN, and one-stage detectors, represented by YOLO and SSD. A two-stage algorithm first extracts candidate boxes from the image and then uses a convolutional neural network to locate and classify each candidate region; its precision is high, but it cannot achieve real-time detection. A one-stage algorithm needs no candidate boxes: it regresses target positions directly and outputs classification scores at the same time. It detects faster, but for small targets it still suffers from a limited detection field of view and low precision. Practical applications must find the best balance of precision and speed, so improving the performance of one-stage detectors yields the greater practical value.
With the continuous improvement of the YOLO series, the YOLOv5 algorithm is widely applied to target detection tasks for its high accuracy and speed. However, the background in traffic scenes is very complex and easily confused with the targets, and because shallow features carry little information, small-scale vehicles are difficult to classify and localize accurately. Occlusion between vehicles further reduces the extracted feature information and causes missed detections of target vehicles.
To solve these problems, the present invention proposes an improved vehicle detection and recognition method based on YOLOv5, trained and tested on the public vehicle detection data set BDD100k. A Dense Block module is introduced into the original YOLOv5 backbone and a Transformer network is added; together they alleviate the loss of feature information flow, extract shallow features more finely, reduce missed vehicle detections, and improve vehicle detection precision.
Disclosure of Invention
1. Objects of the invention
To improve the detection accuracy of the YOLOv5 algorithm on small target vehicles, the invention provides an improved vehicle target detection and identification method based on YOLOv5.
2. To realize the above purpose, the invention adopts the following technical scheme:
the invention provides an improved vehicle target detection and identification method based on YOLOv5, which comprises the following specific steps:
(1) the vehicle detection data set BDD100k is acquired and the data set is preprocessed.
(1-1) The data set contains 10000 pictures. Because of the specific application scenario of the invention, the Light, Sign, Person, Bike, and Rider categories in the BDD100k data set are ignored. The json label of each picture is converted into an xml file by a python script; the xml file contains the picture name, picture path, target label name, and target position coordinates.
(1-2) Bus, Truck, Motor, Car, and Train are unified into the Car class, and the pictures and the unified label xml files are stored in VOC format.
Three folders, Annotations, JPEGImages, and ImageSets, are created under the VOC folder to store pictures and labels: the Annotations folder stores each label xml file, and the JPEGImages folder stores all pictures.
(1-3) A python script randomly splits the created VOC-format data set 8:2 into a training set and a test set, and converts the data set labels into the txt format used by YOLO. A minimal sketch of this preprocessing appears below.
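The following Python sketch (not the authors' script) illustrates steps (1-1) to (1-3) in simplified form. It assumes each BDD100k label file is a JSON object with a "labels" list whose entries carry "category" and "box2d" fields, merges the five vehicle categories into a single class, and writes YOLO-format txt labels directly rather than passing through the intermediate VOC xml step:

```python
# Hedged preprocessing sketch. Assumptions: per-image JSON labels with a
# "labels" list of {"category", "box2d": {"x1","y1","x2","y2"}} entries,
# and 1280x720 BDD100k images; adjust both to your dataset release.
import json
import random
from pathlib import Path

KEEP = {"bus", "truck", "motor", "car", "train"}  # all merged into class 0 ("car")

def json_to_yolo_txt(label_path: Path, out_dir: Path, img_w=1280, img_h=720):
    """Convert one BDD100k JSON label file to a single-class YOLO txt file."""
    data = json.loads(label_path.read_text())
    lines = []
    for obj in data.get("labels", []):
        if obj.get("category") not in KEEP or "box2d" not in obj:
            continue                               # Light/Sign/Person/Bike/Rider dropped
        b = obj["box2d"]
        # YOLO txt format: "class x_center y_center width height", normalized to [0, 1]
        xc = (b["x1"] + b["x2"]) / 2 / img_w
        yc = (b["y1"] + b["y2"]) / 2 / img_h
        w = (b["x2"] - b["x1"]) / img_w
        h = (b["y2"] - b["y1"]) / img_h
        lines.append(f"0 {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    (out_dir / (label_path.stem + ".txt")).write_text("\n".join(lines))

def split_dataset(stems, ratio=0.8, seed=0):
    """Randomly split picture stems 8:2 into training and test lists."""
    stems = list(stems)
    random.Random(seed).shuffle(stems)
    cut = int(len(stems) * ratio)
    return stems[:cut], stems[cut:]
```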
(2) Based on the dense-connection idea of the DenseNet network, the residual network structure (Res unit) in the two CSP1_3 modules of the backbone network is replaced with a Dense Block module.
In the original CSP1_3 module, the residual structure only adds the current layer to the immediately preceding layer, so the current layer does not receive all of the vehicle feature information extracted by earlier layers, and key information is easily lost. The improved CSP1_3 module densely connects its convolution layers, which not only prevents model degradation but also reuses features, letting the model converge faster.
(3) A Transformer network is added after the SPP module of the backbone network.
The Transformer is added mainly to obtain vehicle feature information more finely through its Multi-Head Self-Attention mechanism, strengthening feature extraction and speeding up model convergence. The Multi-Head Self-Attention mechanism in the Transformer is composed of several Self-Attention heads, which help the current layer attend not only to the current features but also to global feature information.
(4) Training and testing the improved network:
The number of epochs is set to 300, the batch-size to 4, and the initial learning rate to 0.01. After 300 iterations of training, the loss value and the precision both stabilize, and the optimal model parameters at that point are saved.
Preferably, step (2) replaces the residual network structure units in the two CSP1_3 modules of the backbone network with a Dense Block module:
The Dense Block module includes three convolution layers, each comprising (BN + ReLU + 1×1 Conv) + (BN + ReLU + 3×3 Conv). The growth rate k in the Dense Block is set to 32, so each layer outputs only 32 channels after its 3×3 convolution; these are concatenated with the inputs of the preceding layers and used as the input of the next layer.
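A minimal PyTorch sketch of such a Dense Block follows; it is an illustration, not the patent's exact module. The 4k-wide 1×1 bottleneck follows the usual DenseNet convention and is an assumption, as is the final 1×1 transition convolution, included because the patent reports that DenseBlockCSP1_3 preserves its input channel count:

```python
# Hedged Dense Block sketch (growth rate k = 32, three layers, each layer
# BN + ReLU + 1x1 Conv followed by BN + ReLU + 3x3 Conv, dense concatenation).
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_ch: int, growth: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, 4 * growth, 1, bias=False),              # 1x1 bottleneck (width assumed)
            nn.BatchNorm2d(4 * growth), nn.ReLU(inplace=True),
            nn.Conv2d(4 * growth, growth, 3, padding=1, bias=False),  # 3x3 conv emits k = 32 channels
        )

    def forward(self, x):
        # Dense connection: new features are concatenated with all earlier inputs
        return torch.cat([x, self.body(x)], dim=1)

class DenseBlock(nn.Module):
    """Three dense layers; the transition restores the input channel count."""
    def __init__(self, in_ch: int, growth: int = 32, n_layers: int = 3):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(n_layers):
            layers.append(DenseLayer(ch, growth))
            ch += growth
        self.layers = nn.Sequential(*layers)
        self.transition = nn.Conv2d(ch, in_ch, 1, bias=False)  # assumed 1x1 transition back to in_ch

    def forward(self, x):
        return self.transition(self.layers(x))
```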
A 3-channel 640×640 picture is input; the Focus slicing operation outputs (3, 32, 320, 320); the following Conv layer outputs (32, 64, 160, 160); the BottleneckCSP1_1 layer outputs (64, 64, 160, 160); the next Conv layer outputs (64, 128, 80, 80); the DenseBlockCSP1_3 layer outputs (128, 128, 80, 80); the next Conv layer outputs (128, 256, 40, 40); the second DenseBlockCSP1_3 layer outputs (256, 256, 40, 40); the next Conv layer outputs (256, 512, 20, 20); and the SPP layer outputs (512, 512, 20, 20).
Preferably, step (3) adds a Transformer module after the SPP module of the backbone network:
The SPP layer outputs (512, 512, 20, 20), and a Transformer layer is connected after it. In the Transformer, the data first passes through a Multi-Head Attention module containing 8 Self-Attention heads, yielding 8 weighted feature matrices Z_i, i ∈ {1, 2, …, 8}. The 8 matrices Z_i are concatenated by columns into one large feature matrix Z. The Multi-Head Attention output then passes through a feed-forward network of two fully-connected layers, the first with a ReLU activation function and the second with a linear activation function; the Transformer layer finally outputs (512, 512, 20, 20). The backbone output then undergoes feature fusion in the Neck part. A hedged sketch of this layer follows.
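The following PyTorch sketch illustrates this Transformer layer. The residual connections, LayerNorm placement, feed-forward width, and the omission of positional encoding are assumptions; the patent fixes only the 8 heads and the ReLU-then-linear feed-forward layers:

```python
# Hedged Transformer-layer sketch over the (B, 512, 20, 20) SPP output.
import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8, ff_dim: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # 8 self-attention heads
        self.ff = nn.Sequential(nn.Linear(dim, ff_dim), nn.ReLU(inplace=True),
                                nn.Linear(ff_dim, dim))                  # ReLU layer, then linear layer
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        b, c, h, w = x.shape                       # x: (B, 512, 20, 20)
        seq = x.flatten(2).transpose(1, 2)         # (B, H*W, C): one token per spatial cell
        a, _ = self.attn(seq, seq, seq)            # multi-head self-attention over all tokens
        seq = self.norm1(seq + a)                  # residual + norm (placement assumed)
        seq = self.norm2(seq + self.ff(seq))       # feed-forward sublayer
        return seq.transpose(1, 2).reshape(b, c, h, w)  # back to (B, 512, 20, 20)
```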
Preferably, in step (4), the number of epochs is set to 300, the batch-size to 4, and the initial learning rate to 0.01; after 300 iterations of training, the loss value and the precision both stabilize, and the optimal model parameters at that point are saved.
The model test adopts the evaluation indexes Precision, Recall, and Average Precision (AP):
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
AP = ∫₀¹ P(R) dR
Computing Precision and Recall requires dividing detection results into four categories against the true labels: TP (positives predicted as positive), TN (negatives predicted as negative), FP (negatives predicted as positive), and FN (positives predicted as negative). An illustrative computation follows.
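As an illustration, the metrics can be computed from these counts as below; the matching step that produces TP/FP/FN (typically per class at a fixed IoU threshold) is assumed and omitted:

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP/(TP+FP); Recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0  # correct detections / all detections
    recall = tp / (tp + fn) if tp + fn else 0.0     # correct detections / all ground truths
    return precision, recall

def average_precision(points):
    """Rectangle-rule area under the precision-recall curve.

    points: iterable of (precision, recall) pairs, e.g. one per score threshold.
    """
    ap, prev_recall = 0.0, 0.0
    for precision, recall in sorted(points, key=lambda pr: pr[1]):
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```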
3. Beneficial effects of the invention: by improving the YOLOv5 network, the invention raises detection precision over the original YOLOv5 algorithm, improves recognition of occluded targets, and achieves good results.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required to be used in the embodiments will be briefly described below.
FIG. 1 is a flowchart of the overall algorithm of the present invention;
FIG. 2 is a network structure diagram of the invention incorporating a transform and a Dense Block;
FIG. 3 is a diagram of a modified CSP1_3 structure;
FIG. 4 is a structure diagram of the Transformer.
Detailed Description
The following describes in further detail an improved vehicle detection and identification method based on YOLOv5 according to the present invention with reference to the accompanying drawings and the detailed description.
The first embodiment is as follows:
the embodiment provides an improved vehicle detection and identification method based on YOLOv5, and with reference to fig. 1, the implementation steps include:
the method comprises the following steps: the vehicle detection data set BDD100k is acquired and the data set is preprocessed.
Step two: based on the dense-connection idea of the DenseNet network, the residual network structure (Res unit) in the two CSP1_3 modules of the backbone network is replaced with a Dense Block module.
Step three: a Transformer network is added after the SPP module of the backbone network.
Step four: the improved network is trained and tested.
The second embodiment is as follows:
Different from the first embodiment, in step two of the improved YOLOv5-based vehicle detection and identification method of this embodiment, with reference to figs. 2 and 3, the specific method for replacing the residual network structure (Res unit) in the two CSP1_3 modules of the backbone network with a Dense Block module is as follows:
The Dense Block module includes three convolution layers, each comprising (BN + ReLU + 1×1 Conv) + (BN + ReLU + 3×3 Conv). The growth rate k in the Dense Block is set to 32, so each layer outputs only 32 channels after its 3×3 convolution; these are concatenated with the inputs of the preceding layers and used as the input of the next layer.
A 3-channel 640×640 picture is input; the Focus slicing operation outputs (3, 32, 320, 320); the following Conv layer outputs (32, 64, 160, 160); the BottleneckCSP1_1 layer outputs (64, 64, 160, 160); the next Conv layer outputs (64, 128, 80, 80); the DenseBlockCSP1_3 layer outputs (128, 128, 80, 80); the next Conv layer outputs (128, 256, 40, 40); the second DenseBlockCSP1_3 layer outputs (256, 256, 40, 40); the next Conv layer outputs (256, 512, 20, 20); and the SPP layer outputs (512, 512, 20, 20).
The third concrete implementation mode:
Different from the first and second embodiments, in this third embodiment, with reference to fig. 4, the specific method for adding a Transformer network after the SPP module of the backbone network in step three is as follows:
The SPP layer outputs (512, 512, 20, 20), and a Transformer layer is connected after it. The data first passes through a Multi-Head Attention module containing 8 Self-Attention heads, yielding 8 weighted feature matrices Z_i, i ∈ {1, 2, …, 8}. The 8 matrices Z_i are concatenated by columns into one large feature matrix Z. The Multi-Head Attention output then passes through a feed-forward network of two fully-connected layers, the first with a ReLU activation function and the second with a linear activation function; the Transformer layer finally outputs (512, 512, 20, 20). Finally, the backbone output undergoes feature fusion in the Neck part.
At the output stage, the original NMS is replaced by DIoU-NMS. Standard NMS suppresses redundant detection boxes with the IoU index, which considers only the overlap area, so when a real box overlaps a candidate box the correct detection can be wrongly suppressed. DIoU considers both the overlap area and the center-point distance, which improves accuracy on partially occluded vehicle targets. A sketch of DIoU-NMS follows.
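The following is a hedged sketch of the published DIoU-NMS idea, not the patent's code: the suppression criterion replaces IoU with IoU minus the normalized squared center distance, so an occluded neighbour whose center lies far from the kept box survives:

```python
import torch

def diou(box, boxes):
    """DIoU between one box (4,) and boxes (N, 4), both in (x1, y1, x2, y2)."""
    ix1 = torch.maximum(box[0], boxes[:, 0]); iy1 = torch.maximum(box[1], boxes[:, 1])
    ix2 = torch.minimum(box[2], boxes[:, 2]); iy2 = torch.minimum(box[3], boxes[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area1 = (box[2] - box[0]) * (box[3] - box[1])
    area2 = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area1 + area2 - inter + 1e-9)
    # squared center distance over squared diagonal of the smallest enclosing box
    center_dist = ((box[0] + box[2]) - (boxes[:, 0] + boxes[:, 2])) ** 2 / 4 \
                + ((box[1] + box[3]) - (boxes[:, 1] + boxes[:, 3])) ** 2 / 4
    ex1 = torch.minimum(box[0], boxes[:, 0]); ey1 = torch.minimum(box[1], boxes[:, 1])
    ex2 = torch.maximum(box[2], boxes[:, 2]); ey2 = torch.maximum(box[3], boxes[:, 3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9
    return iou - center_dist / diag

def diou_nms(boxes, scores, threshold=0.45):
    """Greedy NMS that suppresses a box only when its DIoU with a kept box exceeds threshold."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(int(i))
        if order.numel() == 1:
            break
        rest = order[1:]
        order = rest[diou(boxes[i], boxes[rest]) <= threshold]
    return keep
```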
Step four: the number of epochs is set to 300, the batch-size to 4, the initial learning rate to 0.01, and the DIoU-NMS threshold to 0.45; after 300 iterations of training, the loss value and the precision both stabilize, and the optimal model parameters at that point are saved.
The model test adopts the evaluation indexes Precision, Recall, and Average Precision (AP):
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
AP = ∫₀¹ P(R) dR
Computing Precision and Recall requires dividing detection results into four categories against the true labels: TP (positives predicted as positive), TN (negatives predicted as negative), FP (negatives predicted as positive), and FN (positives predicted as negative).
The method improves detection when the target vehicle appears small in the image, raising detection precision while maintaining speed.
The modules in the embodiments may be used alone or in combination as described above. The above embodiments are merely preferred embodiments of the present invention and are not to be construed as limiting it. Any modifications and substitutions made by one of ordinary skill in the art shall fall within the scope of the present invention.

Claims (4)

1. An improved vehicle detection algorithm based on a YOLOv5 network is characterized in that: the method is mainly realized by the following steps:
(1) acquiring a vehicle detection data set BDD100k, and preprocessing the data set;
(1-1) the data set comprises 10000 pictures; due to the specific application scene of the invention, the categories of Light, Sign, Person, Bike and Rider in the BDD100k data set are ignored; converting the json format of the label corresponding to each picture into an xml file through a python script; the xml file comprises a picture name, a picture path, a target label name and a target position coordinate;
(1-2) uniformly classifying Bus, Truck, Motor, Car and Train into a Car class; storing the pictures and the tag xml files after the unified category according to a VOC format;
creating two folders, Annotations and JPEGImages, under the VOC folder for storing pictures and labels; storing each label xml file in the Annotations folder and all pictures in the JPEGImages folder;
(1-3) using a python script, randomly splitting the created VOC-format data set 8:2 into a training set and a test set, and converting the data set label format into the txt format used by yolo;
(2) based on the concept of Dense connection of a DenseNet network, replacing residual network structures of two CSP1_3 modules in a backbone network with a Dense Block module;
the Dense Block module comprises three convolution layers, each comprising (BN + ReLU + 1×1 Conv) + (BN + ReLU + 3×3 Conv); the growth rate k in the Dense Block is set to 32, so that each layer outputs only 32 channels after its 3×3 convolution, which are concatenated with the inputs of the preceding layers and used as the input of the next layer;
(3) adding a Transformer network behind a backbone network SPP module;
in the Transformer network, the data first passes through a Multi-Head Attention module containing 8 Self-Attention heads, obtaining 8 weighted feature matrices; the 8 matrices are concatenated by columns into one large feature matrix; the Multi-Head Attention output then passes through a feed-forward network of two fully-connected layers, the first with a ReLU activation function and the second with a linear activation function; the Transformer layer finally outputs a feature map of (512, 512, 20, 20); finally, the backbone output undergoes multi-scale feature fusion in the Neck part;
(4) and training and testing the improved network.
2. The improved vehicle detection algorithm based on the YOLOv5 network of claim 1, wherein: a 3-channel 640×640 picture is input; the Focus slicing operation outputs (3, 32, 320, 320); the following Conv layer outputs (32, 64, 160, 160); the BottleneckCSP1_1 layer outputs (64, 64, 160, 160); the next Conv layer outputs (64, 128, 80, 80); the DenseBlockCSP1_3 layer outputs (128, 128, 80, 80); the next Conv layer outputs (128, 256, 40, 40); the second DenseBlockCSP1_3 layer outputs (256, 256, 40, 40); the next Conv layer outputs (256, 512, 20, 20); and the SPP layer outputs (512, 512, 20, 20).
3. The improved vehicle detection algorithm based on the YOLOv5 network of claim 1, wherein: in step (3), the front-layer SPP outputs (512, 512, 20, 20); a Transformer layer is then connected, in which the data first passes through a Multi-Head Attention module containing 8 Self-Attention heads to obtain 8 weighted feature matrices; the 8 matrices are concatenated by columns into one large feature matrix; the Multi-Head Attention output then passes through a feed-forward network of two fully-connected layers, the first with a ReLU activation function and the second with a linear activation function, and the Transformer layer finally outputs (512, 512, 20, 20); finally, the backbone output undergoes multi-scale feature fusion in the Neck part.
4. The improved vehicle detection algorithm based on the YOLOv5 network of claim 1, wherein: in step (4), the number of epochs is set to 300, the batch-size to 4, and the initial learning rate to 0.01; after 300 iterations of training, the loss value and the Average Precision (AP) stabilize, and the optimal model parameters at that point are saved;
the model test adopts the model evaluation indexes Precision, Recall, and Average Precision (AP):
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
AP = ∫₀¹ P(R) dR
wherein TP is the number of positive samples predicted as positive, FN the number of positive samples predicted as negative, FP the number of negative samples predicted as positive, and TN the number of negative samples predicted as negative; Precision is the ratio of correctly detected samples to all detected samples, Recall is the ratio of correctly detected samples to all true samples, and the Average Precision (AP) is the area under the Precision-Recall curve.
CN202210352291.5A 2022-04-05 2022-04-05 Improved vehicle detection and identification method based on YOLOv5 Pending CN114723992A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210352291.5A CN114723992A (en) 2022-04-05 2022-04-05 Improved vehicle detection and identification method based on YOLOv5

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210352291.5A CN114723992A (en) 2022-04-05 2022-04-05 Improved vehicle detection and identification method based on YOLOv5

Publications (1)

Publication Number Publication Date
CN114723992A (en) 2022-07-08

Family

ID=82242617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210352291.5A Pending CN114723992A (en) 2022-04-05 2022-04-05 Improved vehicle detection and identification method based on YOLOv5

Country Status (1)

Country Link
CN (1) CN114723992A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024013588A1 (en) * 2022-07-13 2024-01-18 Samsung Electronics Co., Ltd. System and method for using residual transformers in natural language processing
CN116704487A (en) * 2023-06-12 2023-09-05 三峡大学 License plate detection and recognition method based on Yolov5s network and CRNN
CN116704487B (en) * 2023-06-12 2024-06-11 三峡大学 License plate detection and identification method based on Yolov5s network and CRNN

Similar Documents

Publication Publication Date Title
WO2022083784A1 (en) Road detection method based on internet of vehicles
CN109784150B (en) Video driver behavior identification method based on multitasking space-time convolutional neural network
Reddy et al. Roadtext-1k: Text detection & recognition dataset for driving videos
CN114723992A (en) Improved vehicle detection and identification method based on YOLOv5
CN113723377B (en) Traffic sign detection method based on LD-SSD network
CN111476210B (en) Image-based text recognition method, system, device and storage medium
Riaz et al. YOLO based recognition method for automatic license plate recognition
CN112990065A (en) Optimized YOLOv5 model-based vehicle classification detection method
Zhang et al. DetReco: Object‐Text Detection and Recognition Based on Deep Neural Network
Cao et al. An end-to-end neural network for multi-line license plate recognition
CN117975418A (en) Traffic sign detection method based on improved RT-DETR
Mukhopadhyay et al. A hybrid lane detection model for wild road conditions
CN113269038A (en) Multi-scale-based pedestrian detection method
CN115953744A (en) Vehicle identification tracking method based on deep learning
CN110555425A (en) Video stream real-time pedestrian detection method
Visaria et al. Tsrsy-traffic sign recognition system using deep learning
CN116363072A (en) Light aerial image detection method and system
CN113392812A (en) Road lane line detection method and system based on deep neural network
CN113239931A (en) Logistics station license plate recognition method
Al Khafaji et al. Traffic Signs Detection and Recognition Using A combination of YOLO and CNN
Caballero et al. Detection of traffic panels in night scenes using cascade object detector
Biswas et al. YOLOv8 based Traffic Signal Detection in Indian Road
Padalia Detection and Number Plate Recognition of Non-Helmeted Motorcyclists using YOLO
Kavitha et al. Traffic Sign Recognition and Voice-Activated Driving Assistance Using Raspberry Pi
CN113221643B (en) Lane line classification method and system adopting cascade network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination