CN111126331A - Real-time guideboard detection method combining object detection and object tracking - Google Patents

Real-time guideboard detection method combining object detection and object tracking Download PDF

Info

Publication number
CN111126331A
Authority
CN
China
Prior art keywords
guideboard
detection
network
coordinates
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911405434.9A
Other languages
Chinese (zh)
Inventor
苏宏业
马龙华
陆哲明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zhongchuang Tiancheng Technology Co ltd
Original Assignee
Zhejiang Zhongchuang Tiancheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Zhongchuang Tiancheng Technology Co ltd filed Critical Zhejiang Zhongchuang Tiancheng Technology Co ltd
Priority to CN201911405434.9A priority Critical patent/CN111126331A/en
Publication of CN111126331A publication Critical patent/CN111126331A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time guideboard detection method combining object detection and object tracking. A guideboard detection network is constructed to locate the coordinate position of a guideboard in a picture; an initial frame is extracted from a video to be detected that contains the guideboard and is input into the trained guideboard detection network, and the guideboard detection coordinates of the initial frame are then obtained by non-maximum suppression (NMS) and a confidence threshold method. The guideboard detection coordinates are input into a KCF algorithm to obtain the guideboard tracking coordinates of the next frame of the video to be detected; the video is tracked continuously for a fixed number of frames as required, the last continuously tracked frame is taken as a new initial frame, and the steps are repeated to determine whether each frame of the video contains a guideboard and, if so, its coordinates. The method combines the accuracy of the detection method with the speed of the tracking algorithm.

Description

Real-time guideboard detection method combining object detection and object tracking
Technical Field
The invention belongs to the field of machine vision application, and particularly relates to a real-time guideboard detection method combining object detection and object tracking.
Background
Research on guideboard detection systems dates back to the beginning of the last century, and with the growing attention paid to intelligent and automatic driving it has become a popular research direction. Guideboard detection methods based on machine learning have the advantages of low cost, high accuracy, strong real-time performance and ease of implementation, and are an essential topic in the field of unmanned driving. Owing to the complexity of traffic environments, the variability of illumination and weather, and the differences between traffic signs across countries and regions, current guideboard detection is still at a preliminary field-application stage, the types of signs that can be detected are limited, and there remains considerable room for improving detection precision and efficiency, so high-precision, low-latency guideboard detection is still to be developed.
Current guideboard detection mainly follows two modes: two-step detection and single-step detection. Two-step detection first roughly detects candidate regions containing the guideboard and then precisely locates its coordinates; representative algorithms include Faster R-CNN. Single-step detection feeds the picture into a neural network to obtain the coordinates directly, as in the SSD and YOLO algorithms. The former networks are large and complex and compute more slowly, but achieve high precision; the latter networks are small and fast, but their precision is far lower and they place higher demands on neural network design.
Disclosure of Invention
The invention aims to address the technical shortcomings of existing guideboard detection by providing a real-time guideboard detection method that combines the advantages of object detection and object tracking.
The purpose of the invention is realized by the following technical scheme: a real-time guideboard detection method combining object detection and object tracking comprises the following steps:
(1) collecting video while an automobile is travelling; extracting video frames at a fixed frame interval as a training picture set; for each training picture containing a guideboard, marking the coordinates of the guideboard in the picture with a rectangular frame and storing the coordinates in an xml file; taking all the xml files and training pictures as a training set;
(2) constructing a guideboard detection network for locating the coordinate position of the guideboard in a picture, inputting the training picture set from step (1), and outputting the predicted guideboard coordinates and confidence for each picture.
The basic network of the guideboard detection network is an Inception V4 model pre-trained on the ImageNet data set with its fully-connected layer removed. A region proposal network (RPN) behind the basic network produces a feature map containing guideboard candidate-region coordinates, the feature map undergoes region-of-interest pooling (ROI Pooling) to obtain candidate-region feature maps of uniform size, and each feature map is input into a positioning and classification network to obtain the accurate coordinates and confidence of the candidate region in the original image.
(3) Inputting the training set obtained in the step (1) into the constructed guideboard detection network for training, and storing guideboard detection network parameters to obtain the trained guideboard detection network;
(4) extracting an initial frame from the video to be detected that contains the guideboard, inputting the initial frame into the guideboard detection network trained in step (3), and obtaining the guideboard detection coordinates of the initial frame by non-maximum suppression (NMS) and a confidence threshold method.
(5) Inputting the guideboard detection coordinates output in the step (4) into a KCF algorithm to obtain guideboard tracking coordinates of the next frame in the video to be detected, and continuously tracking the video to be detected by a fixed frame number according to the requirement;
(6) taking the last continuously tracked frame from step (5) as a new initial frame, and repeating steps (4) and (5) to determine whether each frame of the video to be detected contains a guideboard and, if so, its coordinates.
Further, in the step (1), labelImg software is adopted for coordinate labeling of the guideboard in the training picture.
Further, in step (2), the positioning and classification network is a 5-layer network whose positioning and classification branches share the first four layers: a fully connected layer with 1024 nodes, a ReLU activation layer, a fully connected layer with 1024 nodes, and a ReLU activation layer. The fifth layer of the classification branch is a fully connected layer with 2 nodes, whose output is converted into probability form by a softmax operation; the fifth layer of the positioning branch is a fully connected layer with 8 nodes.
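For illustration only, the positioning and classification head described in this paragraph can be written as a small module. The sketch below assumes a PyTorch implementation and a pooled candidate-region feature size of 7×7×1536 (the Inception V4 output channel count); neither assumption is prescribed by the invention.

```python
# Minimal sketch of the 5-layer positioning-and-classification head:
# four shared layers (fc-1024, ReLU, fc-1024, ReLU), then a 2-node
# classification branch with softmax and an 8-node positioning branch.
import torch
import torch.nn as nn

class GuideboardHead(nn.Module):
    def __init__(self, in_features=7 * 7 * 1536):   # assumed pooled feature size
        super().__init__()
        self.shared = nn.Sequential(                 # four layers shared by both branches
            nn.Linear(in_features, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
        )
        self.cls_fc = nn.Linear(1024, 2)             # guideboard / background
        self.loc_fc = nn.Linear(1024, 8)             # coordinate outputs

    def forward(self, pooled_features):
        x = self.shared(pooled_features.flatten(start_dim=1))
        scores = torch.softmax(self.cls_fc(x), dim=1)   # confidence in probability form
        coords = self.loc_fc(x)
        return coords, scores
```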
Further, in step (2), the constructed guideboard detection network is pre-trained on the COCO data set, and the pre-trained network parameters are taken as the initial parameters of the guideboard detection network.
Further, in step (3), the training process of the guideboard detection network is specifically as follows: a position loss function is computed from the guideboard detection coordinates obtained in step (2) and the guideboard coordinates marked in step (1), and a classification loss function is computed from the guideboard confidence obtained in step (2). The two loss functions are added, the total loss function is differentiated, and the value of each node in the network is updated by stochastic gradient descent with momentum correction to obtain the guideboard detection network parameters.
Further, in step (3), random image augmentation is performed on the training images; the augmentation operations include random horizontal flipping, random picture scaling, random cropping, random rotation, and the like.
Further, in step (5), during the tracking stage the tracked coordinates are checked every fixed number of frames, and a guideboard's coordinates continue to be tracked only if the same guideboard has been detected in at least a fixed number of consecutive frames; the number of consecutive tracking frames is set as required.
Further, in step (6), the last continuously tracked frame is taken as a new initial frame, new guideboard detection coordinates are obtained as in step (4), the new guideboard detection coordinates and the guideboard tracking coordinates of the last frame are integrated by the NMS algorithm to obtain the guideboard coordinates in the current frame, and these coordinates are input to the KCF algorithm of step (5).
The invention has the following beneficial effects: it effectively combines an object detection algorithm with an object tracking algorithm to build a real-time guideboard detection network. In addition, when training the guideboard detection model, the adjustment of the network weights is guided by stochastic gradient descent with momentum correction. The tracking module filters the guideboard tracking coordinates so that only valid ones are kept, and the NMS algorithm is used to obtain the final predicted guideboard coordinates when integrating the guideboard detection coordinates with the guideboard tracking coordinates.
Drawings
FIG. 1 is a flow chart of guideboard detection convolutional neural network training;
FIG. 2 is a flow chart of a real-time detection algorithm;
fig. 3 is a schematic diagram of a guideboard label.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
The invention achieves high-precision real-time guideboard detection by building and training a guideboard detection convolutional neural network model and combining it with a guideboard tracking algorithm; its key points are building and training an efficient detection network architecture and integrating the coordinates produced by detection and tracking. First, real training data are generated for training the detection neural network. Second, a suitable classification and positioning neural network is designed for guideboard prediction, and the adjustment of the network weights is guided by the loss function and by stochastic gradient descent with momentum correction. The overall accuracy and real-time performance of the model are then demonstrated with an example. As shown in fig. 1, the main steps of the technical scheme adopted by the invention are as follows:
(1) A video of the automobile travelling is acquired with a camera. Video frames are extracted at a fixed frame interval as the training picture set, the coordinates of the guideboard in each training picture are marked with a rectangular frame using labelImg software and stored in an xml file. The xml files and the training pictures form the training set.
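For illustration only, the labelImg annotations described above can be read back for training roughly as follows. The sketch assumes labelImg's standard Pascal-VOC-style xml fields (object/bndbox/xmin, ...), which the description does not spell out, and Python as the implementation language.

```python
# Hedged sketch: load labelImg xml annotations (Pascal VOC style) into
# (image path, list of guideboard boxes) pairs for the training set.
import os
import xml.etree.ElementTree as ET

def load_training_set(xml_dir, image_dir):
    samples = []
    for name in sorted(os.listdir(xml_dir)):
        if not name.endswith(".xml"):
            continue
        root = ET.parse(os.path.join(xml_dir, name)).getroot()
        boxes = []
        for obj in root.findall("object"):       # one entry per marked guideboard
            bb = obj.find("bndbox")
            boxes.append([int(float(bb.find(tag).text))
                          for tag in ("xmin", "ymin", "xmax", "ymax")])
        image_path = os.path.join(image_dir, root.find("filename").text)
        samples.append((image_path, boxes))
    return samples
```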
(2) A guideboard detection deep network is built to locate the coordinate position of the guideboard in a picture; the training picture set from step (1) is input, and the coordinates and confidence of the guideboard in each picture are output.
Guideboard detection network architecture: the basic network of the detection network is an Inception V4 model pre-trained on the ImageNet data set with its fully-connected layer removed. A region proposal network (RPN) then produces the coordinates of candidate regions that may be guideboards in the input picture, the candidate-region feature map undergoes region-of-interest pooling (ROI Pooling) to obtain candidate-region feature maps of uniform size, and all the feature maps are input into the positioning and classification network to obtain the accurate coordinates and confidence of the candidate regions in the original image. The positioning and classification network is a 5-layer network whose positioning and classification branches share the first four layers, namely a fully connected layer with 1024 nodes, a ReLU activation layer, a fully connected layer with 1024 nodes, and a ReLU activation layer. The fifth layer of the classification branch is a fully connected layer with 2 nodes, whose output is converted into probability form by a softmax operation. The fifth layer of the positioning branch is a fully connected layer with 8 nodes.
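For illustration, the region-of-interest pooling and prediction step can be sketched as follows, assuming a PyTorch/torchvision implementation (no particular framework is prescribed by the method); the backbone feature map, the RPN proposals, the 7×7 output size and the 1/16 spatial scale are assumptions.

```python
# Sketch of ROI pooling over the backbone feature map followed by the
# positioning-and-classification head; proposals come from the RPN (not shown).
import torch
from torchvision.ops import roi_pool

def pool_and_predict(feature_map, proposals, head, spatial_scale=1.0 / 16):
    # feature_map: (1, C, H, W) output of the Inception V4 basic network
    # proposals:   (N, 4) candidate-region boxes in image coordinates
    pooled = roi_pool(
        feature_map,
        [proposals],            # one box tensor per image in the batch
        output_size=(7, 7),     # candidate-region features of uniform size
        spatial_scale=spatial_scale,
    )                           # -> (N, C, 7, 7)
    coords, scores = head(pooled)   # e.g. the GuideboardHead sketched earlier
    return coords, scores
```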
(3) This is the training stage of the guideboard detection network. First, the constructed guideboard detection network is pre-trained on the COCO data set, and the pre-trained network parameters are taken as the initial parameters of the guideboard detection network. Random image augmentation is applied to the training images; the augmentation operations include random horizontal flipping, random picture scaling, random cropping, random rotation, and the like. The images are then input into the guideboard detection network, which is trained to obtain the guideboard detection network parameters. A position loss function is computed from the guideboard detection coordinates obtained in step (2) and the guideboard coordinates marked in step (1), and a classification loss function is computed from the guideboard confidence obtained in step (2). The two loss functions are added; after the total loss function is differentiated, the value of each node in the network is updated by stochastic gradient descent with momentum correction, the update of the network weights is stopped when the loss function becomes stable or the set number of steps is reached, and the current network weights are saved. In this experiment the number of training steps is set to 200,000; at 200,000 steps the change in the loss function is no more than 1%, i.e. it has become stable.
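A minimal training-step sketch for the procedure above follows. Smooth L1 for the position loss and cross-entropy (on the pre-softmax classification output) are common choices assumed here rather than stated by the method, and the learning rate and momentum values are illustrative only.

```python
# Hedged sketch of one training step: position loss + classification loss,
# optimized with momentum-corrected stochastic gradient descent.
import torch
import torch.nn.functional as F
from torchvision import transforms

# Random image augmentation applied to each training image when it is loaded
# (box coordinates must be transformed consistently; that bookkeeping is omitted).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomResizedCrop(600),   # random scaling and cropping
    transforms.RandomRotation(10),
])

def make_optimizer(model):
    # illustrative hyper-parameters; the method only names momentum-corrected SGD
    return torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(model, optimizer, images, gt_boxes, gt_labels):
    pred_boxes, pred_logits = model(images)
    loc_loss = F.smooth_l1_loss(pred_boxes, gt_boxes)    # position loss
    cls_loss = F.cross_entropy(pred_logits, gt_labels)   # classification loss
    loss = loc_loss + cls_loss                           # total loss
    optimizer.zero_grad()
    loss.backward()                                      # differentiate total loss
    optimizer.step()                                     # momentum SGD update
    return loss.item()
```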
(4) An initial frame is extracted from the video to be detected that contains the guideboard and input into the guideboard detection network trained in step (3), and the guideboard detection coordinates of the initial frame are obtained by non-maximum suppression (NMS) and a confidence threshold method. The confidence threshold method means that the coordinates of a detected guideboard are adopted only when its confidence is greater than a certain threshold; otherwise the detection is discarded. Experiments show that the network detection effect is best when the threshold is set to 0.5.
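The confidence threshold and NMS post-processing described in this step can be sketched as follows (PyTorch/torchvision assumed; the IoU threshold used for NMS is an assumption, while the 0.5 confidence threshold comes from the experiments above):

```python
# Keep detections above the confidence threshold, then suppress overlapping boxes.
import torch
from torchvision.ops import nms

def filter_detections(boxes, scores, conf_thresh=0.5, iou_thresh=0.5):
    keep = scores > conf_thresh              # confidence threshold method
    boxes, scores = boxes[keep], scores[keep]
    keep = nms(boxes, scores, iou_thresh)    # non-maximum suppression
    return boxes[keep], scores[keep]
```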
(5) The guideboard detection coordinates output in step (4) are input into the KCF algorithm to obtain the guideboard tracking coordinates of the next frame in the video to be detected, and the video is tracked continuously for a fixed number of frames as required; accuracy and speed are best when this fixed number is 5 frames. During the tracking stage the tracked coordinates are checked every fixed number of frames, and a guideboard's coordinates continue to be tracked only if the same guideboard has been detected in at least a fixed number of consecutive frames. Experiments show that the tracking algorithm works best when this number of consecutive frames is set to 2.
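The KCF tracking stage can be sketched with OpenCV's KCF tracker (cv2.TrackerKCF_create, provided by opencv-contrib-python; the exact constructor name varies slightly between OpenCV versions). The 5-frame tracking length comes from the experiments above, and the (x, y, w, h) box format is an assumption.

```python
# Track a detected guideboard over a fixed number of subsequent frames with KCF.
import cv2

def track_guideboard(frames, init_box, track_len=5):
    # frames[0] is the initial frame on which init_box was detected
    tracker = cv2.TrackerKCF_create()
    tracker.init(frames[0], tuple(init_box))
    tracked = []
    for frame in frames[1:1 + track_len]:
        ok, box = tracker.update(frame)      # guideboard tracking coordinates
        tracked.append(box if ok else None)
    return tracked
```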
(6) The last continuously tracked frame from step (5) is taken as a new initial frame, new guideboard detection coordinates are obtained as in step (4), the new guideboard detection coordinates and the guideboard tracking coordinates of the last frame are integrated by the NMS algorithm to obtain the guideboard coordinates in the current frame, and these coordinates are input to the KCF algorithm of step (5); in this way it is determined whether each frame of the video to be detected contains a guideboard and, if so, its coordinates.
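Putting steps (4)-(6) together, the overall detect-then-track loop looks roughly as follows; `detect`, `track` and `merge_nms` stand for the sketches above, and the handling of the consecutive-detection check (2 frames in the experiments) is reduced to a comment.

```python
# Hedged sketch of the overall loop: detect on an initial frame, track for a
# fixed number of frames, re-detect on the last tracked frame and merge the
# detection and tracking boxes with NMS before starting the next round.
def detect_and_track(video_frames, detect, track, merge_nms, track_len=5):
    results = []
    i = 0
    boxes = detect(video_frames[i])                 # step (4): detection on the initial frame
    results.append(boxes)
    while i + track_len < len(video_frames):
        chunk = video_frames[i:i + track_len + 1]   # initial frame + track_len frames
        tracked = track(chunk, boxes)               # step (5): KCF tracking coordinates
        # In the full method a track is kept only if the same guideboard was
        # detected in at least 2 consecutive frames; that check is omitted here.
        results.extend(tracked[:-1])
        # Step (6): re-detect on the last tracked frame and integrate with the
        # tracking result via NMS, then use it to start the next tracking round.
        boxes = merge_nms(detect(chunk[-1]), tracked[-1])
        results.append(boxes)
        i += track_len
    return results
```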
Example:
The mean Average Precision (mAP) metric of the PASCAL VOC challenge and the number of frames per second (FPS) the algorithm can process are taken as the criteria for evaluating the algorithm.
Because of the lack of a public guideboard detection data set, the data used in this embodiment are taken from the following four sources: data set one, the Baidu ApolloScape data set; data set two, the Tsinghua-Tencent TT100K data set; data set three, pictures containing guideboards downloaded from Baidu Images; data set four, frames extracted from self-recorded road condition video. The specific numbers of pictures and guideboards are shown in the table below.
|                                  | ApolloScape data set | TT100K data set | Baidu Images | Real-shot road video frames | Total |
|----------------------------------|----------------------|-----------------|--------------|-----------------------------|-------|
| Pictures containing a guideboard | 3370                 | 1945            | 968          | 120                         | 6403  |
| Number of guideboards            | 4452                 | 4629            | 2236         | 175                         | 11492 |
| Total number of pictures         | 4022                 | 4449            | 968          | 408                         | 9847  |
The first three data sets are used for training, and the real-shot road condition video is used for testing. The model reaches an mAP of 0.75, compared with 0.48 mAP for the single-step detection algorithm SSD used for comparison. The detection speed reaches 32 FPS, meeting the real-time requirement.
The above-described embodiments are intended to illustrate rather than to limit the invention, and any modifications and variations of the present invention are within the spirit of the invention and the scope of the appended claims.

Claims (8)

1. A real-time guideboard detection method combining object detection and object tracking is characterized by comprising the following steps:
(1) collecting video while an automobile is travelling; extracting video frames at a fixed frame interval as a training picture set; for each training picture containing a guideboard, marking the coordinates of the guideboard in the picture with a rectangular frame and storing the coordinates in an xml file; taking all the xml files and training pictures as a training set;
(2) constructing a guideboard detection network for locating the coordinate position of the guideboard in a picture, inputting the training picture set from step (1), and outputting the predicted guideboard coordinates and confidence for each picture.
The basic network of the guideboard detection network is an Inception V4 model pre-trained on the ImageNet data set with its fully-connected layer removed. A region proposal network (RPN) behind the basic network produces a feature map containing guideboard candidate-region coordinates, the feature map undergoes region-of-interest pooling (ROI Pooling) to obtain candidate-region feature maps of uniform size, and each feature map is input into a positioning and classification network to obtain the accurate coordinates and confidence of the candidate region in the original image.
(3) Inputting the training set obtained in the step (1) into the constructed guideboard detection network for training, and storing guideboard detection network parameters to obtain the trained guideboard detection network;
(4) extracting an initial frame from the video to be detected that contains the guideboard, inputting the initial frame into the guideboard detection network trained in step (3), and obtaining the guideboard detection coordinates of the initial frame by non-maximum suppression (NMS) and a confidence threshold method.
(5) Inputting the guideboard detection coordinates output in the step (4) into a KCF algorithm to obtain guideboard tracking coordinates of the next frame in the video to be detected, and continuously tracking the video to be detected by a fixed frame number according to the requirement;
(6) taking the last continuously tracked frame from step (5) as a new initial frame, and repeating steps (4) and (5) to determine whether each frame of the video to be detected contains a guideboard and, if so, its coordinates.
2. The method for detecting the guideboard in real time by combining the object detection and the object tracking as claimed in claim 1, wherein in the step (1), labelImg software is used for marking the coordinates of the guideboard in the training picture.
3. The real-time guideboard detection method combining object detection and object tracking according to claim 1, wherein in step (2) the positioning and classification network is a 5-layer network whose positioning and classification branches share the first four layers, namely a fully connected layer with 1024 nodes, a ReLU activation layer, a fully connected layer with 1024 nodes, and a ReLU activation layer; the fifth layer of the classification branch is a fully connected layer with 2 nodes, whose output is converted into probability form by a softmax operation; and the fifth layer of the positioning branch is a fully connected layer with 8 nodes.
4. The real-time guideboard detection method combining object detection and object tracking according to claim 1, wherein in step (2) the constructed guideboard detection network is pre-trained on the COCO data set, and the pre-trained network parameters are taken as the initial parameters of the guideboard detection network.
5. The real-time guideboard detection method combining object detection and object tracking according to claim 1, wherein in step (3) the training process of the guideboard detection network specifically comprises: computing a position loss function from the guideboard detection coordinates obtained in step (2) and the guideboard coordinates marked in step (1), and computing a classification loss function from the guideboard confidence obtained in step (2); adding the two loss functions, differentiating the total loss function, and updating the value of each node in the network by stochastic gradient descent with momentum correction to obtain the guideboard detection network parameters.
6. The real-time guideboard detection method combining object detection and object tracking according to claim 1, wherein in step (3) the training images are subjected to random image augmentation, the augmentation operations including random horizontal flipping, random picture scaling, random cropping, random rotation, and the like.
7. The real-time guideboard detection method combining object detection and object tracking according to claim 1, wherein in step (5), during the tracking stage the tracked coordinates are checked every fixed number of frames, a guideboard's coordinates continue to be tracked only if the same guideboard has been detected in at least a fixed number of consecutive frames, and the number of consecutive tracking frames is set as required.
8. The real-time guideboard detection method combining object detection and object tracking according to claim 1, wherein in step (6) the last continuously tracked frame is taken as a new initial frame, new guideboard detection coordinates are obtained as in step (4), the new guideboard detection coordinates and the guideboard tracking coordinates of the last frame are integrated by the NMS algorithm to obtain the guideboard coordinates in the current frame, and these coordinates are input to the KCF algorithm of step (5).
CN201911405434.9A 2019-12-30 2019-12-30 Real-time guideboard detection method combining object detection and object tracking Pending CN111126331A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911405434.9A CN111126331A (en) 2019-12-30 2019-12-30 Real-time guideboard detection method combining object detection and object tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911405434.9A CN111126331A (en) 2019-12-30 2019-12-30 Real-time guideboard detection method combining object detection and object tracking

Publications (1)

Publication Number Publication Date
CN111126331A true CN111126331A (en) 2020-05-08

Family

ID=70506029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911405434.9A Pending CN111126331A (en) 2019-12-30 2019-12-30 Real-time guideboard detection method combining object detection and object tracking

Country Status (1)

Country Link
CN (1) CN111126331A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170091950A1 (en) * 2015-09-30 2017-03-30 Fotonation Limited Method and system for tracking an object
CN107316001A (en) * 2017-05-31 2017-11-03 天津大学 Small and intensive method for traffic sign detection in a kind of automatic Pilot scene
CN108320297A (en) * 2018-03-09 2018-07-24 湖北工业大学 A kind of video object method for real time tracking and system
CN110490073A (en) * 2019-07-15 2019-11-22 浙江省北大信息技术高等研究院 Object detection method, device, equipment and storage medium
CN110619279A (en) * 2019-08-22 2019-12-27 天津大学 Road traffic sign instance segmentation method based on tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BIAO HOU et al.: "Object detection and tracking based on convolutional neural networks for high-resolution optical remote sensing video" *
唐聪 et al.: "Visual tracking method based on deep learning object detection" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930877A (en) * 2020-09-18 2020-11-13 蘑菇车联信息科技有限公司 Map guideboard generation method and electronic equipment
CN111930877B (en) * 2020-09-18 2021-01-05 蘑菇车联信息科技有限公司 Map guideboard generation method and electronic equipment

Similar Documents

Publication Publication Date Title
CN108846826B (en) Object detection method, object detection device, image processing apparatus, and storage medium
CN108334881B (en) License plate recognition method based on deep learning
CN106971185B (en) License plate positioning method and device based on full convolution network
CN111967313B (en) Unmanned aerial vehicle image annotation method assisted by deep learning target detection algorithm
CN109767619B (en) Intelligent networking pure electric vehicle running condition prediction method
CN110348332B (en) Method for extracting multi-target real-time trajectories of non-human machines in traffic video scene
CN112232351B (en) License plate recognition system based on deep neural network
CN111008979A (en) Robust night image semantic segmentation method
CN113516664A (en) Visual SLAM method based on semantic segmentation dynamic points
CN111598175B (en) Detector training optimization method based on online difficult case mining mode
CN111582029A (en) Traffic sign identification method based on dense connection and attention mechanism
CN113792606A (en) Low-cost self-supervision pedestrian re-identification model construction method based on multi-target tracking
CN103605960B (en) A kind of method for identifying traffic status merged based on different focal video image
CN107247967B (en) Vehicle window annual inspection mark detection method based on R-CNN
CN111126331A (en) Real-time guideboard detection method combining object detection and object tracking
CN111354016A (en) Unmanned aerial vehicle ship tracking method and system based on deep learning and difference value hashing
CN104376538A (en) Image sparse denoising method
CN112907972A (en) Road vehicle flow detection method and system based on unmanned aerial vehicle and computer readable storage medium
CN111325811B (en) Lane line data processing method and processing device
CN111380529B (en) Mobile device positioning method, device and system and mobile device
CN112493228B (en) Laser bird repelling method and system based on three-dimensional information estimation
CN115761177A (en) Meta-universe-oriented three-dimensional reconstruction method for cross-border financial places
CN115482282A (en) Dynamic SLAM method with multi-target tracking capability in automatic driving scene
CN115512263A (en) Dynamic visual monitoring method and device for falling object
CN114445332A (en) Multi-scale detection method based on FASTER-RCNN model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination