CN116778224A - Vehicle tracking method based on video stream deep learning - Google Patents

Vehicle tracking method based on video stream deep learning

Info

Publication number
CN116778224A
Authority
CN
China
Prior art keywords
vehicle
frame
target
video stream
information
Prior art date
Legal status
Pending
Application number
CN202310521333.8A
Other languages
Chinese (zh)
Inventor
成卫锋
魏巨平
严绮文
许东荣
林毅安
储玮
彭魁
潘文欣
杨智钧
巢永辉
Current Assignee
Guangzhou South China Road And Bridge Industry Co ltd
Original Assignee
Guangzhou South China Road And Bridge Industry Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou South China Road And Bridge Industry Co ltd
Priority to CN202310521333.8A
Publication of CN116778224A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of vehicle management and discloses a vehicle tracking method based on video stream deep learning, comprising the following steps. First step: acquire road video information. Second step: detect the vehicle target in each frame of the video stream with the YOLOv7 algorithm. Third step: assign a corresponding ID to each vehicle target obtained in the second step and track the targets frame by frame. Fourth step: output the video stream data from the third step, annotated with IDs and tracking boxes. This vehicle tracking method places few restrictions on the video data and can make full use of the surveillance cameras widely deployed along traffic roads; compared with vehicle tracking based on a satellite positioning navigation system, its data sources are broad and its data volume is large.

Description

Vehicle tracking method based on video stream deep learning
Technical Field
The invention relates to the field of vehicle management, in particular to a vehicle tracking method based on video stream deep learning.
Background
At present, vehicle tracking methods mainly rely on satellite positioning navigation systems: a satellite positioning chip is built into the vehicle system, and the terminal's position is calculated from the positions broadcast by the GNSS satellites and the distances between the terminal and those satellites, thereby realizing vehicle positioning and tracking.
However, the prior art has the following problems:
some automobiles do not contain a satellite positioning chip in their systems and therefore cannot be tracked by a satellite positioning navigation system;
the satellite positioning navigation function is realized locally on the vehicle, so acquiring this information requires relatively high authorization;
position information may leak while it is being transmitted. For these reasons, a vehicle tracking method based on video stream deep learning is proposed.
Disclosure of Invention
(I) Technical problems to be solved
Aiming at the shortcomings of the prior art, the invention provides a vehicle tracking method based on video stream deep learning, which solves the above problems.
(II) Technical solution
In order to achieve the above purpose, the present invention provides the following technical solution: a vehicle tracking method based on video stream deep learning, comprising the following steps:
The first step: acquiring road video information and importing the data;
The second step: the lane recognition module detects lane lines with OpenCV built-in algorithms based on the geometric features of lane lines and generates a cubic polynomial expression for each lane line, which is used to judge the lane in which the currently detected target vehicle is located;
The third step: the vehicle detection module detects the vehicle target in each frame of the video stream using the YOLOv7 algorithm;
The fourth step: based on the StrongSORT algorithm, the vehicle tracking module assigns a corresponding ID to each initial-frame target detected by the vehicle detection module and tracks the targets frame by frame;
The fifth step: the traffic flow tracking information statistics module calculates vehicle position information, lane change information and vehicle speed information;
The sixth step: outputting the above information.
Preferably, the specific content of the second step is:
S1: acquire a frame with few vehicles in the scene and detect the edges of objects in the image with the Canny edge detection algorithm; detect straight lines in the image with the Hough transform, adding a minimum line-length constraint to distinguish lane lines from other regularly shaped objects; divide the detected lines into different lane line lists according to their slopes; fit the endpoints in each lane line list to a cubic polynomial with the poly1d function; and finally output the result;
S2: delimit the region of the scene closer to the monitoring camera, set an image mask, and pass it to the subsequent modules; because the mounting position and angle of each camera differ, the required region is delimited by manual drawing.
Preferably, the specific content of the third step is:
S1: resize the input picture to 640x640;
S2: input it into the Backbone network;
S3: output three feature maps of different sizes through the Head network;
S4: output the prediction results through the Rep and conv layers;
S5: detect the vehicle target in each frame of the video stream and pass it to the vehicle tracking module.
Preferably, the specific content of the fourth step is:
S1: initialize an ID for each detected target, i.e., initialize a track;
S2: extract appearance features for each target with BoT; because this baseline method employs ResNeSt50 as the backbone network, richer feature information can be extracted;
S3: perform motion compensation for the camera using the enhanced correlation coefficient (ECC) maximization method, and predict the track position with the NSA Kalman filtering algorithm, in which the measurement noise covariance is computed adaptively as $\tilde{R}_k = (1 - c_k) R_k$, where $R_k$ is the preset constant measurement noise covariance and $c_k$ is the detection confidence score in state $k$;
S4: calculate the matching cost between detections and tracks by combining appearance features and motion information; the cost matrix is defined as $C = \lambda A_a + (1 - \lambda) A_m$, where the weight coefficient $\lambda$ is set to 0.98; then match the detected targets with the tracks according to the cost matrix using a global linear matching algorithm, i.e., the Hungarian algorithm;
S5: after a successful match, the target is assigned the corresponding ID and the track's appearance state is updated by exponential moving average: for the appearance feature $e_i^t$ of the $i$-th track at frame $t$, $e_i^t = \alpha e_i^{t-1} + (1 - \alpha) f_i^t$, where $f_i^t$ is the appearance embedding vector of the currently matched target and the update coefficient $\alpha$ is set to 0.9; in addition, the relevant parameters of the NSA Kalman filter are updated to predict the next frame, the specific parameters being the bounding box center position (x, y), aspect ratio a, and height h.
Preferably, the specific content of the fifth step is:
S1: combine the lane line information with the vehicle information and determine the lane of each ID target using the point-line relationship in two-dimensional space;
S2: when a vehicle changes lanes, record its ID and the corresponding frame sequence;
S3: based on the mask image of the speed measuring area, record the frames in which each ID target enters and leaves the area; divide the frame-number difference by the frame rate to obtain the time difference, and calculate the average speed of each ID target in the area from the length of road spanned by the speed measuring area.
(III) Beneficial effects
Compared with the prior art, the vehicle tracking method based on video stream deep learning provided by the invention has the following beneficial effects:
1. The method places few restrictions on the video data and can make full use of the surveillance cameras widely deployed along traffic roads; compared with vehicle tracking using a satellite positioning navigation system, its data sources are broad and its data volume is large.
2. Because the data processing part is not located at the terminal, acquiring the position information does not require the vehicle user's permission, so the authorization threshold is relatively low; at the same time, the data is relatively centralized and data leakage is unlikely to occur.
Drawings
FIG. 1 is a schematic diagram of a system architecture of the present invention;
FIG. 2 is a flow chart of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIGS. 1-2, a vehicle tracking method based on video stream deep learning generally comprises three parts: a data input part, a data processing part and a data output part.
The data input part acquires road video through roadside cameras and preprocesses the video information;
The data processing part includes a vehicle target detection module and a vehicle target tracking module: a vehicle detection algorithm detects vehicle targets in the video area frame by frame, and a target tracking algorithm then tracks each target.
The data output part outputs video stream data annotated with IDs and tracking boxes, as in the end-to-end sketch below.
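The following minimal Python sketch, assuming OpenCV, illustrates how the three parts fit together; `detector` and `tracker` are hypothetical stand-ins for the detection and tracking modules sketched in the steps below, not names from the patent.

```python
import cv2

def run_pipeline(video_path, out_path, detector, tracker):
    """Data input -> processing (detect + track) -> output annotated with IDs and boxes."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detections = detector(frame)                # frame-by-frame vehicle detection
        tracks = tracker.update(detections, frame)  # assign/maintain per-target IDs (assumed API)
        for track_id, (x1, y1, x2, y2) in tracks:
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
            cv2.putText(frame, str(track_id), (int(x1), int(y1) - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        writer.write(frame)                         # output stream with IDs and boxes
    cap.release()
    writer.release()
```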
The specific method of the scheme is as follows:
The first step: acquiring road video information and importing the data into the data processing module;
The second step: the data processing module processes and analyzes the video data; it specifically comprises a lane recognition module, a vehicle detection module, a vehicle tracking module and a traffic flow tracking information statistics module;
The third step: the lane recognition module detects lane lines with OpenCV built-in algorithms based on the geometric features of lane lines, generates a cubic polynomial expression for each lane line, and passes it to the subsequent modules, which use it to judge the lane in which the currently detected target vehicle is located.
S1: and acquiring a frame of image with fewer vehicles in the scene, and detecting the edges of objects in the image by using a Canny edge detection algorithm. Because the special geometric characteristics of the lane lines, namely the edges of the lane lines are straight lines, the straight lines in the image can be detected by using Hough transformation, and the limitation of the length of the straight lines can be added for distinguishing the lane lines from other objects with regular shapes. On the basis, because the video acquired by the camera is in perspective projection, namely the acquired two-dimensional scene has the effect of displaying the object as 'near-large-far-small', lane lines which are distributed in parallel in the real scene are not parallel in the image, so that the detected straight lines can be divided into different lane line lists according to different slopes of the straight lines, then endpoints in each lane line list are respectively fitted into a cubic polynomial expression function by utilizing a poly1d function, and finally output is carried out.
S2: dividing a speed measuring area: the lane recognition module is also responsible for demarcating a speed measuring area, and the module demarcates an area which is closer to the monitoring camera in a scene due to the requirement of the follow-up accuracy of vehicle speed measurement, and the area is transferred to the follow-up module by setting an image mask, and because the setting positions and angles of all cameras are different, the required area is demarcated by adopting a manual drawing mode.
The fourth step: the vehicle detection module detects the target (i.e., the vehicle) in each frame of the video stream using the YOLOv7 algorithm;
S1: resize the input picture to 640x640;
S2: input it into the Backbone network;
S3: output three feature maps of different sizes through the Head network;
S4: output the prediction results through the Rep and conv layers;
S5: detect the vehicle target in each frame of the video stream and pass it to the vehicle tracking module; a sketch of this detection pass follows.
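A hedged sketch of the detection pass: the torch.hub entry point and the YOLOv5-style results API are assumptions about how a YOLOv7 checkpoint is commonly loaded, and the COCO vehicle class IDs are likewise assumed, not specified by the patent.

```python
import cv2
import numpy as np
import torch

# Assumed loader: the community YOLOv7 repo (WongKinYiu/yolov7) exposes a
# custom-weights entry point through torch.hub; substitute whatever loader
# matches your checkpoint.
model = torch.hub.load('WongKinYiu/yolov7', 'custom', 'yolov7.pt')
model.eval()

def detect_vehicles(frame, conf_thres=0.25):
    """S1-S5: resize to 640x640, forward through Backbone -> Head -> Rep/conv,
    keep vehicle-class boxes, and hand them to the tracking module."""
    img = cv2.resize(frame, (640, 640))      # S1: fixed input size
    results = model(img)                     # S2-S4: network forward pass
    dets = results.xyxy[0].cpu().numpy()     # assumed columns: x1,y1,x2,y2,conf,cls
    vehicle_classes = {2, 5, 7}              # assumed COCO IDs: car, bus, truck
    keep = [d[:5] for d in dets
            if int(d[5]) in vehicle_classes and d[4] >= conf_thres]
    return np.asarray(keep)                  # S5: pass detections to the tracker
```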
The fifth step: using the vehicle tracking module and the StrongSORT algorithm, assign a corresponding ID to each initial-frame target detected by the vehicle detection module, and track the targets frame by frame;
S1: initialize an ID for each target (detection) output by the vehicle detection module, i.e., initialize a track;
S2: extract appearance features for each target with BoT; because this baseline method employs ResNeSt50 as the backbone network, richer feature information can be extracted;
S3: perform motion compensation for the camera using the enhanced correlation coefficient (ECC) maximization method, and predict the track position with the NSA Kalman filtering algorithm, in which the measurement noise covariance is computed adaptively as $\tilde{R}_k = (1 - c_k) R_k$, where $R_k$ is the preset constant measurement noise covariance and $c_k$ is the detection confidence score in state $k$;
S4: calculate the matching cost between detections and tracks by combining appearance features and motion information; the cost matrix is defined as $C = \lambda A_a + (1 - \lambda) A_m$, where the weight coefficient $\lambda$ is set to 0.98; then match the detected targets with the tracks according to the cost matrix using a global linear matching algorithm, i.e., the Hungarian algorithm;
S5: after a successful match, the target is assigned the corresponding ID and the track's appearance state is updated by exponential moving average (EMA): for the appearance feature $e_i^t$ of the $i$-th track at frame $t$, $e_i^t = \alpha e_i^{t-1} + (1 - \alpha) f_i^t$, where $f_i^t$ is the appearance embedding vector of the currently matched target and the update coefficient $\alpha$ is set to 0.9; in addition, the relevant parameters of the NSA Kalman filter are updated to predict the next frame, the specific parameters being the bounding box center position (x, y), aspect ratio a, and height h. A sketch of the S3-S5 computations follows.
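A minimal sketch of the S3-S5 computations, assuming NumPy and SciPy; the gating threshold in `fuse_and_match` and the unit-norm step in `ema_update` are illustrative assumptions, while the formulas and the 0.98/0.9 coefficients come from the description above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

LAMBDA = 0.98  # appearance/motion weight from the description
ALPHA = 0.9    # EMA update coefficient from the description

def nsa_noise_covariance(R_k, c_k):
    """S3: NSA Kalman adaptive measurement noise, R~_k = (1 - c_k) * R_k;
    a confident detection (c_k near 1) shrinks the assumed measurement noise."""
    return (1.0 - c_k) * R_k

def fuse_and_match(appearance_cost, motion_cost, lam=LAMBDA, max_cost=0.7):
    """S4: fuse costs as C = lam * A_a + (1 - lam) * A_m, then solve the global
    linear assignment (Hungarian algorithm). max_cost is an assumed gate."""
    C = lam * appearance_cost + (1.0 - lam) * motion_cost
    rows, cols = linear_sum_assignment(C)
    return [(r, c) for r, c in zip(rows, cols) if C[r, c] <= max_cost]

def ema_update(e_prev, f_cur, alpha=ALPHA):
    """S5: e_i^t = alpha * e_i^(t-1) + (1 - alpha) * f_i^t, then re-normalize
    (the unit-norm step is an assumption common to EMA appearance banks)."""
    e = alpha * e_prev + (1.0 - alpha) * f_cur
    return e / np.linalg.norm(e)
```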
The sixth step: the traffic flow tracking information statistics module calculates and outputs vehicle position information, lane change information and vehicle speed information;
S1: determine the lane of each ID target from the lane line information (the functions fitted to the lane lines) and the vehicle information (ID, category and position) obtained by the preceding modules, computed from the point-line relationship in two-dimensional space;
S2: when a vehicle changes lanes, record its ID and the corresponding frame sequence;
S3: based on the mask image of the speed measuring area, record the frames in which each ID target enters and leaves the area; divide the frame-number difference by the frame rate to obtain the time difference, and calculate the average speed of each ID target from the length of road spanned by the speed measuring area, as in the sketch below.
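A minimal sketch of the statistics in S1-S3; the frame rate, zone length, and left-to-right lane-index convention are illustrative assumptions.

```python
FPS = 25.0            # assumed camera frame rate
ZONE_LENGTH_M = 50.0  # assumed real-world road length of the speed measuring area

def lane_of(point, lane_polys):
    """S1: point-line relation in 2D; counting the fitted lane lines that lie to
    the left of the vehicle's reference point gives its lane index (assumed)."""
    x, y = point
    return sum(1 for poly in lane_polys if poly(y) < x)

def average_speed(frame_in, frame_out, fps=FPS, length_m=ZONE_LENGTH_M):
    """S3: v = L / ((f_out - f_in) / fps), in metres per second."""
    return length_m / ((frame_out - frame_in) / fps)

# Example: entering the zone at frame 1000 and leaving at frame 1060 on a 25 fps
# stream over a 50 m zone gives 50 / (60 / 25) ≈ 20.8 m/s (about 75 km/h).
```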
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (5)

1. A vehicle tracking method based on video stream deep learning, characterized by comprising the following steps:
the first step: acquiring road video information and importing the data;
the second step: the lane recognition module detects lane lines with OpenCV built-in algorithms based on the geometric features of lane lines and generates a cubic polynomial expression for each lane line, which is used to judge the lane in which the currently detected target vehicle is located;
the third step: the vehicle detection module detects the vehicle target in each frame of the video stream using the YOLOv7 algorithm;
the fourth step: based on the StrongSORT algorithm, the vehicle tracking module assigns a corresponding ID to each initial-frame target detected by the vehicle detection module and tracks the targets frame by frame;
the fifth step: the traffic flow tracking information statistics module calculates vehicle position information, lane change information and vehicle speed information;
the sixth step: outputting the above information.
2. The vehicle tracking method based on video stream deep learning according to claim 1, characterized in that the specific content of the second step is:
S1: acquire a frame with few vehicles in the scene and detect the edges of objects in the image with the Canny edge detection algorithm; detect straight lines in the image with the Hough transform, adding a minimum line-length constraint to distinguish lane lines from other regularly shaped objects; divide the detected lines into different lane line lists according to their slopes; fit the endpoints in each lane line list to a cubic polynomial with the poly1d function; and finally output the result;
S2: delimit the region of the scene closer to the monitoring camera, set an image mask, and pass it to the subsequent modules; because the mounting position and angle of each camera differ, the required region is delimited by manual drawing.
3. The vehicle tracking method based on video stream deep learning according to claim 1, characterized in that the specific content of the third step is:
S1: resize the input picture to 640x640;
S2: input it into the Backbone network;
S3: output three feature maps of different sizes through the Head network;
S4: output the prediction results through the Rep and conv layers;
S5: detect the vehicle target in each frame of the video stream and pass it to the vehicle tracking module.
4. The vehicle tracking method based on video stream deep learning according to claim 1, characterized in that the specific content of the fourth step is:
S1: initialize an ID for each detected target, i.e., initialize a track;
S2: extract appearance features for each target with BoT; because this baseline method employs ResNeSt50 as the backbone network, richer feature information can be extracted;
S3: perform motion compensation for the camera using the enhanced correlation coefficient (ECC) maximization method, and predict the track position with the NSA Kalman filtering algorithm, in which the measurement noise covariance is computed adaptively as $\tilde{R}_k = (1 - c_k) R_k$, where $R_k$ is the preset constant measurement noise covariance and $c_k$ is the detection confidence score in state $k$;
S4: calculate the matching cost between detections and tracks by combining appearance features and motion information; the cost matrix is defined as $C = \lambda A_a + (1 - \lambda) A_m$, where the weight coefficient $\lambda$ is set to 0.98; then match the detected targets with the tracks according to the cost matrix using a global linear matching algorithm, i.e., the Hungarian algorithm;
S5: after a successful match, the target is assigned the corresponding ID and the track's appearance state is updated by exponential moving average: for the appearance feature $e_i^t$ of the $i$-th track at frame $t$, $e_i^t = \alpha e_i^{t-1} + (1 - \alpha) f_i^t$, where $f_i^t$ is the appearance embedding vector of the currently matched target and the update coefficient $\alpha$ is set to 0.9; in addition, the relevant parameters of the NSA Kalman filter are updated to predict the next frame, the specific parameters being the bounding box center position (x, y), aspect ratio a, and height h.
5. The vehicle tracking method based on video stream deep learning according to claim 1, characterized in that the specific content of the fifth step is:
S1: combine the lane line information with the vehicle information and determine the lane of each ID target using the point-line relationship in two-dimensional space;
S2: when a vehicle changes lanes, record its ID and the corresponding frame sequence;
S3: based on the mask image of the speed measuring area, record the frames in which each ID target enters and leaves the area; divide the frame-number difference by the frame rate to obtain the time difference, and calculate the average speed of each ID target in the area from the length of road spanned by the speed measuring area.
CN202310521333.8A 2023-05-09 2023-05-09 Vehicle tracking method based on video stream deep learning Pending CN116778224A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310521333.8A | 2023-05-09 | 2023-05-09 | CN116778224A (en) Vehicle tracking method based on video stream deep learning

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310521333.8A | 2023-05-09 | 2023-05-09 | CN116778224A (en) Vehicle tracking method based on video stream deep learning

Publications (1)

Publication Number | Publication Date
CN116778224A (en) | 2023-09-19

Family

ID=88012339

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310521333.8A | CN116778224A (en) Vehicle tracking method based on video stream deep learning | 2023-05-09 | 2023-05-09

Country Status (1)

Country Link
CN (1) CN116778224A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136447A * 2019-05-23 2019-08-16 杭州诚道科技股份有限公司 Method for detecting driving lane changes and recognizing illegal lane changes
WO2020048027A1 (en) * 2018-09-06 2020-03-12 惠州市德赛西威汽车电子股份有限公司 Robust lane line detection method based on dynamic region of interest
CN111652033A (en) * 2019-12-12 2020-09-11 苏州奥易克斯汽车电子有限公司 Lane line detection method based on OpenCV
CN112101433A (en) * 2020-09-04 2020-12-18 东南大学 Automatic lane-dividing vehicle counting method based on YOLO V4 and DeepsORT
CN113077496A (en) * 2021-04-16 2021-07-06 中国科学技术大学 Real-time vehicle detection and tracking method and system based on lightweight YOLOv3 and medium
CN114463724A (en) * 2022-04-11 2022-05-10 南京慧筑信息技术研究院有限公司 Lane extraction and recognition method based on machine vision
CN115661683A * 2022-07-01 2023-01-31 北京科技大学 Vehicle identification statistical method based on a multi-attention mechanism network

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118247986A (en) * 2024-05-23 2024-06-25 清华大学 Vehicle cooperative control method for single signal intersection under mixed traffic flow


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination