CN113516688A - Multidimensional intelligent positioning and tracking system for vehicle - Google Patents

Multidimensional intelligent positioning and tracking system for vehicle

Info

Publication number
CN113516688A
Authority
CN
China
Prior art keywords
characteristic point
vehicle
scene image
tracking system
acquisition equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110815453.XA
Other languages
Chinese (zh)
Inventor
郭五洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Yunxi Communication Technology Co., Ltd.
Original Assignee
Hefei Yunxi Communication Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Yunxi Communication Technology Co., Ltd.
Priority to CN202110815453.XA
Publication of CN113516688A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a positioning and tracking system, in particular to a multidimensional intelligent positioning and tracking system for a vehicle. The system comprises a video acquisition module mounted on the vehicle for capturing road surface images and scene images while the vehicle is driving. The system obtains first feature points from adjacent frames of scene images, calculates their spatial positions in combination with the working state of the acquisition equipment, uses those spatial positions to calculate the spatial positions of second feature points between the subsequent frame scene image and the next frame scene image, and obtains first vehicle position information from the spatial positions of the second feature points; it also obtains second vehicle position information from the road surface images and scene images, and determines the vehicle position by combining the first and second vehicle position information. The technical scheme provided by the invention effectively overcomes the inability of the prior art to accurately position and track a target vehicle in a complex traffic environment.

Description

Multidimensional intelligent positioning and tracking system for vehicle
Technical Field
The invention relates to a positioning and tracking system, in particular to a multi-dimensional intelligent positioning and tracking system for a vehicle.
Background
With the rapid economic and social development in China, the logistics industry has grown quickly, and convenient, fast road transport has become an important distribution mode thanks to well-developed highway infrastructure and a huge automobile market. The route accuracy, transport efficiency and cargo safety of logistics vehicles during distribution affect the development of the logistics industry to a certain extent. Monitoring logistics vehicles in real time through positioning technology, and grasping their transport information promptly, accurately and comprehensively, can therefore effectively improve the competitiveness of the logistics industry in terms of service, efficiency and safety.
At present, a large number of target tracking algorithms have been proposed and widely applied. Most target tracking and detection algorithms operate on RGB images; Fast R-CNN, Faster R-CNN, YOLO and the like are successful deep-learning-based target detection algorithms. In addition, online multi-target tracking based on the Markov Decision Process (MDP) framework only achieves good results when the road environment is not complicated. With the rise of stereoscopic vision, however, research has increasingly turned to the feasibility of 3D bounding-box target tracking algorithms.
Commonly used vehicle tracking methods include: region-based tracking algorithms, which assume the vehicle is a connected block of pixels, determine the tracking target, and compute the similarity between the features of that block and those of the detected block; and model-based tracking algorithms, which match an established target model library against the detected moving target to achieve tracking. In a complex traffic environment, however, these methods still have shortcomings in handling target occlusion, modelling complex motion, and trading off computational complexity against accuracy.
Disclosure of Invention
Technical problem to be solved
To address the shortcomings of the prior art, the invention provides a multi-dimensional intelligent positioning and tracking system for vehicles, which effectively overcomes the inability of the prior art to accurately position and track a target vehicle in a complex traffic environment.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme:
a multidimensional intelligent positioning and tracking system for a vehicle comprises a video acquisition module which is arranged on the vehicle and used for shooting road images and scene images in the driving process of the vehicle, wherein the multidimensional intelligent positioning and tracking system obtains a first characteristic point according to adjacent frame scene images, calculates the spatial position of the first characteristic point by combining the working state of acquisition equipment, calculates the spatial position of a second characteristic point between a next frame scene image and a next frame scene image by using the spatial position of the first characteristic point, and obtains first vehicle position information according to the spatial position of the second characteristic point;
the multi-dimensional intelligent positioning and tracking system obtains second vehicle position information according to the road surface image and the scene image, and comprehensively judges the vehicle position by combining the first vehicle position information and the second vehicle position information.
Preferably, the system further includes a first spatial position calculation unit that calculates the spatial position of the first feature point, the first spatial position calculation unit comprising:
the first characteristic point determining module is used for determining a first characteristic point between adjacent frame scene images;
the first data calculation module calculates a first transformation matrix between adjacent frame scene images based on the first characteristic point;
the acquisition equipment state acquisition module is used for acquiring working state parameters of the video acquisition equipment;
the first depth calculation module is used for calculating the depth value of the first characteristic point based on the first transformation matrix and the working state parameter of the video acquisition equipment;
and the first spatial position calculation module is used for calculating the three-dimensional spatial coordinates of the first characteristic points based on the depth values of the first characteristic points, the first transformation matrix and the working state parameters of the video acquisition equipment.
Preferably, the first data calculation module calculates a first rotation matrix and a first translation matrix between adjacent scene images based on the first feature point.
Preferably, the operating state parameters of the video capture device include: the three-dimensional space coordinates of the video acquisition equipment and the moving distance of the video acquisition equipment for shooting the scene images of the adjacent frames.
Preferably, the system further includes a second spatial position calculation unit that calculates the spatial position of a second feature point between the subsequent frame scene image and the next frame scene image by using the spatial position of the first feature point, the second spatial position calculation unit comprising:
the second characteristic point determining module is used for determining a second characteristic point between the subsequent frame scene image and the next frame scene image;
the second data calculation module is used for calculating a second transformation matrix between the subsequent frame scene image and the next frame scene image based on the second characteristic point;
the second depth calculation module is used for calculating the depth value of the second characteristic point based on the depth value of the first characteristic point and the second transformation matrix;
and the second space position calculation module is used for calculating the three-dimensional space coordinate of the second characteristic point based on the three-dimensional space coordinate of the first characteristic point, the depth value of the second characteristic point and the second transformation matrix.
Preferably, the second data calculation module calculates a second rotation matrix and a second translation matrix between a subsequent scene image and a next scene image based on the second feature point.
Preferably, the subsequent frame scene image is the later frame of the adjacent frame scene images.
Preferably, the system further comprises a precise positioning unit for obtaining the first vehicle position information according to the spatial position of the second feature point, wherein the precise positioning unit comprises:
the acquisition equipment pose determining module is used for determining the pose of the video acquisition equipment when the video acquisition equipment shoots the next frame of scene image based on the second characteristic point and the three-dimensional space coordinate of the second characteristic point;
and the first vehicle position judging module is used for determining the three-dimensional space coordinates of the vehicle in the next frame of scene image according to the pose of the video acquisition equipment when the video acquisition equipment shoots the next frame of scene image, so as to obtain the first vehicle position information.
Preferably, the system further includes a dynamic fusion positioning unit for obtaining the second vehicle position information according to the road surface image and the scene image, and the dynamic fusion positioning unit includes:
the visual map database building module is used for building a visual map based on the visual map database;
and the second vehicle position judging module is used for comprehensively determining the three-dimensional space coordinates of the vehicle through the road surface images and the scene images of the continuous frames under the condition of constructing the visual map to obtain the second vehicle position information.
Preferably, the system further includes a vehicle position comprehensive judgment module for comprehensively judging the vehicle position according to the first vehicle position information obtained by the precise positioning unit and the second vehicle position information obtained by the dynamic fusion positioning unit.
(III) advantageous effects
Compared with the prior art, the multi-dimensional intelligent positioning and tracking system for a vehicle uses the scene images acquired by the video acquisition module to obtain the first vehicle position information, so that accurate positioning information of the target vehicle can be obtained continuously across consecutive frames of scene images. Based on a visual odometer and a visual map, it can also obtain the second vehicle position information from consecutive frames of road surface images and scene images, and it judges the vehicle position comprehensively from the first and second vehicle position information, so that the target vehicle can be accurately positioned and tracked in a complex traffic environment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a schematic diagram of the dynamic fusion positioning unit of the system shown in FIG. 1 according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A multidimensional intelligent positioning and tracking system for a vehicle, as shown in FIG. 1, comprises a video acquisition module mounted on the vehicle for capturing road surface images and scene images while the vehicle is driving. The multidimensional intelligent positioning and tracking system obtains first feature points from adjacent frames of scene images, calculates the spatial positions of the first feature points in combination with the working state of the acquisition equipment, uses those spatial positions to calculate the spatial positions of second feature points between the subsequent frame scene image and the next frame scene image, and obtains first vehicle position information according to the spatial positions of the second feature points.
The first spatial position calculating unit is configured to calculate a spatial position of the first feature point, and specifically includes:
the first characteristic point determining module is used for determining a first characteristic point between adjacent frame scene images;
the first data calculation module calculates a first transformation matrix between adjacent frame scene images based on the first characteristic point;
the acquisition equipment state acquisition module is used for acquiring working state parameters of the video acquisition equipment;
the first depth calculation module is used for calculating the depth value of the first characteristic point based on the first transformation matrix and the working state parameter of the video acquisition equipment;
and the first spatial position calculation module is used for calculating the three-dimensional spatial coordinates of the first characteristic points based on the depth values of the first characteristic points, the first transformation matrix and the working state parameters of the video acquisition equipment.
The first data calculation module calculates a first rotation matrix and a first translation matrix between adjacent scene images based on the first feature points. The working state parameters of the video acquisition equipment comprise: the three-dimensional space coordinates of the video acquisition equipment and the moving distance of the video acquisition equipment for shooting the scene images of the adjacent frames.
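The modules above correspond to a standard two-view geometry pipeline: match feature points between the adjacent frames, recover the rotation and translation between the two exposures, triangulate the matches, and fix the metric scale with the camera displacement taken from the working-state parameters. The following Python sketch illustrates one way this could be realised with OpenCV and NumPy; the libraries, function names and the assumption of a calibrated pinhole camera are illustrative choices, not part of the patent.

    import cv2
    import numpy as np

    def first_feature_spatial_positions(frame_p1, frame_p2, K, camera_displacement_m):
        """Estimate metric 3D positions of the first feature points from two adjacent scene frames.

        frame_p1, frame_p2    : grayscale adjacent frame scene images
        K                     : 3x3 camera intrinsic matrix (assumed known from calibration)
        camera_displacement_m : distance the camera moved between the two exposures,
                                taken from the acquisition equipment's working state
        """
        # First characteristic point determining module: detect and match feature points.
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(frame_p1, None)
        kp2, des2 = orb.detectAndCompute(frame_p2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # First data calculation module: first rotation matrix R1 and translation t1
        # (the "first transformation matrix") recovered from the essential matrix.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R1, t1, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

        # First depth calculation module: depth of each first feature point by triangulation.
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R1, t1])
        X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # 4xN homogeneous, up to scale
        X = (X_h[:3] / X_h[3]).T                              # Nx3, in the first frame's camera frame

        # First spatial position calculation module: fix the metric scale using the known
        # camera displacement between the two adjacent frames.
        scale = camera_displacement_m / np.linalg.norm(t1)
        return X * scale, R1, t1 * scale

The depth value of a first feature point is then simply the z-component of its row in the returned coordinates, expressed in the camera frame of the earlier of the two adjacent frames.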
The second spatial position calculation unit calculates the spatial position of a second feature point between the subsequent scene image and the next scene image using the spatial position of the first feature point, and includes:
the second characteristic point determining module is used for determining a second characteristic point between the subsequent frame scene image and the next frame scene image;
the second data calculation module is used for calculating a second transformation matrix between the subsequent frame scene image and the next frame scene image based on the second characteristic points;
the second depth calculation module is used for calculating the depth value of the second characteristic point based on the depth value of the first characteristic point and the second transformation matrix;
and the second space position calculation module is used for calculating the three-dimensional space coordinate of the second characteristic point based on the three-dimensional space coordinate of the first characteristic point, the depth value of the second characteristic point and the second transformation matrix.
The second data calculation module calculates a second rotation matrix and a second translation matrix between the subsequent frame scene image and the next frame scene image based on the second feature points.
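One way to realise the second spatial position calculation unit is to track the first feature points from the subsequent frame p2 into the next frame p3 and let their already-known metric 3D coordinates carry the scale. The sketch below assumes that the second feature points are exactly those re-observed first feature points and that their 3D coordinates have been re-expressed in p2's camera frame; both are assumptions made for illustration, as is the use of OpenCV's optical flow and PnP routines.

    import cv2
    import numpy as np

    def second_feature_spatial_positions(frame_p2, frame_p3, K, pts_in_p2, X_p2):
        """Second transformation matrix and 3D coordinates of the second feature points.

        pts_in_p2 : Nx2 pixel locations in the subsequent frame p2 of the first feature points
        X_p2      : Nx3 metric 3D coordinates of those points, expressed in p2's camera frame
        """
        # Second characteristic point determining module: track the points from p2 into p3.
        prev = pts_in_p2.reshape(-1, 1, 2).astype(np.float32)
        curr, status, _ = cv2.calcOpticalFlowPyrLK(frame_p2, frame_p3, prev, None)
        ok = status.ravel() == 1
        if ok.sum() < 6:
            raise RuntimeError("too few tracked points to estimate the second transformation")
        X_ok = X_p2[ok]
        pts3 = curr[ok].reshape(-1, 2)

        # Second data calculation module: second rotation matrix R2 and translation t2,
        # already metric because the 3D points carry the scale of the first feature points.
        _, rvec, t2, _ = cv2.solvePnPRansac(X_ok.astype(np.float64), pts3.astype(np.float64), K, None)
        R2, _ = cv2.Rodrigues(rvec)

        # Second depth calculation module: depth in p3 is the z-component of the point
        # transformed into p3's camera frame; the second spatial position calculation
        # module keeps the full 3D coordinates.
        X_p3 = (R2 @ X_ok.T + t2).T
        depths_p3 = X_p3[:, 2]
        return X_p3, depths_p3, R2, t2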
The accurate positioning unit obtains first vehicle position information according to the spatial position of the second characteristic point, and specifically includes:
the acquisition equipment pose determining module is used for determining the pose of the video acquisition equipment when the video acquisition equipment shoots the next frame of scene image based on the second characteristic point and the three-dimensional space coordinate of the second characteristic point;
and the first vehicle position judging module is used for determining the three-dimensional space coordinates of the vehicle in the next frame of scene image according to the pose of the video acquisition equipment when the video acquisition equipment shoots the next frame of scene image, so as to obtain the first vehicle position information.
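The accurate positioning unit can then turn the camera pose at the next frame into world coordinates of the tracked vehicle. The sketch below assumes the vehicle has already been detected in p3 as a pixel location and been assigned a depth, for example the median depth of the second feature points inside its bounding box; neither the detector nor the depth-assignment rule is specified by the patent, so both are illustrative assumptions.

    import numpy as np

    def first_vehicle_position(R2, t2, K, target_pixel, target_depth):
        """First vehicle position information: 3D coordinates of the tracked vehicle.

        R2, t2       : pose of the camera at the next frame p3 relative to the reference
                       frame of p2 (second rotation and translation matrices)
        target_pixel : (u, v) pixel of the tracked vehicle in p3 (assumed detector output)
        target_depth : depth assigned to the vehicle in p3 (assumed, e.g. from nearby
                       second feature points)
        """
        # Back-project the pixel into a 3D point in p3's camera frame.
        u, v = target_pixel
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        X_cam = ray * (target_depth / ray[2])

        # Acquisition equipment pose determining module: camera orientation and centre
        # in the reference frame when shooting p3.
        R_wc = R2.T
        cam_centre = -R_wc @ t2.ravel()

        # First vehicle position judging module: vehicle coordinates in the reference frame.
        return R_wc @ X_cam + cam_centre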
In the technical scheme of the application, the subsequent frame scene image is the later frame of the adjacent frame scene images. The adjacent frame scene images include a scene image p1 captured at time t1 and a scene image p2 captured at time t1 + Δt, and the next frame scene image refers to the scene image p3 captured at time t1 + 2Δt. Of course, the technical scheme of the application is not limited to these three frames of scene images; the scheme extends continuously along the sequence of the video stream, so that the scene images acquired by the video acquisition module can be used to obtain accurate positioning information of the target vehicle in consecutive frames of scene images.
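The following is a minimal sketch of how the three-frame scheme can slide along the video stream so that positioning runs continuously; reading frames with cv2.VideoCapture and obtaining the camera displacement from a separate odometry source are assumptions made for illustration.

    import cv2
    from collections import deque

    def run_continuous_tracking(video_path, K, displacement_provider, process_triplet):
        """Slide a three-frame window (p1, p2, p3) along the scene-image video stream.

        displacement_provider : callable returning the camera displacement between the two
                                oldest frames (a working-state parameter of the equipment)
        process_triplet       : callable implementing the first/second spatial position and
                                precise positioning steps sketched above
        """
        cap = cv2.VideoCapture(video_path)
        window = deque(maxlen=3)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            window.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            if len(window) == 3:
                p1, p2, p3 = window
                process_triplet(p1, p2, p3, K, displacement_provider())
        cap.release()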
As shown in fig. 2, the multidimensional intelligent positioning and tracking system obtains second vehicle position information according to the road surface image and the scene image, and comprehensively determines the vehicle position by combining the first vehicle position information and the second vehicle position information.
The dynamic fusion positioning unit obtains second vehicle position information according to the road surface image and the scene image, and specifically comprises:
the visual map database building module is used for building a visual map based on the visual map database;
and the second vehicle position judging module is used for comprehensively determining the three-dimensional space coordinates of the vehicle through the road surface images and the scene images of the continuous frames under the condition of constructing the visual map to obtain the second vehicle position information.
In this technical scheme, if the visual map has not yet been constructed, features in the acquired road surface images are matched by an optical flow method and the position is obtained from a visual odometer. If the visual map has been constructed, the vehicle position is obtained from the acquired scene images using a positioning technique built on the visual map; the three-dimensional space coordinates of the vehicle can also be determined comprehensively by combining the two methods.
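For the visual-odometer branch (no visual map yet), one plausible realisation is to track road-surface texture with pyramidal Lucas-Kanade optical flow and convert the median pixel displacement into a planar motion increment. The ground resolution, the planar-road assumption and the externally supplied heading are all assumptions made for this sketch, not details given in the patent.

    import cv2
    import numpy as np

    def road_vo_step(road_prev, road_curr, metres_per_pixel, position_xy, heading_rad):
        """One visual-odometer update from two consecutive road surface images.

        metres_per_pixel : ground resolution of the road camera, assumed known from the
                           camera height and intrinsics
        heading_rad      : current vehicle heading, assumed available from another source
        """
        # Track sparse features on the road texture with pyramidal Lucas-Kanade optical flow.
        pts_prev = cv2.goodFeaturesToTrack(road_prev, maxCorners=300, qualityLevel=0.01, minDistance=7)
        if pts_prev is None:
            return position_xy
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(road_prev, road_curr, pts_prev, None)
        ok = status.ravel() == 1
        flow = (pts_curr[ok] - pts_prev[ok]).reshape(-1, 2)
        if len(flow) == 0:
            return position_xy

        # Median image displacement -> metric displacement on the (assumed planar) road,
        # rotated into the world frame by the current heading.
        du, dv = np.median(flow, axis=0) * metres_per_pixel
        c, s = np.cos(heading_rad), np.sin(heading_rad)
        return position_xy + np.array([c * du - s * dv, s * du + c * dv])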
To counter the drift that accumulates when the visual odometer runs for a long time and its estimate diverges, static environment features are introduced to update the vehicle position, so that a continuous, accurate trajectory is obtained and the accumulated error is reduced.
In the technical scheme of the application, the vehicle position comprehensive judgment module is further used for comprehensively judging the position of the vehicle according to the first vehicle position information obtained by the accurate positioning unit and the second vehicle position information obtained by the dynamic fusion positioning unit.
The first vehicle position information obtained by the accurate positioning unit and the second vehicle position information obtained by the dynamic fusion positioning unit are comprehensively judged through the vehicle position comprehensive judgment module, so that the obtained positioning result is more accurate, and the target vehicle can be accurately positioned and tracked in a complex traffic environment.
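The patent does not specify how the vehicle position comprehensive judgment module combines the two estimates. One plausible rule, sketched below, is inverse-variance weighting, so that whichever of the first and second vehicle position estimates is currently more trustworthy dominates; the per-axis variances used as confidence measures are an assumption.

    import numpy as np

    def fuse_vehicle_positions(pos1, var1, pos2, var2):
        """Fuse the first (per-frame precise) and second (odometer/map) position estimates.

        pos1, pos2 : 3-vectors of vehicle coordinates from the two units
        var1, var2 : per-axis variances expressing confidence in each source (assumed)
        """
        pos1, pos2 = np.asarray(pos1, float), np.asarray(pos2, float)
        var1, var2 = np.asarray(var1, float), np.asarray(var2, float)
        w1 = var2 / (var1 + var2)                    # lower variance -> higher weight
        fused = w1 * pos1 + (1.0 - w1) * pos2
        fused_var = (var1 * var2) / (var1 + var2)    # never larger than either input
        return fused, fused_var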
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A multi-dimensional intelligent positioning and tracking system for a vehicle, characterized in that: the multi-dimensional intelligent positioning and tracking system obtains a first characteristic point according to adjacent frames of scene images, calculates the spatial position of the first characteristic point in combination with the working state of the acquisition equipment, calculates the spatial position of a second characteristic point between the subsequent frame scene image and the next frame scene image by using the spatial position of the first characteristic point, and obtains first vehicle position information according to the spatial position of the second characteristic point;
the multi-dimensional intelligent positioning and tracking system obtains second vehicle position information according to the road surface image and the scene image, and comprehensively judges the vehicle position by combining the first vehicle position information and the second vehicle position information.
2. The multi-dimensional intelligent position tracking system for vehicles according to claim 1, characterized in that: further comprising a first spatial position calculation unit that calculates a spatial position of the first feature point, the first spatial position calculation unit including:
the first characteristic point determining module is used for determining a first characteristic point between adjacent frame scene images;
the first data calculation module calculates a first transformation matrix between adjacent frame scene images based on the first characteristic point;
the acquisition equipment state acquisition module is used for acquiring working state parameters of the video acquisition equipment;
the first depth calculation module is used for calculating the depth value of the first characteristic point based on the first transformation matrix and the working state parameter of the video acquisition equipment;
and the first spatial position calculation module is used for calculating the three-dimensional spatial coordinates of the first characteristic points based on the depth values of the first characteristic points, the first transformation matrix and the working state parameters of the video acquisition equipment.
3. The multi-dimensional intelligent position tracking system for vehicles according to claim 2, characterized in that: the first data calculation module calculates a first rotation matrix and a first translation matrix between adjacent scene images based on the first feature points.
4. The multi-dimensional intelligent position tracking system for vehicles of claim 3, wherein: the working state parameters of the video acquisition equipment comprise: the three-dimensional space coordinates of the video acquisition equipment and the moving distance of the video acquisition equipment for shooting the scene images of the adjacent frames.
5. The multi-dimensional intelligent position tracking system for vehicles according to claim 2, characterized in that: the second spatial position calculation unit calculates a spatial position of a second feature point between a subsequent scene image and a next scene image by using the spatial position of the first feature point, and the second spatial position calculation unit includes:
the second characteristic point determining module is used for determining a second characteristic point between the subsequent frame scene image and the next frame scene image;
the second data calculation module is used for calculating a second transformation matrix between the subsequent frame scene image and the next frame scene image based on the second characteristic points;
the second depth calculation module is used for calculating the depth value of the second characteristic point based on the depth value of the first characteristic point and the second transformation matrix;
and the second space position calculation module is used for calculating the three-dimensional space coordinate of the second characteristic point based on the three-dimensional space coordinate of the first characteristic point, the depth value of the second characteristic point and the second transformation matrix.
6. The multi-dimensional intelligent position tracking system for vehicles of claim 5, wherein: the second data calculation module calculates a second rotation matrix and a second translation matrix between the subsequent frame scene image and the next frame scene image based on the second characteristic points.
7. The multi-dimensional intelligent positioning and tracking system for vehicles according to claim 5 or 6, characterized in that: the subsequent frame scene image is the later frame of the adjacent frame scene images.
8. The multi-dimensional intelligent position tracking system for vehicles of claim 5, wherein: further comprising an accurate positioning unit for obtaining the first vehicle position information according to the spatial position of the second characteristic point, the accurate positioning unit comprising:
the acquisition equipment pose determining module is used for determining the pose of the video acquisition equipment when the video acquisition equipment shoots the next frame of scene image based on the second characteristic point and the three-dimensional space coordinate of the second characteristic point;
and the first vehicle position judging module is used for determining the three-dimensional space coordinates of the vehicle in the next frame of scene image according to the pose of the video acquisition equipment when the video acquisition equipment shoots the next frame of scene image, so as to obtain the first vehicle position information.
9. The multi-dimensional intelligent position tracking system for vehicles of claim 8, wherein: further comprising a dynamic fusion positioning unit for obtaining the second vehicle position information according to the road surface image and the scene image, the dynamic fusion positioning unit comprising:
the visual map database building module is used for building a visual map based on the visual map database;
and the second vehicle position judging module is used for comprehensively determining the three-dimensional space coordinates of the vehicle through the road surface images and the scene images of the continuous frames under the condition of constructing the visual map to obtain the second vehicle position information.
10. The multi-dimensional intelligent position tracking system for vehicles of claim 9, wherein: further comprising a vehicle position comprehensive judgment module for comprehensively judging the position of the vehicle according to the first vehicle position information obtained by the accurate positioning unit and the second vehicle position information obtained by the dynamic fusion positioning unit.
CN202110815453.XA · Priority date 2021-07-19 · Filing date 2021-07-19 · Multidimensional intelligent positioning and tracking system for vehicle · Status: Pending · Published as CN113516688A

Priority Applications (1)

Application number: CN202110815453.XA (published as CN113516688A) · Priority date: 2021-07-19 · Filing date: 2021-07-19 · Title: Multidimensional intelligent positioning and tracking system for vehicle

Applications Claiming Priority (1)

Application number: CN202110815453.XA (published as CN113516688A) · Priority date: 2021-07-19 · Filing date: 2021-07-19 · Title: Multidimensional intelligent positioning and tracking system for vehicle

Publications (1)

Publication number: CN113516688A · Publication date: 2021-10-19

Family

ID=78067400

Family Applications (1)

Application number: CN202110815453.XA (pending, published as CN113516688A) · Priority date: 2021-07-19 · Filing date: 2021-07-19 · Title: Multidimensional intelligent positioning and tracking system for vehicle

Country Status (1)

CN: CN113516688A

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160232410A1 (en) * 2015-02-06 2016-08-11 Michael F. Kelly Vehicle speed detection
CN109752008A (en) * 2019-03-05 2019-05-14 长安大学 Intelligent vehicle multi-mode co-located system, method and intelligent vehicle
CN111060924A (en) * 2019-12-02 2020-04-24 北京交通大学 SLAM and target tracking method
CN112767480A (en) * 2021-01-19 2021-05-07 中国科学技术大学 Monocular vision SLAM positioning method based on deep learning
CN113009533A (en) * 2021-02-19 2021-06-22 智道网联科技(北京)有限公司 Vehicle positioning method and device based on visual SLAM and cloud server



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination