CN115615450A - Planning method, system and medium for reducing dependence on positioning and high-precision map of road section area under structured road - Google Patents
- Publication number
- CN115615450A (application CN202211335304.4A)
- Authority
- CN
- China
- Prior art keywords
- node
- positioning
- road
- planning
- structured
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation specially adapted for navigation in a road network
- G01C21/28—Navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
- G01C21/3446—Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3815—Road data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/931—Lidar systems specially adapted for anti-collision purposes of land vehicles
- G06V10/40—Extraction of image or video features
- G06V10/82—Image or video recognition or understanding using neural networks
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Abstract
The invention relates to a planning method, system and medium for reducing the dependence on positioning and high-precision maps of a road section area under a structured road, comprising the following steps. G1: fuse the ground lane line image data and the travelable-area point cloud data for a preset road, and output a fusion grid map. G2: based on the fusion grid map, set a cost value k = 100 for static obstacles that must not be collided with, and a cost value k = 20 for solid lines that must not be crossed. G3: based on the fusion cost grid map, combined with the high-precision map and the navigation path data obtained by positioning, search a driving path using a hybrid A* algorithm and output an optimal path. G4: judge whether a dynamic obstacle exists ahead and decide whether to overtake or yield to it according to the optimal path, finally generating a planned trajectory. The invention not only improves the adaptability and flexibility of automatic driving, but also further improves the accuracy of the vehicle's driving trajectory.
Description
Technical Field
The invention relates to the technical field of unmanned-vehicle automatic driving, and in particular to a planning method, system and medium for reducing the dependence on positioning and high-precision maps of a road section area under a structured road.
Background
The mainstream technical solution for automatic driving generally consists of sensing, positioning, high-precision map, planning and control modules. Combined with positioning information, a high-precision map can provide a large amount of static environment information for automatic driving, but this scheme has two disadvantages. First, the high-precision map depends on high-accuracy positioning: when positioning is inaccurate, the environment information the map provides is also inaccurate, which affects the driving direction of the autonomous vehicle. Second, high-precision maps have inherent limitations, such as the inability to update in real time and high cost.
This invention aims to provide a planning method for road section areas under structured roads that reduces the dependence on positioning and high-precision maps: it plans driving in the road section area using only a simple map (e.g., one providing only road-level navigation information) and rough positioning, thereby improving the adaptability and flexibility of automatic driving.
Disclosure of Invention
In view of the above problems, the present invention provides a planning method, system and medium for reducing the dependence on positioning and high-precision maps of a road section area under a structured road, which not only improves the adaptability and flexibility of automatic driving but also further improves the accuracy of the vehicle's driving trajectory.
In order to achieve the above objects and other related objects, the present invention provides the following technical solutions:
a planning method for reducing the dependence on positioning and high-precision maps for the area of a road section under a structured road comprises the following steps:
g1: extracting a ground lane line through a vehicle-mounted camera based on a preset road, extracting a travelable area through laser radar point cloud, fusing ground lane line image data and travelable area point cloud data, and outputting a fusion grid map;
G2: setting a cost value k = 100 for static obstacles that must not be collided with, a cost value k = 20 for solid lines that must not be crossed, a cost value k = 2 for broken lines that may be crossed, and a cost value k = 0 for all other cells, and outputting a fusion cost grid map;
G3: based on the fusion cost grid map, combined with the high-precision map and the navigation path data obtained by positioning, searching a driving path using a hybrid A* algorithm and outputting an optimal path;
G4: judging whether a dynamic obstacle exists ahead and deciding whether to overtake or yield to it according to the optimal path, thereby obtaining a speed profile and finally generating a planned trajectory.
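The cost assignment of step G2 can be sketched as follows. This is an illustrative mapping from semantic cell labels to the k values named above, not the patent's implementation; the label encoding is an assumption.

```python
# Assumed semantic labels per grid cell (the patent does not specify an encoding):
FREE, DASHED_LINE, SOLID_LINE, STATIC_OBSTACLE = 0, 1, 2, 3

# Cost values k from step G2.
COST = {
    STATIC_OBSTACLE: 100,  # must not be collided with
    SOLID_LINE: 20,        # must not be crossed
    DASHED_LINE: 2,        # may be crossed; small cost discourages long line-pressing
    FREE: 0,
}

def fuse_cost_grid(semantic_grid):
    """Map a 2-D semantic grid to the fusion cost grid of step G2."""
    return [[COST[cell] for cell in row] for row in semantic_grid]

grid = [[FREE, DASHED_LINE, SOLID_LINE],
        [FREE, FREE, STATIC_OBSTACLE]]
print(fuse_cost_grid(grid))  # [[0, 2, 20], [0, 0, 100]]
```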
Further preferably, in step G3, the hybrid A* algorithm comprises the following steps:
g31: determining a starting point according to the vehicle positioning and high-precision map based on the fusion cost grid map, and establishing an OpenList list and a CloseList list;
G32: taking the node with the minimum f value out of the OpenList, moving it into the CloseList, and obtaining the cost estimate node.h based on a Reeds-Shepp curve;
G33: acquiring the cost value node.g consumed from the starting point to the node according to the fusion cost grid map;
G34: judging, based on vehicle positioning and the high-precision map, whether the target point has been reached; if not, returning to step G32; if so, checking with a Reeds-Shepp curve whether an obstacle lies on the curve; if an obstacle exists, returning to step G32, otherwise outputting the vehicle driving path.
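The search loop of steps G31–G34 can be sketched on a plain grid. This is a simplified A* over grid cells, not the full hybrid A*: the real method expands continuous vehicle poses and uses a Reeds-Shepp curve for node.h, whereas here node.h is a Manhattan-distance stand-in and motion is 4-connected.

```python
import heapq

def a_star(cost_grid, start, goal):
    """Grid A* skeleton illustrating steps G31-G34 with an OpenList
    (min-f priority queue) and a CloseList (expanded set)."""
    rows, cols = len(cost_grid), len(cost_grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # stand-in for node.h
    open_list = [(h(start), 0, start, [start])]   # (f, g, node, path)  -- G31
    close_list = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # min-f node (G32)
        if node in close_list:
            continue
        close_list.add(node)
        if node == goal:                             # target reached (G34)
            return path
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and cost_grid[nr][nc] < 100:
                ng = g + 1 + cost_grid[nr][nc]       # node.g from the cost grid (G33)
                heapq.heappush(open_list,
                               (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [0, 100, 0],   # k = 100 cell treated as impassable
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 2)))  # a shortest path around the obstacle
```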
Further preferably, the OpenList is the set of candidate search nodes, and f is a function measuring the quality of each node.
Further preferably, node.f = node.h + node.g, where node.f is the cost value of a node, node.h is the estimated cost of continuing to expand from the current node to the target node, and node.g is the cost consumed by expanding from the source node to the current node.
Further preferably, in step G1, the fusion of the ground lane line image data and the travelable-area point cloud data comprises the following steps:
G11: a preprocessing stage, in which Gaussian filtering is applied to the ground lane line image data to remove Gaussian noise, and the travelable-area point cloud data is then filtered to remove outliers;
g12: and performing multi-task learning on the image data of the ground lane lines and the point cloud data of the travelable area based on a SplatNet2D-3D network to obtain a fusion grid map.
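The preprocessing of step G11 can be sketched as follows. The Gaussian kernel follows the standard definition; the point cloud filter is a generic range/height crop, since the patent does not name a specific point-cloud filtering method — both thresholds are illustrative assumptions.

```python
import math

def gaussian_kernel_1d(sigma=1.0, radius=1):
    """Discrete 1-D Gaussian, normalized to sum to 1 (applied separably)."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def gaussian_blur(image, sigma=1.0):
    """Separable Gaussian filter for the camera image (step G11).
    `image` is a list of rows of grayscale values; borders are clamped."""
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    h, w = len(image), len(image[0])
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    # Horizontal pass, then vertical pass.
    tmp = [[sum(k[j + r] * image[y][clamp(x + j, 0, w - 1)] for j in range(-r, r + 1))
            for x in range(w)] for y in range(h)]
    return [[sum(k[j + r] * tmp[clamp(y + j, 0, h - 1)][x] for j in range(-r, r + 1))
             for x in range(w)] for y in range(h)]

def filter_cloud(points, max_range=50.0, min_z=-2.5, max_z=1.0):
    """Crude point-cloud clean-up: drop lidar returns that are out of range
    or outside a plausible height band (thresholds are assumptions)."""
    return [(x, y, z) for x, y, z in points
            if math.hypot(x, y) <= max_range and min_z <= z <= max_z]
```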
Further preferably, in step G12, the SplatNet2D-3D network includes: CNN1 and CNN2 modules for extracting 2D image features; a BCL_2D→3D module, which embeds the 2D image features extracted by CNN1 into a 3D lattice space; a 2D-3D Fuse module for jointly learning 2D image features and 3D point cloud features; and a BCL_3D→2D module for mapping features embedded in 3D to features in 2D.
Further preferably, the BCL_3D→2D module interpolates the features output by the 2D-3D Fuse module into the 2D image space according to the relationship between the 3D space coordinates and the 2D pixel coordinates of the feature points.
Further preferably, the BCL_2D→3D module embeds the 2D image features into a 3D lattice space by computing the corresponding 3D coordinates of the pixels of the 2D features.
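A minimal sketch of the 2D→3D correspondence such a module relies on, assuming a calibrated pinhole camera with intrinsics fx, fy, cx, cy and a known depth per pixel (the patent does not specify the camera model — this is an assumption):

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with known depth into camera-frame 3-D
    coordinates using a pinhole model -- the kind of 2D-to-3D correspondence
    a BCL_2D->3D-style embedding needs."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps straight ahead of the camera.
print(pixel_to_3d(320, 240, 5.0, 500.0, 500.0, 320.0, 240.0))  # (0.0, 0.0, 5.0)
```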
To achieve the above and other related objects, the present invention also provides a planning system for reducing the dependence on location and high-precision maps for a structured road segment area, comprising a computer device programmed or configured to execute the steps of any one of the methods for reducing the dependence on location and high-precision maps for a structured road segment area.
To achieve the above and other related objects, the present invention also provides a computer readable storage medium having stored thereon a computer program programmed or configured to execute any one of the methods for reducing the dependence on location and high-precision maps for a section area under a structured road.
The invention has the following positive effects:
1) The invention uses the vehicle-mounted camera to acquire road lane line images and the lidar to acquire travelable-area point cloud data; combining the two accurately guides the vehicle's travel path.
2) The invention adopts high-precision map and navigation positioning as auxiliary navigation, thereby further improving the adaptability and flexibility of automatic driving.
3) According to the invention, based on the SplatNet2D-3D network, multi-task learning is carried out on the ground lane line image data and the travelable area point cloud data to obtain the fusion grid map, so that the accuracy of the fusion grid map is improved.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a flow chart of the hybrid A* algorithm of the present invention.
Detailed Description
In order to explain the technical contents, the objects and the effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Example (b): as shown in fig. 1, a planning method for reducing the dependence on positioning and high-precision maps for the area of a road segment under a structured road comprises the following steps:
g1: extracting a ground lane line through a vehicle-mounted camera based on a preset road, extracting a travelable area through laser radar point cloud, fusing ground lane line image data and travelable area point cloud data, and outputting a fused grid map;
G2: setting a cost value k = 100 for static obstacles that must not be collided with, a cost value k = 20 for solid lines that must not be crossed, a cost value k = 2 for broken lines that may be crossed (to discourage prolonged line-pressing), and a cost value k = 0 for all other cells, and outputting a fusion cost grid map;
G3: based on the fusion cost grid map, combined with the high-precision map and the navigation path data obtained by positioning, searching a driving path using a hybrid A* algorithm and outputting an optimal path;
G4: judging whether a dynamic obstacle exists ahead and deciding whether to overtake or yield to it according to the optimal path, thereby obtaining a speed profile and finally generating a planned trajectory.
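The overtake/yield decision of step G4 can be sketched as a toy rule. The gap threshold and the returned actions are illustrative assumptions, not values from the patent.

```python
def plan_speed(ego_speed, obstacle_ahead, obstacle_speed, gap, min_gap=15.0):
    """Toy overtake/yield decision for step G4 (thresholds are assumptions).
    Returns an action and the target speed for the speed profile."""
    if not obstacle_ahead:
        return "keep", ego_speed
    if gap > min_gap and ego_speed > obstacle_speed:
        return "overtake", ego_speed                    # keep speed, pass along the path
    return "yield", min(ego_speed, obstacle_speed)      # fall in behind the obstacle

print(plan_speed(15.0, True, 10.0, 30.0))  # ('overtake', 15.0)
print(plan_speed(15.0, True, 10.0, 8.0))   # ('yield', 10.0)
```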
As shown in FIG. 2, in step G3, the hybrid A* algorithm comprises the following steps:
g31: determining a starting point according to vehicle positioning and a high-precision map based on the fusion cost grid map, and establishing an OpenList list and a CloseList list;
G32: taking the node with the minimum f value out of the OpenList, moving it into the CloseList, and obtaining the cost estimate node.h based on a Reeds-Shepp curve;
G33: acquiring the cost value node.g consumed from the starting point to the node according to the fusion cost grid map;
G34: judging, based on vehicle positioning and the high-precision map, whether the target point has been reached; if not, returning to step G32; if so, checking with a Reeds-Shepp curve whether an obstacle lies on the curve; if an obstacle exists, returning to step G32, otherwise outputting the vehicle driving path.
The hybrid A* algorithm maintains a set of candidate search nodes called the OpenList. Whenever the next node to search must be determined, the node with the minimum f value is extracted from the OpenList. Here f is a function measuring the quality of each node: the cost value node.f of a node splits into two parts, node.f = node.g + node.h, where node.g is the cost consumed by expanding from the source node to the current node and corresponds to the cost grid map of the driving environment (the cell costs of the traversed grid map contribute to node.g), and node.h is the estimated cost of continuing to expand from the current node to the target node.
In step G1, the fusion of the ground lane line image data and the travelable region point cloud data includes the following steps:
G11: a preprocessing stage, in which Gaussian filtering is applied to the ground lane line image data to remove Gaussian noise, and the travelable-area point cloud data is then filtered to remove outliers;
g12: and performing multi-task learning on the ground lane line image data and the travelable area point cloud data based on a SplatNet2D-3D network to obtain a fusion grid map.
In step G12, the SplatNet2D-3D network includes: CNN1 and CNN2 modules for extracting 2D image features; a BCL_2D→3D module, which embeds the 2D image features extracted by CNN1 into a 3D lattice space; a 2D-3D Fuse module for jointly learning 2D image features and 3D point cloud features; and a BCL_3D→2D module for mapping features embedded in 3D to features in 2D.
The BCL_3D→2D module aims to assist 2D prediction tasks, such as 2D semantic segmentation, with point cloud information by mapping features embedded in 3D to features in 2D. Specifically, it interpolates the features into the 2D image space according to the relationship between the 3D space coordinates and the 2D pixel coordinates of the feature points, which is the inverse of the 2D→3D process.
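The inverse, 3D→2D, correspondence can be sketched under the same pinhole-camera assumption (intrinsics fx, fy, cx, cy are assumed calibrated): this projects a camera-frame 3D feature point to the pixel it would be interpolated into.

```python
def project_to_pixel(point, fx, fy, cx, cy):
    """Project a camera-frame 3-D point onto the image plane -- the
    3D-to-2D correspondence a BCL_3D->2D-style interpolation follows.
    Returns None for points behind the camera."""
    x, y, z = point
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# A point straight ahead projects to the principal point.
print(project_to_pixel((0.0, 0.0, 5.0), 500.0, 500.0, 320.0, 240.0))  # (320.0, 240.0)
```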
To achieve the above and other related objects, the present invention also provides a planning system for reducing the dependence on location and high-precision maps for a structured under-road segment area, comprising a computer device programmed or configured to execute the steps of any one of the methods for reducing the dependence on location and high-precision maps for a structured under-road segment area.
To achieve the above and other related objects, the present invention also provides a computer readable storage medium having stored thereon a computer program programmed or configured to execute any one of the methods for reducing the dependence on location and high-precision maps for a section area under a structured road.
In conclusion, the invention not only improves the adaptability and flexibility of automatic driving, but also further improves the accuracy of the vehicle running track.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.
Claims (10)
1. A planning method for reducing the dependence on positioning and high-precision maps for the areas of road sections under a structured road is characterized by comprising the following steps:
g1: extracting a ground lane line through a vehicle-mounted camera based on a preset road, extracting a travelable area through laser radar point cloud, fusing ground lane line image data and travelable area point cloud data, and outputting a fused grid map;
g2: setting a cost value k =100 for a static obstacle which cannot collide, setting a cost value k =20 for a solid line which cannot be crossed, setting a cost value k =2 for a broken line which can be crossed, setting a cost value k =0 for other types than the above types, and outputting a fusion cost grid map;
G3: based on the fusion cost grid map, combined with the high-precision map and the navigation path data obtained by positioning, searching a driving path using a hybrid A* algorithm and outputting an optimal path;
G4: judging whether a dynamic obstacle exists ahead and deciding whether to overtake or yield to it according to the optimal path, to obtain a speed profile and finally generate a planned trajectory.
2. The planning method for reducing the dependence on positioning and high-precision maps of a road section area under a structured road according to claim 1, characterized in that: in step G3, the hybrid A* algorithm comprises the following steps:
g31: determining a starting point according to vehicle positioning and a high-precision map based on the fusion cost grid map, and establishing an OpenList list and a CloseList list;
G32: taking the node with the minimum f value out of the OpenList, moving it into the CloseList, and obtaining the cost estimate node.h based on a Reeds-Shepp curve;
G33: acquiring the cost value node.g consumed from the starting point to the node according to the fusion cost grid map;
G34: judging, based on vehicle positioning and the high-precision map, whether the target point has been reached; if not, returning to step G32; if so, checking with a Reeds-Shepp curve whether an obstacle lies on the curve; if an obstacle exists, returning to step G32, otherwise outputting the vehicle driving path.
3. The planning method for reducing the dependence on positioning and high-precision maps of a road section area under a structured road according to claim 2, characterized in that: the OpenList is the set of candidate search nodes, and f is a function measuring the quality of each node.
4. The planning method for reducing the dependence on positioning and high-precision maps of a road section area under a structured road according to claim 2, characterized in that: node.f = node.h + node.g, where node.f is the cost value of a node, node.h is the estimated cost of continuing to expand from the current node to the target node, and node.g is the cost consumed by expanding from the source node to the current node.
5. The planning method for reducing the dependence on positioning and high-precision maps of a road section area under a structured road according to claim 1, characterized in that: in step G1, the fusion of the ground lane line image data and the travelable-area point cloud data comprises the following steps:
G11: a preprocessing stage, in which Gaussian filtering is applied to the ground lane line image data to remove Gaussian noise, and the travelable-area point cloud data is then filtered to remove outliers;
g12: and performing multi-task learning on the ground lane line image data and the travelable area point cloud data based on a SplatNet2D-3D network to obtain a fusion grid map.
6. The planning method for reducing the dependence on positioning and high-precision maps of a road section area under a structured road according to claim 5, characterized in that: in step G12, the SplatNet2D-3D network includes: CNN1 and CNN2 modules for extracting 2D image features; a BCL_2D→3D module, which embeds the 2D image features extracted by CNN1 into a 3D lattice space; a 2D-3D Fuse module for jointly learning 2D image features and 3D point cloud features; and a BCL_3D→2D module for mapping features embedded in 3D to features in 2D.
7. The planning method for reducing the dependence on positioning and high-precision maps of a road section area under a structured road according to claim 6, characterized in that: the BCL_3D→2D module interpolates the features output by the 2D-3D Fuse module into the 2D image space according to the relationship between the 3D space coordinates and the 2D pixel coordinates of the feature points.
8. The planning method for reducing the dependence on positioning and high-precision maps of a road section area under a structured road according to claim 6, characterized in that: the BCL_2D→3D module embeds the 2D image features into a 3D lattice space by computing the corresponding 3D coordinates of the pixels of the 2D features.
9. A planning system for reducing the dependence on positioning and high-precision maps of a road section area under a structured road, comprising a computer device, characterized in that the computer device is programmed or configured to perform the steps of the planning method of any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that it stores a computer program programmed or configured to carry out the planning method for reducing the dependence on positioning and high-precision maps of a road section area under a structured road of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211335304.4A CN115615450A (en) | 2022-10-28 | 2022-10-28 | Planning method, system and medium for reducing dependence on positioning and high-precision map of road section area under structured road |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115615450A true CN115615450A (en) | 2023-01-17 |
Family
ID=84876285
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211335304.4A Pending CN115615450A (en) | 2022-10-28 | 2022-10-28 | Planning method, system and medium for reducing dependence on positioning and high-precision map of road section area under structured road |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115615450A (en) |
- 2022-10-28: Application CN202211335304.4A filed; patent CN115615450A (China), status Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |