CN113734176A - Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium - Google Patents

Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium

Info

Publication number
CN113734176A
CN113734176A (application CN202111100769.7A)
Authority
CN
China
Prior art keywords
vehicle
point cloud
grid
radar
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111100769.7A
Other languages
Chinese (zh)
Inventor
彭祥军
王宽
闫耀威
程光凯
林鑫余
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202111100769.7A
Publication of CN113734176A
Legal status: Pending

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an environment sensing system and method for an intelligent driving vehicle, a vehicle, and a storage medium. A point cloud UDP (User Datagram Protocol) packet output by a laser radar is acquired and passed to a raw point cloud driver for processing, producing a structured point cloud data format. IMU information is acquired and used to derive the transformation between the radar pose at which each point was captured and the initial pose, and all points of each frame are transformed back to the initial radar pose. The distortion-compensated point cloud is then combined with a radar calibration file to convert it from the radar coordinate system to the vehicle body coordinate system. RTK positioning information and high-precision map information are acquired, and the system judges whether an RTK signal is present. If no RTK signal is present, the system judges whether each point lies within the lane boundary; if so, the point is retained and used to build a grid map, otherwise it is filtered out. Target clustering and visualization are then performed according to the grid occupancy state. The invention avoids filtering out traffic participants in other lanes at intersections.

Description

Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium
Technical Field
The invention belongs to the technical field of environment perception for intelligent driving vehicles, and particularly relates to an environment sensing system and method for an intelligent driving vehicle, a vehicle, and a storage medium.
Background
An intelligent driving vehicle needs its sensing system to maintain stable detection across different environments and working conditions. Such a system generally comprises millimeter-wave radar, ultrasonic sensors, surround-view cameras, and laser radars. A vision sensor used as the primary sensor is strongly affected by lighting; fusing multiple laser radars for target detection can effectively improve the adaptability and robustness of the sensing system, especially when weak light degrades or defeats the vision sensor's target detection capability. In addition, compared with a single-radar system, a sensing system combining multiple radars, RTK, and a high-precision map has smaller blind areas and stronger perception of the surrounding environment, which effectively enhances sensing safety and improves user experience. For example, patent document CN111060923A discloses a method and system for detecting driving obstacles with multiple laser radars that combines a high-precision map with a laser radar; however, it must determine the ROI area by means of other sensors, has no fallback once the high-precision map is lost, and does not consider the high-frequency intersection scenario. Patent document CN111413684A discloses a method based on multi-lidar data fusion that makes full use of the data of multiple lidars, but it suffers from complicated conversions, time-consuming processing, no extensibility, and consequently limited practicability. In practice, tall objects outside the road often cause serious false detections and wasted computation for the lidar, degrading the user experience.
Therefore, there is a need to develop a new environment sensing system, method, vehicle and storage medium for an intelligent driving vehicle.
Disclosure of Invention
The invention aims to provide an environment sensing system and method for an intelligent driving vehicle, a vehicle, and a storage medium that can accurately establish a region of interest when the high-precision map and the RTK signal are lost, and that avoid filtering out traffic participants in other lanes at intersections.
In a first aspect, the present invention provides an environment sensing method for an intelligent driving vehicle, comprising the steps of:
acquiring a point cloud UDP packet output by a laser radar, and passing it to a raw point cloud driver for processing to obtain a structured point cloud data format;
acquiring IMU information, deriving from it the transformation between the radar pose at which each point was captured and the initial pose, and transforming all points of each frame back to the initial radar pose, i.e., performing point cloud distortion compensation;
combining the distortion-compensated point cloud with a radar calibration file to convert it from the radar coordinate system to the vehicle body coordinate system, obtaining the point cloud in the vehicle body coordinate system;
acquiring RTK positioning information and high-precision map information, and judging whether an RTK signal is present;
if an RTK signal is present, judging whether the vehicle is located at an intersection; if it is, fitting the chain-facility information to a roadside boundary line by spline interpolation and, combined with the ego-vehicle positioning given by RTK (real-time kinematic), judging whether each point lies within the lane boundary: if so, retaining it and building the grid map, otherwise filtering it out; if the vehicle is not at an intersection, retaining the points and building the grid map;
if no RTK signal is present, fitting a roadside boundary line by spline interpolation from the road median and road edges detected by the ego-vehicle's sensors, and judging whether each point lies within the lane boundary: if so, retaining it and building the grid map, otherwise filtering it out;
after the complete grid map is obtained, performing target clustering and visualization according to the grid occupancy state.
Optionally, the grid map is established as follows:
setting the size and resolution of the grid map, and, combined with the position of the vehicle within the grid, determining the grid region, i.e., the forward, backward, and lateral detection distances;
judging whether each laser radar point in vehicle body coordinates is a noise point: if so, it is not placed into the grid; if not, it is placed into the grid. Each cell is then judged to be a target cell or not: target cells participate in target clustering, and non-target cells do not.
Optionally, for near point clouds, the height difference between the highest and lowest point in a grid cell is calculated; the cell is determined to be a target cell if the difference is greater than a first preset threshold, and a non-target cell otherwise;
and for far point clouds, whether the highest point in the cell is higher than a second preset threshold is judged: if so, the cell is a target cell, otherwise a non-target cell.
Optionally, the specific steps of target clustering include:
and (3) clustering the targets by adopting a flooding method, judging whether the target grid meets preset constraint conditions, if not, deleting the targets, and if so, outputting and visualizing the targets.
Optionally, each UDP packet is composed of several blocks, and each block contains the data of all laser transmitters of the laser radar at the same instant.
Optionally, before the packets are passed to the raw point cloud driver, a parameter file corresponding to the laser radar is loaded for correction.
In a second aspect, the present invention provides an environment sensing system for an intelligent driving vehicle, comprising:
laser radars respectively mounted on the top of the vehicle and on its left and right sides;
an RTK device mounted on the vehicle for outputting centimeter-level high-precision vehicle position information;
an IMU mounted on the vehicle for detecting inertial measurement information of the vehicle;
a high-precision map for outputting road environment information;
and a controller connected to each laser radar, the RTK device, the IMU, and the high-precision map, respectively, the controller being configured to perform the steps of the environment sensing method for an intelligent driving vehicle according to the present invention.
In a third aspect, the invention provides a vehicle adopting the above environment sensing system for an intelligent driving vehicle.
In a fourth aspect, the present invention provides a storage medium storing a computer readable program which, when called, is able to perform the steps of the environment sensing method for an intelligent driving vehicle according to the present invention.
The invention has the following advantages: when the high-precision map and the RTK signal are lost, road edge detection and boundary line fitting effectively divide the vehicle's surroundings into an in-road subspace (including the road edges) and an out-of-road subspace, allowing the sensing module to establish the region of interest more accurately. This yields an environment sensing system based on laser radar, RTK, and a high-precision map that is extensible, can work independently, and has strong practicability. The invention also avoids filtering out traffic participants in other lanes at intersections.
Drawings
FIG. 1 is a flowchart of the present embodiment;
FIG. 2 is a flowchart of creating a target grid map in the present embodiment;
FIG. 3 is a flowchart of object clustering in the present embodiment;
FIG. 4 is a functional block diagram of the present embodiment;
FIG. 5 is a schematic view showing the installation positions of three lidar devices according to the present embodiment;
in the figure, 1-main radar, 2-side radar, 3-IMU, 4-controller, 5-high precision map, 6-RTK equipment and 7-vehicle.
Detailed Description
The invention will be further explained with reference to the drawings.
As shown in figs. 4 and 5, in this embodiment an environment sensing system for an intelligent driving vehicle includes: laser radars respectively installed on the top of the vehicle 7 and on its left and right sides; an RTK device 6 mounted on the vehicle for outputting centimeter-level high-precision vehicle position information; an IMU 3 mounted on the vehicle for detecting inertial measurement information of the vehicle; a high-precision map 5 for outputting road environment information; and a controller 4 connected respectively to each laser radar, the IMU 3, the RTK device 6, and the high-precision map 5.
The laser radar installed on the roof is the main radar 1; its measuring range reaches 150 m at 10% reflectivity. Its small vertical and horizontal angular resolutions also depict targets better: more point cloud returns hit the target object, the scan lines are denser, and a target ahead is found earlier, leaving more time for planning and control. However, because the main radar 1 is mounted vertically and high above the ground, it has a large blind area around the ego vehicle. Laser radars with relatively fewer beams are therefore installed lower on both sides of the vehicle, with a mounting inclination set as needed, which effectively reduces the blind area. The laser radars installed on the left and right sides of the vehicle 7 are referred to as side radars 2. The side radars 2 sit on both sides of the vehicle front or below the rear-view mirrors; a rear radar can be installed as required, and the system framework provides preset interfaces so that data from newly added radars can be integrated quickly. The RTK device 6 comprises an RTK antenna and a receiver, mounted mainly at the rear of the vehicle body; the guiding principles are minimal interference and minimal impact on the vehicle's appearance.
In this embodiment, the raw data of the multiple radars must be converted by a driver into the structure the program expects. The raw laser radar data is sent in the form of UDP packets; each UDP packet is composed of several blocks, and each block contains the data of all laser transmitters at the same instant. In the computing unit, a dedicated radar data driver stores the packets as single lines and single frames. In addition, the transformation from the point cloud data of the different radars to the vehicle body coordinate system is obtained from the original radar calibration information, corrected using the inertial navigation unit data, and the radar point cloud data is then transmitted to the sensing software at a frequency higher than the point cloud scanning frequency. The software's main input interface comprises: the single-line, single-frame point cloud data produced by the point cloud driver, and the radar-to-body transformation matrices produced by the conversion driver.
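As a concrete illustration of the packet-to-point-cloud step, the sketch below splits a UDP payload into blocks and decodes one reading per laser transmitter. The block layout (a 4-byte header carrying the azimuth, then 32 channels of 2-byte distance plus 1-byte intensity, 2 mm distance units) is a hypothetical format chosen for illustration, not the actual format of the radars used in the patent.

```python
import struct

CHANNELS = 32                    # hypothetical channel count
BLOCK_SIZE = 4 + CHANNELS * 3    # 2-byte flag + 2-byte azimuth + 3 bytes per channel

def parse_udp_packet(payload: bytes):
    """Split a lidar UDP payload into blocks; each block holds one
    simultaneous reading from every laser transmitter (hypothetical layout)."""
    points = []
    for b in range(len(payload) // BLOCK_SIZE):
        block = payload[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE]
        # azimuth stored in hundredths of a degree (assumed convention)
        azimuth = struct.unpack_from("<H", block, 2)[0] / 100.0
        for ch in range(CHANNELS):
            dist_raw, intensity = struct.unpack_from("<HB", block, 4 + ch * 3)
            points.append((azimuth, ch, dist_raw * 0.002, intensity))  # assumed 2 mm units
    return points
```

A real driver would additionally apply the per-channel elevation angles from the loaded parameter file and handle multi-return modes.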
Because the received high-precision-map chain-facility information is a discrete sequence, a curve is fitted through it by interpolation; this curve is the dividing line between the road space and the non-road space. Combined with the ego-vehicle positioning from the RTK device 6, a reflection point located in the non-road space is filtered out, while a reflection point located in the road space is kept for subsequent processing. This greatly reduces the false detections and computational cost caused by point cloud targets outside the road, reduces the latency of the perception system, and improves the user experience.
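A minimal sketch of this in-road test follows, with piecewise-linear interpolation standing in for the spline fit mentioned in the text. The boundary point lists and the coordinate convention (x longitudinal, y lateral, left boundary at larger y) are assumptions for illustration.

```python
def interp_boundary(xs, ys, qx):
    """Piecewise-linear interpolation of a road boundary through discrete
    chain-facility points (a stand-in for the spline fit in the text)."""
    for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x1 <= qx <= x2:
            t = (qx - x1) / (x2 - x1)
            return y1 + t * (y2 - y1)
    raise ValueError("query outside boundary range")

def in_road_space(point, left, right):
    """Keep a reflection point only if it lies between the fitted left and
    right boundary lines at its longitudinal position."""
    x, y = point
    y_l = interp_boundary([p[0] for p in left], [p[1] for p in left], x)
    y_r = interp_boundary([p[0] for p in right], [p[1] for p in right], x)
    return y_r <= y <= y_l
```

Points for which `in_road_space` is false would be dropped before grid-map construction.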
In this embodiment, after the data has been converted to the vehicle body coordinate system, it is filtered using prior knowledge to reduce subsequent computation and the load on the grid map. The relevant grid map parameters can be modified through configuration files according to the actual scene, better matching scene to program and computational cost to efficiency. The data from the different radars is projected onto the grid plane (the XY plane), and every point is placed into the cell matching its (x, y) position; during this process a denoising module deletes abnormal points to reduce the interference of noise on the sensing system. After projection, because the point cloud is sparse at long range, different strategies are used to determine the attributes of grid cells at different distances, which effectively reduces the probability of false and missed detections; building an effective grid map strongly influences the effect and efficiency of the subsequent target clustering.
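The projection onto the XY grid plane can be sketched as follows; the 0.2 m resolution is an assumed value of the kind the configuration file would supply, and denoising is omitted for brevity.

```python
import math
from collections import defaultdict

def project_to_grid(points, resolution=0.2):
    """Drop each (x, y, z) point into its XY grid cell, keyed by integer
    cell index, so later stages can evaluate per-cell height statistics.
    The 0.2 m resolution is an assumed, configurable value."""
    grid = defaultdict(list)
    for x, y, z in points:
        key = (math.floor(x / resolution), math.floor(y / resolution))
        grid[key].append(z)   # keep the z values for height-based attributes
    return grid
```

Each cell's list of z values feeds the near/far attribute rules described below.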
In this embodiment, at program start the environment sensing system fits a curve by spline interpolation through the discrete chain-facility points within 200 meters ahead from the high-precision map 5, forming the dividing line between road space and non-road space; it then judges from the coordinates of each object reflection point whether the point lies in the road space, filtering out points outside it and processing points inside the road further. At intersections, however, this method would filter out the traffic participants of the other lanes and bring a large risk of missed detection. For this situation an intersection template must be created: the system determines whether it is at an intersection by line detection, then builds an electronic fence (a closed polygonal area formed by multiple points) from the high-precision map information, and filters out a reflection point when it lies inside the electronic fence. Because this is essentially a search process, too large a search area degrades the algorithm's real-time performance, so the electronic fence should be as compact and accurate as possible. When the RTK signal is lost and accurate positioning is unavailable, the road boundary line is instead fitted from the road edges and medians detected within 100 meters ahead of the vehicle.
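The electronic-fence membership check is a point-in-polygon query. The patent does not specify the test used; a standard ray-casting sketch is shown below, with hypothetical fence vertices.

```python
def inside_fence(point, fence):
    """Ray-casting test: is a reflection point inside the closed polygon
    ('electronic fence') built from map points? `fence` is a list of
    (x, y) vertices in order."""
    x, y = point
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                          # crossing is to the right
                inside = not inside
    return inside
```

Keeping the fence polygon small, as the text advises, bounds the number of edge tests per point.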
For point clouds within the road, target detection traverses and clusters the whole grid map. Considering that the point cloud is sparse at long range, the clustering threshold is adapted to distance, which effectively reduces over-segmentation and under-segmentation. The clustered targets are then screened, and targets that do not meet the conditions are filtered out; the screening conditions can be modified flexibly through a configuration file to meet the needs of the scene. In the target visualization stage, the visualization configuration parameters control the display of the two-dimensional rectangular box, the original two-dimensional convex hull, and the three-dimensional box.
As shown in fig. 1, in this embodiment, a method for sensing an environment of an intelligent driving vehicle includes:
under the condition that power supply meets conditions and the self state of the main radar 1 and the side radar 2 is normal, a point cloud UDP packet can be generated according to a set strategy, factory-set parameters of all radars are different, corresponding parameter files need to be loaded for correction, then the point cloud driving modules are accessed together, and finally a predefined laser radar point cloud data format is obtained. In the vehicle motion process, laser radar constantly acquires the information of surrounding environment according to the preset strategy, the expression is a round of point cloud, but because radar itself is along with the motion of automobile body, make the point of same frame point cloud the inside gather in different positions, also be exactly the point cloud distortion, point cloud distortion brings negative effects to subsequent processing precision, consequently need combine IMU 3's information to try to get out the conversion relation of the radar position and the initial position at each point cloud place, and then all the points of each frame are converted to the initial radar position, this is exactly the point cloud distortion compensation. And the point cloud after distortion compensation is converted from a radar coordinate system to a vehicle body coordinate system through a calibration file to obtain the point cloud under the vehicle body coordinate system. 
To reduce computation and improve efficiency, the real-time RTK positioning information and high-precision map information are received to judge whether an RTK signal is present. If so, the chain-facility information within 200 m ahead of the vehicle is pushed into a queue, and the system judges whether it is at an intersection. If it is, the chain-facility information is fitted to a roadside boundary line by spline interpolation and, combined with the ego-vehicle positioning given by RTK, each point is judged to be inside or outside the lane boundary: points inside are retained and used to build the grid map, and points outside are filtered. If the vehicle is not at an intersection, the points are retained and the grid map is built. If no RTK signal is present, the road medians and road edges around the ego vehicle are detected, a road boundary line is fitted by spline interpolation, and each point is judged against the lane boundary in the same way. After the complete grid map is obtained, target clustering and visualization can be performed according to the grid occupancy state; this mainly involves choosing the clustering strategy, the clustering thresholds, and the content to visualize.
As shown in fig. 2, in this embodiment the grid map is built by setting its size and resolution and, combined with the position of the vehicle within the grid, determining the grid region, i.e., the forward, backward, and lateral detection distances; these parameters can be set flexibly for different scenes to balance detection effect and efficiency. Some of the laser radar points in vehicle body coordinates are noise or outliers, so the points are screened by a denoising module and only non-noise points are placed into grid cells. For the attribute of each cell: at close range, the height difference between the highest and lowest point in the cell is calculated, and if it exceeds a threshold the cell is a target cell, otherwise it is not and is excluded from clustering; at long range, where the point cloud is sparse, the absolute height is used instead: if the highest point in the cell exceeds a threshold the cell is a target cell, otherwise it is not.
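The two grid-attribute strategies above can be sketched as a single function; the 30 m near/far split and both thresholds are assumed example values, not the patent's presets.

```python
NEAR_RANGE = 30.0          # metres; assumed split between the two strategies
HEIGHT_DIFF_THRESH = 0.2   # assumed stand-in for the first preset threshold
ABS_HEIGHT_THRESH = 0.3    # assumed stand-in for the second preset threshold

def is_target_grid(cell_range, z_values):
    """Near cells: target if the max-min height spread in the cell exceeds
    a threshold. Far cells (sparse returns): fall back to the absolute
    height of the highest point."""
    if cell_range <= NEAR_RANGE:
        return max(z_values) - min(z_values) > HEIGHT_DIFF_THRESH
    return max(z_values) > ABS_HEIGHT_THRESH
```

In practice these constants would come from the configuration file, tuned per scene.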
As shown in fig. 3, in this embodiment target clustering is performed after the grid map and cell attributes are established, using a flood-fill method: far target cells use a large clustering threshold and near target cells a small one, with the specific thresholds determined experimentally under this general principle, which effectively reduces over-segmentation and under-segmentation. Because detection relying on height information alone can be inaccurate, constraint conditions for abnormal targets are determined from experience or experiment and used to delete them. After this process, a series of detected targets and their related information, including size and speed, is obtained. This information can be passed to downstream modules and visualized; the visualization module can output a two-dimensional convex hull, a two-dimensional rectangular box, or a three-dimensional convex hull and rectangular box.
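A sketch of the flood-fill clustering over target cells follows; widening the neighbourhood via `reach` plays the role of the larger clustering threshold for far, sparse cells (the parameter name and default are illustrative).

```python
from collections import deque

def flood_cluster(target_cells, reach=1):
    """Flood-fill (BFS) over occupied grid cells: cells within `reach`
    steps of each other in x and y join the same cluster."""
    cells = set(target_cells)
    clusters = []
    while cells:
        seed = cells.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            cx, cy = queue.popleft()
            for dx in range(-reach, reach + 1):      # scan the neighbourhood
                for dy in range(-reach, reach + 1):
                    nb = (cx + dx, cy + dy)
                    if nb in cells:
                        cells.remove(nb)             # claim before queueing
                        queue.append(nb)
                        cluster.append(nb)
        clusters.append(cluster)
    return clusters
```

Each resulting cluster would then be screened against the size/speed constraint conditions before output.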
Fig. 5 shows the installation positions of the main radar 1 and the two side radars 2.
While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms encompassed by the claims. The words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the disclosure. As previously described, features of the various embodiments may be combined to form further embodiments of the invention that may not be explicitly described or illustrated. While various embodiments may have been described as providing advantages or being advantageous over other embodiments or prior art implementations in terms of one or more desired characteristics, those of ordinary skill in the art will recognize that one or more features or characteristics may be compromised to achieve desired overall system attributes, depending on the particular application and implementation. These attributes may include, but are not limited to, cost, strength, durability, life cycle cost, appearance, size, manufacturability, functional robustness, and the like. As such, embodiments described as less desirable in one or more characteristics than other embodiments or prior art implementations are not outside the scope of the present disclosure and may be desirable for particular applications.

Claims (9)

1. An environment sensing method for an intelligent driving vehicle, comprising the steps of:
acquiring a point cloud UDP packet output by a laser radar, and passing it to a raw point cloud driver for processing to obtain a structured point cloud data format;
acquiring IMU information, deriving from it the transformation between the radar pose at which each point was captured and the initial pose, and transforming all points of each frame back to the initial radar pose, i.e., performing point cloud distortion compensation;
combining the distortion-compensated point cloud with a radar calibration file to convert it from the radar coordinate system to the vehicle body coordinate system, obtaining the point cloud in the vehicle body coordinate system;
acquiring RTK positioning information and high-precision map information, and judging whether an RTK signal is present;
if an RTK signal is present, judging whether the vehicle (7) is located at an intersection; if it is, fitting the chain-facility information to a roadside boundary line by spline interpolation and, combined with the ego-vehicle positioning given by RTK (real-time kinematic), judging whether each point lies within the lane boundary: if so, retaining it and building the grid map, otherwise filtering it out; if the vehicle is not at an intersection, retaining the points and building the grid map;
if no RTK signal is present, fitting a roadside boundary line by spline interpolation from the road median and road edges detected by the ego-vehicle's sensors, and judging whether each point lies within the lane boundary: if so, retaining it and building the grid map, otherwise filtering it out;
after the complete grid map is obtained, performing target clustering and visualization according to the grid occupancy state.
2. The environment sensing method of claim 1, wherein the grid map is established by:
setting the size and resolution of the grid map, and, combined with the position of the vehicle (7) within the grid, determining the grid region, i.e., the forward, backward, and lateral detection distances;
judging whether each laser radar point in vehicle body coordinates is a noise point: if so, it is not placed into the grid; if not, it is placed into the grid. Each cell is then judged to be a target cell or not: target cells participate in target clustering, and non-target cells do not.
3. The environment sensing method of claim 2, wherein: for near point clouds, the height difference between the highest and lowest point in a grid cell is calculated, and the cell is determined to be a target cell if the difference is greater than a first preset threshold, and a non-target cell otherwise;
and for far point clouds, whether the highest point in the cell is higher than a second preset threshold is judged: if so, the cell is a target cell, otherwise a non-target cell.
4. The environment sensing method of any one of claims 1 to 3, wherein the target clustering comprises:
clustering targets using a flood-fill method, judging whether each clustered target meets preset constraint conditions, deleting the target if not, and outputting and visualizing it if so.
5. The environment sensing method of claim 4, wherein each UDP packet is composed of several blocks, and each block contains the data of all laser transmitters of the laser radar at the same instant.
6. The environment sensing method of claim 5, wherein a parameter file corresponding to the laser radar is loaded for correction before the packets are passed to the raw point cloud driver.
7. An environment sensing system for an intelligent driving vehicle, comprising:
laser radars respectively arranged on the top of the vehicle (7) and the left side and the right side of the vehicle (7);
an RTK device (6) mounted on the vehicle for outputting centimeter-level high-precision vehicle position information;
an IMU (3) mounted on the vehicle for detecting inertial measurement information of the vehicle (7);
a high-precision map (5) for outputting road environment information;
and a controller (4), the controller (4) being connected to each laser radar, the RTK device (6), the IMU (3) and the high-precision map (5), respectively, the controller (4) being configured to perform the steps of the environment sensing method of the intelligent driving vehicle of any one of claims 1 to 6.
8. A vehicle, characterized by comprising the environment sensing system of the intelligent driving vehicle of claim 7.
9. A storage medium, characterized in that: a computer readable program is stored thereon which, when invoked, performs the steps of the environment sensing method of the intelligent driving vehicle of any one of claims 1 to 6.
CN202111100769.7A 2021-09-18 2021-09-18 Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium Pending CN113734176A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111100769.7A CN113734176A (en) 2021-09-18 2021-09-18 Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN113734176A true CN113734176A (en) 2021-12-03

Family

ID=78740002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111100769.7A Pending CN113734176A (en) 2021-09-18 2021-09-18 Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN113734176A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573272A (en) * 2017-12-15 2018-09-25 蔚来汽车有限公司 Track approximating method
CN111273305A (en) * 2020-02-18 2020-06-12 中国科学院合肥物质科学研究院 Multi-sensor fusion road extraction and indexing method based on global and local grid maps
CN111667545A (en) * 2020-05-07 2020-09-15 东软睿驰汽车技术(沈阳)有限公司 High-precision map generation method and device, electronic equipment and storage medium
CN111985322A (en) * 2020-07-14 2020-11-24 西安理工大学 Road environment element sensing method based on laser radar
CN112666535A (en) * 2021-01-12 2021-04-16 重庆长安汽车股份有限公司 Environment sensing method and system based on multi-radar data fusion

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114877883A (en) * 2022-03-22 2022-08-09 武汉大学 Vehicle positioning method and system considering communication delay under cooperative vehicle and road environment
CN114877883B (en) * 2022-03-22 2024-04-26 武汉大学 Vehicle positioning method and system considering communication delay in vehicle-road cooperative environment
CN116878487A (en) * 2023-09-07 2023-10-13 河北全道科技有限公司 Method and device for establishing automatic driving map, vehicle and server
CN116878487B (en) * 2023-09-07 2024-01-19 河北全道科技有限公司 Method and device for establishing automatic driving map, vehicle and server

Similar Documents

Publication Publication Date Title
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
US20220277557A1 (en) Target detection method based on fusion of vision, lidar, and millimeter wave radar
CN110531376B (en) Obstacle detection and tracking method for port unmanned vehicle
CN111192295B (en) Target detection and tracking method, apparatus, and computer-readable storage medium
EP3792660B1 (en) Method, apparatus and system for measuring distance
CN110705458B (en) Boundary detection method and device
CN112666535A (en) Environment sensing method and system based on multi-radar data fusion
CN110782465B (en) Ground segmentation method and device based on laser radar and storage medium
WO2021072710A1 (en) Point cloud fusion method and system for moving object, and computer storage medium
CN113734176A (en) Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium
CN112560800B (en) Road edge detection method, device and storage medium
CN112927309B (en) Vehicle-mounted camera calibration method and device, vehicle-mounted camera and storage medium
CN113850102A (en) Vehicle-mounted vision detection method and system based on millimeter wave radar assistance
CN112789521B (en) Method and device for determining perception area, storage medium and vehicle
CN113763262A (en) Application method of vehicle body filtering technology in point cloud data of automatic driving mine truck
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
CN115965847A (en) Three-dimensional target detection method and system based on multi-modal feature fusion under cross view angle
CN116129553A (en) Fusion sensing method and system based on multi-source vehicle-mounted equipment
US20230048222A1 (en) Information processing apparatus, sensing apparatus, mobile object, method for processing information, and information processing system
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal
CN112651405A (en) Target detection method and device
Jaspers et al. Fast and robust b-spline terrain estimation for off-road navigation with stereo vision
EP3330893A1 (en) Information processing device, information processing method, and carrier means
WO2021132227A1 (en) Information processing device, sensing device, moving body, and information processing method
CN111414848B (en) Full-class 3D obstacle detection method, system and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination