CN114521001A - Network bandwidth self-adaptive automatic driving characteristic data cooperative sensing system - Google Patents

Network bandwidth self-adaptive automatic driving characteristic data cooperative sensing system

Info

Publication number
CN114521001A
Authority
CN
China
Prior art keywords
data
feature
perception
sensing
target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111270419.5A
Other languages
Chinese (zh)
Inventor
李克秋
李玉鹏
周晓波
谢琦
张朝昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202111270419.5A
Publication of CN114521001A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/06 Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/021 Traffic management, e.g. flow control or congestion control in wireless networks with changing topologies, e.g. ad-hoc networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/025 Services making use of location information using location based information parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a network-bandwidth-adaptive cooperative sensing system for autonomous driving feature data, which operates as follows: S1, the sensing data sending unit transmits the acquired 3D point cloud sensing data to a first target detection task module; S2, the first target detection task module processes the 3D point cloud data through a feature extraction layer and a feature segmentation layer to generate segmented feature sensing data; S3, the second target detection task module applies coordinate rotation and displacement calibration to the received partial feature sensing data through a data registration layer to generate registered feature sensing data; S4, the second target detection task module fuses the registered feature sensing data with its own sensing data through a point cloud feature fusion layer to generate fused sensing data; S5, the second target detection task module processes the fused sensing data through a classification and regression layer and outputs target data. The invention expands the vehicle sensing range, improves sensing precision, and adapts to dynamic changes in wireless bandwidth.

Description

Network bandwidth self-adaptive automatic driving characteristic data cooperative sensing system
Technical Field
The invention relates to the technical field of wireless communication for the Internet of Vehicles, and in particular to a network-bandwidth-adaptive cooperative sensing system for autonomous driving feature data.
Background
It is essential for an autonomous vehicle to accurately sense the surrounding traffic environment in real time. At present, an autonomous vehicle's perception of its surroundings relies mainly on the advanced sensors mounted on the vehicle, such as cameras, millimeter-wave radar, and lidar. However, any sensor may fail to perceive correctly due to device damage, occlusion by road obstacles, a limited sensing range, or adverse weather, so the standalone sensing capability of a single vehicle falls far short of the extremely high safety requirements of autonomous driving. With the development of wireless communication technology, it has been proposed that vehicles share sensing data over V2V wireless links to expand their sensing range; this technique is called "cooperative sensing".
Existing work on cooperative sensing falls into three categories according to the type of data shared: cooperative sensing based on raw data, on feature data, and on result data. In raw-data cooperative sensing, unprocessed sensor data is shared between vehicles; this retains the most information and provides the receiving vehicle with the most complete sensing data, giving the largest improvement in the receiver's sensing capability, but the large volume of raw data places great pressure on the wireless channel. In result-data cooperative sensing, only the detection results produced by a target detection model are shared; the data volume is very small and places no burden on wireless communication, but it provides the receiver with only limited additional information. Weighing the advantages and disadvantages of these two modes, some work has proposed feature-data cooperative sensing, which trades off data volume against sensing effect by sharing partially processed feature data. However, existing feature-data cooperative sensing transmits a fixed range of sensing data and does not consider changes in the wireless channel. In a real environment the wireless channel changes from moment to moment, so if the data shared during cooperative sensing is fixed, it cannot adapt to network changes, transmissions fail, and the vehicle's perception of its surroundings suffers.
Disclosure of Invention
Aiming at the problem that existing cooperative sensing methods cannot adapt to dynamic changes in wireless channel bandwidth, the invention provides a bandwidth-adaptive cooperative sensing method for autonomous driving feature data, targeting 3D target detection as a typical application in a sensing system. The invention expands the vehicle sensing range and improves sensing precision while adapting to dynamic changes in wireless bandwidth and guaranteeing the real-time performance of target detection.
The invention adopts the following technical scheme:
a cooperative sensing system of network bandwidth self-adaptive automatic driving characteristic data comprises a sensing data sending unit, a first target detection task module, a second target detection task module and a sensing data receiving unit; the sensing data sending unit transmits sensing characteristic data to the sensing data receiving unit through a V2V wireless data channel; the method comprises the following steps:
S1, the sensing data sending unit transmits the acquired 3D point cloud sensing data to the first target detection task module;
S2, the first target detection task module processes the 3D point cloud data through a feature extraction layer and a feature segmentation layer to generate segmented feature sensing data;
S3, the second target detection task module applies coordinate rotation and displacement calibration to the received partial feature sensing data through a data registration layer to generate registered feature sensing data;
S4, the second target detection task module fuses the registered feature sensing data with its own sensing data through a point cloud feature fusion layer to generate fused sensing data;
S5, the second target detection task module processes the fused sensing data through a classification and regression layer and outputs target data.
Further, the point cloud feature segmentation layer processes the 3D point cloud sensing data with an angle-based or point-density-based segmentation method to reduce the amount of sensing feature data that must be transmitted.
Further, the data registration layer generates registered sensing data by calibrating the coordinate rotation and displacement of the sensing feature data:
301. a rotation matrix R is computed from the sensing feature data by the following formula:
R = R_z(θ_yaw) · R_y(θ_pitch) · R_x(θ_roll)
where θ_yaw, θ_pitch, and θ_roll are the differences of the yaw, pitch, and roll angles between the two vehicles, respectively;
302. the coordinate rotation and displacement of the 3D point cloud sensing data are calibrated according to the following formula:
(X'_s, Y'_s, Z'_s)^T = R · (X_s, Y_s, Z_s)^T + (Δd_x, Δd_y, Δd_z)^T
where (X_s, Y_s, Z_s) and (X'_s, Y'_s, Z'_s) denote the coordinates of the sender's data before and after registration, and (Δd_x, Δd_y, Δd_z) is the displacement difference between the two vehicles.
Advantageous effects
1. Through an end-to-end cooperative sensing framework for autonomous driving feature data, the invention supports sharing and fusion at the feature data level, expanding the vehicle sensing range and improving vehicle sensing precision.
2. Through a bandwidth-adaptive data segmentation algorithm, the invention provides two sensing data segmentation schemes and, by adaptively adjusting the sensing data shared between vehicles, achieves optimal sensing precision while guaranteeing real-time performance. Meanwhile, through a V2V feature data cooperative sensing strategy that adapts to wireless channel changes, the invention adjusts the data content shared between vehicles according to network conditions by means of sensing data segmentation, thereby expanding the vehicle sensing range, improving sensing precision, and guaranteeing the real-time performance of target detection.
3. The method is applicable to various 3D target detection models and supports intelligent connected vehicles with different computing capabilities.
Drawings
FIG. 1 is a flow chart of point cloud based 3D object detection;
FIG. 2 is a flow diagram of a feature data collaborative awareness system;
FIG. 3 is a schematic diagram of feature data segmentation;
FIG. 4 is a schematic view of feature data registration.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is discussed in detail below with reference to the accompanying drawings and embodiments, which are illustrative only, do not limit the invention, and do not restrict its scope. Targeting the first and second target detection task modules of the autonomous driving perception task, the invention provides a V2V bandwidth-adaptive feature data cooperative sensing system as shown in FIG. 1; as shown in FIG. 2, the cooperative sensing system comprises a sensing data sending unit, a first target detection task module, a second target detection task module, and a sensing data receiving unit; the sensing data sending unit transmits sensing feature data to the sensing data receiving unit through a V2V wireless data channel; the method comprises the following steps:
S1, the sensing data sending unit transmits the acquired 3D point cloud sensing data to the first target detection task module;
S2, the first target detection task module processes the 3D point cloud data through a feature extraction layer and a feature segmentation layer to generate segmented feature sensing data;
S3, the second target detection task module applies coordinate rotation and displacement calibration to the received partial feature sensing data through a data registration layer to generate registered feature sensing data;
S4, the second target detection task module fuses the registered feature sensing data with its own sensing data through a point cloud feature fusion layer to generate fused sensing data;
S5, the second target detection task module processes the fused sensing data through a classification and regression layer and outputs target data.
In the technical scheme of the invention, the sending vehicle serves as the sensing data sending unit and the receiving vehicle as the sensing data receiving unit, and sensing data is transmitted between the two through V2V wireless communication technologies such as DSRC and LTE-4G. The sensing data of the first target detection task unit in the autonomous driving perception task is the feature data obtained by passing the point cloud of the surrounding environment, acquired by lidar, through the neural network feature extraction layer; owing to this processing, the feature data is far smaller in volume than the raw point cloud while still carrying the information required by the downstream target detection task. Sharing environment-sensing feature data between vehicles therefore both reduces the transmission load on the intelligent connected vehicle network and preserves the target detection precision after sensing data fusion.
The sending vehicle first computes, from the current channel conditions, the ratio of feature data to transmit, and segments the feature data accordingly; segmentation can be angle-based or point-density-based, as shown in FIG. 3, where α denotes the proportion of feature data the front (sending) vehicle can transmit and 1−α the proportion it cannot. Because the environmental information ahead matters most to driving, the angle-based method transmits the feature data in the central angles of the front vehicle's view first. Meanwhile, by the nature of the lidar sensor, the closer the distance, the denser the laser points and the higher the sensor precision, so the point-density-based method transmits the feature data closest to the front vehicle first. Note that each vehicle can still run the full 3D target detection pipeline independently; segmenting and transmitting data does not affect the vehicle's own target detection.
The receiving vehicle first registers the feature data received from the sender. Point cloud data collected by lidar is recorded as quadruples, and the coordinate values of each point are in the coordinate system of the respective lidar; feature data produced by feature extraction retains the coordinate information of the original point cloud, so the coordinate systems of the sender's and receiver's feature data must be registered. The registered feature data is then fused with the receiver's feature data, and the fused feature data undergoes subsequent classification and regression to obtain the final cooperative sensing result. Compared with raw-level, result-level, and fully shared feature-level cooperative sensing, this approach adapts better to changes in network bandwidth.
The practical application process of the invention is as follows:
Step 1: the sending vehicle computes, from the current channel conditions, the proportion of the next frame of feature data that can be transmitted under the current bandwidth. The problem is modeled as a linear program whose objective is to maximize the final cooperative sensing detection precision under what the network conditions allow, as shown in formula (1), with the precondition that target detection remains real-time, i.e., the frame rate matches the lidar sampling rate.
max_α  α·f_{s,m} + f_{r,m}    (1)
s.t.  t_e2e ≤ Δt,
      0 ≤ α ≤ 1,
where t_e2e denotes the end-to-end delay of the entire cooperative sensing system, i.e., the time from the sender's lidar acquiring the point cloud data until the receiving vehicle obtains the fused target detection result; t_e2e is computed as shown in formula (2).
t_e2e = max{t_s1 + t_s2 + t_feature, t_r1 + t_r2} + t_r3    (2)
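This step-1 computation can be sketched as follows. It is a minimal illustration under stated assumptions: the split of t_e2e into the named terms follows formula (2), while the transmission model t_feature = α·8·feature_bytes/bandwidth and the function name best_alpha are assumptions for illustration, not the patent's implementation. Since detection precision in formula (1) grows with α, the largest feasible α is optimal, so no general LP solver is needed here.

```python
def best_alpha(bandwidth_bps, feature_bytes,
               t_s1, t_s2, t_r1, t_r2, t_r3, delta_t):
    """Largest transmit ratio alpha in [0, 1] that keeps the end-to-end delay
    t_e2e = max(t_s1 + t_s2 + t_feature, t_r1 + t_r2) + t_r3   (formula (2))
    within the lidar frame interval delta_t (constraint of formula (1)).
    Assumes t_feature = alpha * 8 * feature_bytes / bandwidth_bps, in seconds."""
    if t_r1 + t_r2 + t_r3 > delta_t:
        return 0.0  # the receiver's own pipeline already misses the deadline
    budget = delta_t - t_r3 - (t_s1 + t_s2)  # time available for transmission
    if budget <= 0:
        return 0.0
    return min(1.0, budget * bandwidth_bps / (8.0 * feature_bytes))
```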
Different target detection task models process the radar point cloud differently, so the sizes of the extracted feature data differ; consequently the feature data transmission delay differs, as does the target detection precision after feature data fusion. Table 1 lists the relevant data for four detection models.
Model                          SECOND    PointPillar    PartA2    PV-RCNN
Average Precision (%)           63.68      55.28         70.30     67.49
Average Processing Time (ms)    19.46      13.50         41.34     64.74
Ratio to the Input Data          0.5        0.3           0.5       0.5
TABLE 1
Step 2: the sending vehicle segments the feature data according to the computed data proportion α and shares the sensing data at the corresponding stage. The segmentation can be done in two ways, angle-based and point-density-based, as shown in FIG. 3 and sketched in the code below. Under angle-based segmentation, because the view ahead of the vehicle is relatively important, the feature data ahead of the vehicle is transmitted first; under density-based segmentation, because points are denser and the sensor more accurate at closer range, the feature data closest to the sending vehicle is transmitted first.
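As an illustration of the two segmentation policies, here is a minimal sketch. It assumes the feature map has been flattened to an (N, C) array `features` with a matching (N, 2) array `xy` of bird's-eye-view cell coordinates centered on the sender; the function names and this flat layout are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def split_by_angle(features, xy, alpha):
    """Keep the fraction alpha of feature cells whose bearing is closest to
    straight ahead (the sender's +x axis), transmitting those first."""
    bearing = np.abs(np.arctan2(xy[:, 1], xy[:, 0]))  # 0 rad = dead ahead
    keep = np.sort(np.argsort(bearing)[: int(alpha * len(features))])
    return features[keep], xy[keep]

def split_by_density(features, xy, alpha):
    """Keep the fraction alpha of feature cells nearest to the sender, where
    lidar points are densest and the sensor is most accurate."""
    distance = np.linalg.norm(xy, axis=1)
    keep = np.sort(np.argsort(distance)[: int(alpha * len(features))])
    return features[keep], xy[keep]
```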
Step 3: the receiver registers the received sensing data (as shown in fig. 2): it computes a rotation matrix from the GPS and IMU data of the two vehicles and unifies their coordinate systems. The rotation matrix R is computed by formula (3), where θ_yaw, θ_pitch, and θ_roll are the differences of the yaw, pitch, and roll angles between the two vehicles, respectively.
R = R_z(θ_yaw) · R_y(θ_pitch) · R_x(θ_roll)    (3)
The rotation and displacement of all coordinates of the sender's data are then calibrated by formula (4):
(X'_s, Y'_s, Z'_s)^T = R · (X_s, Y_s, Z_s)^T + (Δd_x, Δd_y, Δd_z)^T    (4)
where (X_s, Y_s, Z_s) and (X'_s, Y'_s, Z'_s) denote the coordinates of the sender's data before and after registration, and (Δd_x, Δd_y, Δd_z) is the displacement difference between the two vehicles.
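Formulas (3) and (4) can be sketched as follows; this is a minimal illustration assuming the yaw/pitch/roll differences from the GPS/IMU are given in radians and the sender's coordinates as an (N, 3) array, with illustrative function names.

```python
import numpy as np

def rotation_matrix(d_yaw, d_pitch, d_roll):
    """Formula (3): R = R_z(yaw) @ R_y(pitch) @ R_x(roll)."""
    cy, sy = np.cos(d_yaw), np.sin(d_yaw)
    cp, sp = np.cos(d_pitch), np.sin(d_pitch)
    cr, sr = np.cos(d_roll), np.sin(d_roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def register(sender_xyz, angle_diffs, displacement_diff):
    """Formula (4): rotate every sender coordinate into the receiver's frame,
    then translate by the displacement difference between the two vehicles."""
    R = rotation_matrix(*angle_diffs)
    return sender_xyz @ R.T + np.asarray(displacement_diff)
```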
Step 4: the calibrated feature data of the front vehicle is fused with the feature data the receiving vehicle extracted from its own point cloud; following the characteristics of the neural network, the fusion performs a max-pooling operation on the feature data at corresponding positions. The feature data fusion is expressed by formula (5):
P_f = max{P_r, P'_s}    (5)
where P_f, P_r, and P'_s denote the fused feature data, the receiver's feature data, and the registered sender's feature data, respectively.
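Formula (5) amounts to an element-wise max over aligned feature maps; a one-line sketch follows. Zero-filling the cells the sender did not transmit is an assumption about how partial maps are aligned (reasonable for non-negative post-ReLU features), not something the text specifies.

```python
import numpy as np

def fuse(p_receiver, p_sender_registered):
    """Formula (5): P_f = max{P_r, P'_s}, element-wise max pooling of the two
    feature maps at corresponding positions. Untransmitted sender cells are
    assumed zero-filled, so the receiver's own features win there."""
    return np.maximum(p_receiver, p_sender_registered)
```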
Step 5: the fused feature data is passed into the subsequent detection model, which performs classification and regression on it, finally yielding the cooperative sensing target detection result after fusing the feature data of the front and rear vehicles.
The present invention is not limited to the embodiments described above. The foregoing description of specific embodiments is intended to describe and illustrate the technical solutions of the invention; the embodiments are merely illustrative, not restrictive. Those skilled in the art can make many changes and modifications without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (3)

1. A network-bandwidth-adaptive cooperative sensing system for autonomous driving feature data, comprising: a sensing data sending unit, a first target detection task module, a second target detection task module, and a sensing data receiving unit; the sensing data sending unit transmits sensing feature data to the sensing data receiving unit through a V2V wireless data channel; the method comprises the following steps:
S1, the sensing data sending unit transmits the acquired 3D point cloud sensing data to the first target detection task module;
S2, the first target detection task module processes the 3D point cloud data through a feature extraction layer and a feature segmentation layer to generate segmented feature sensing data;
S3, the second target detection task module applies coordinate rotation and displacement calibration to the received partial feature sensing data through a data registration layer to generate registered feature sensing data;
S4, the second target detection task module fuses the registered feature sensing data with its own sensing data through a point cloud feature fusion layer to generate fused sensing data;
S5, the second target detection task module processes the fused sensing data through a classification and regression layer and outputs target data.
2. The system of claim 1, wherein: the point cloud feature segmentation layer processes the 3D point cloud sensing feature data with an angle-based or point-density-based segmentation method to reduce the amount of sensing feature data that must be transmitted.
3. The system of claim 1, wherein: the data registration layer generates registered sensing data by calibrating the coordinate rotation and displacement of the sensing feature data:
301. a rotation matrix R is computed from the sensing feature data by the following formula:
R = R_z(θ_yaw) · R_y(θ_pitch) · R_x(θ_roll)
where θ_yaw, θ_pitch, and θ_roll are the differences of the yaw, pitch, and roll angles between the two vehicles, respectively;
302. the coordinate rotation and displacement of the 3D point cloud sensing data are calibrated according to the following formula:
(X'_s, Y'_s, Z'_s)^T = R · (X_s, Y_s, Z_s)^T + (Δd_x, Δd_y, Δd_z)^T
where (X_s, Y_s, Z_s) and (X'_s, Y'_s, Z'_s) denote the coordinates of the sender's data before and after registration, and (Δd_x, Δd_y, Δd_z) is the displacement difference between the two vehicles.
CN202111270419.5A 2021-10-29 2021-10-29 Network bandwidth self-adaptive automatic driving characteristic data cooperative sensing system Pending CN114521001A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111270419.5A CN114521001A (en) 2021-10-29 2021-10-29 Network bandwidth self-adaptive automatic driving characteristic data cooperative sensing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111270419.5A CN114521001A (en) 2021-10-29 2021-10-29 Network bandwidth self-adaptive automatic driving characteristic data cooperative sensing system

Publications (1)

Publication Number Publication Date
CN114521001A true CN114521001A (en) 2022-05-20

Family

ID=81594741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111270419.5A Pending CN114521001A (en) 2021-10-29 2021-10-29 Network bandwidth self-adaptive automatic driving characteristic data cooperative sensing system

Country Status (1)

Country Link
CN (1) CN114521001A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170124781A1 (en) * 2015-11-04 2017-05-04 Zoox, Inc. Calibration for autonomous vehicle operation
WO2017079321A1 (en) * 2015-11-04 2017-05-11 Zoox, Inc. Sensor-based object-detection optimization for autonomous vehicles
US20200160559A1 (en) * 2018-11-16 2020-05-21 Uatc, Llc Multi-Task Multi-Sensor Fusion for Three-Dimensional Object Detection
CN111554088A (en) * 2020-04-13 2020-08-18 重庆邮电大学 Multifunctional V2X intelligent roadside base station system
CN112650220A (en) * 2020-12-04 2021-04-13 东风汽车集团有限公司 Automatic vehicle driving method, vehicle-mounted controller and system
CN113490178A (en) * 2021-06-18 2021-10-08 天津大学 Intelligent networking vehicle multistage cooperative sensing system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
朱向雷 et al.: "自动驾驶智能系统测试研究综述" ["A Survey of Testing Research for Autonomous Driving Intelligent Systems"], 《软件学报》 [Journal of Software], 5 July 2021 (2021-07-05) *
胡劲文; 郑博尹; 王策; 赵春晖; 侯晓磊; 潘泉; 徐钊: "基于多传感器融合的智能车在野外环境中的障碍物检测研究" ["Obstacle detection for intelligent vehicles in off-road environments based on multi-sensor fusion"] (in English), FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, no. 05, 3 May 2020 (2020-05-03) *

Similar Documents

Publication Publication Date Title
US20210342637A1 (en) Generating ground truth for machine learning from time series elements
US10035508B2 (en) Device for signalling objects to a navigation module of a vehicle equipped with this device
US10552982B2 (en) Method for automatically establishing extrinsic parameters of a camera of a vehicle
KR20190103265A (en) Detection of additional cluster vehicles in or near cluster member vehicles
CN111391856A (en) System and method for detecting front curve of automobile adaptive cruise
CN113490178B (en) Intelligent networking vehicle multistage cooperative sensing system
WO2020215254A1 (en) Lane line map maintenance method, electronic device and storage medium
CN111457933B (en) Method and device for determining static and dynamic information of lane class
CN109900490B (en) Vehicle motion state detection method and system based on autonomous and cooperative sensors
CN115578709B (en) Feature level cooperative perception fusion method and system for vehicle-road cooperation
WO2022115987A1 (en) Method and system for automatic driving data collection and closed-loop management
CN111278006B (en) V2X-based perception information reliability verification method and device, controller and automobile
CN115187737A (en) Semantic map construction method based on laser and vision fusion
CN114037707A (en) Network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system
CN115223131A (en) Adaptive cruise following target vehicle detection method and device and automobile
CN114521001A (en) Network bandwidth self-adaptive automatic driving characteristic data cooperative sensing system
Li et al. Lane keeping control based on model predictive control under region of interest prediction considering vehicle motion states
CN111753901A (en) Data fusion method, device and system and computer equipment
CN115379408A (en) Scene perception-based V2X multi-sensor fusion method and device
CN114274957A (en) Vehicle self-adaptive cruise control method and system
CN113859257A (en) Surrounding vehicle driving intention identification method based on gray entropy weight and lateral ellipse threshold
CN112309156A (en) Traffic light passing strategy based on 5G hierarchical decision
CN115082562A (en) External parameter calibration method, device, equipment, server and vehicle-mounted computing equipment
US20180265085A1 (en) Dynamically after vehicle following distance using probe data
WO2023036032A1 (en) Lane line detection method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination