WO2023207016A1 - Autonomous driving test system and method based on a digital twin cloud control platform - Google Patents

Autonomous driving test system and method based on a digital twin cloud control platform

Info

Publication number
WO2023207016A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
test
real
data
virtual
Prior art date
Application number
PCT/CN2022/129299
Other languages
English (en)
French (fr)
Inventor
赵祥模
刘占文
李蕊芬
李超
王超
肖方伟
房颜明
王洋
宋明哲
孟虎
徐志刚
王润民
林杉
程娟茹
范锦
李文倩
闵海根
蒋渊德
Original Assignee
长安大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 长安大学 filed Critical 长安大学
Publication of WO2023207016A1 publication Critical patent/WO2023207016A1/zh

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00Testing or monitoring of control systems or parts thereof
    • G05B23/02Electric testing or monitoring
    • G05B23/0205Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0208Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the configuration of the monitoring system
    • G05B23/0213Modular or universal configuration of the monitoring system, e.g. monitoring system having modules that may be combined to build monitoring program; monitoring system that can be applied to legacy systems; adaptable monitoring system; using different communication protocols

Definitions

  • the invention relates to the field of automatic driving technology, and specifically relates to an automatic driving testing system and method based on a digital twin cloud control platform.
  • the present invention proposes an automatic driving testing system and method based on a digital twin cloud control platform.
  • the present invention proposes an autonomous driving test system and method based on a digital twin cloud control platform, which realizes virtual-real combined verification of autonomous driving algorithm functions, scenario simulation testing and whole-vehicle integration testing, overcomes the shortcomings of existing real-vehicle road testing and software simulation testing, and improves testing efficiency.
  • the present invention provides the following technical solutions.
  • An autonomous driving test system based on a digital twin cloud control platform including:
  • Autonomous driving simulation software builds virtual dynamic test scenarios based on vehicle simulation model data and real vehicle test tasks
  • the data center processes real vehicle test tasks and virtual dynamic test scenarios into point clouds, images and ADC sensing data; and converts real vehicle status and control data into vehicle simulation model data and sends it to the autonomous driving simulation software;
  • Autonomous driving algorithm platform which includes:
  • Perception fusion algorithm platform used to fuse point cloud, image and ADC perception data, and generate traffic target location and category information
  • the decision-making planning algorithm platform outputs the traffic target trajectory based on the traffic target location and category information
  • the motion control algorithm platform calculates vehicle control signals based on the traffic target trajectory
  • the real vehicle actuator is installed on the test vehicle. According to the vehicle control signal, the test vehicle performs acceleration, braking and steering actions, and generates real vehicle status and control data;
  • the dynamic twin platform renders and displays real-time dynamic twin effects based on virtual dynamic test scene data and real-time vehicle simulation model data.
  • the construction of the virtual dynamic test scenario data includes:
  • the automatic driving simulation software generates virtual dynamic test scenarios required for development and testing, and establishes multiple virtual sensor models in the simulation software.
  • the target object information detected by the virtual sensors constitutes virtual dynamic test scenario data, which is injected into the autonomous driving algorithm platform through the bus.
  • the real vehicle actuator and the data center perform wireless communication data transmission through a local area network
  • the data center uses wired communication to convert the actual vehicle status and control data obtained by the test vehicle into vehicle simulation model data and transmits it back to the autonomous driving simulation software.
  • the actual vehicle test tasks include actual vehicle collision avoidance warning and actual vehicle avoidance of pedestrians.
  • the actual vehicle status and control data include pose data and GPS data.
  • An autonomous driving testing method based on a digital twin cloud control platform including the following steps:
  • OPENDRIVE data is collected and produced to obtain a high-precision map; the traffic environment is simulated in simulation software and test cases are created; the mapping between the real map and the virtual traffic environment is calibrated, as is the mapping between the vehicle under test and the simulated vehicle model;
  • Map-related road scene and traffic scene information is collected through sensors installed on the roadside, and processed into point clouds, images and ADC sensing data;
  • perception fusion processes point cloud, image and ADC perception data to obtain perception results, generate full-area perception results, and generate traffic target location and category information;
  • the test vehicle is controlled by the real vehicle actuator for testing, and the vehicle status information is obtained;
  • the built-in high-precision road network map and objective environmental scene model are loaded and rendered, and the real-time dynamic twin effect is presented in real time through the visual interface.
  • the testing by controlling the test vehicle through a real vehicle actuator according to the vehicle control signal includes the following steps:
  • the built-in high-precision road network map and objective environmental scene model are loaded and rendered, and the parallel twin of the test field is presented in real time through the visual interface.
  • the testing by controlling the test vehicle through a real vehicle actuator according to the vehicle control signal includes the following steps:
  • the OPENDRIVE data is OPENDRIVE data adapted to handling-stability plaza scenarios, and the vehicle control signal is sent to the test vehicle located on the handling-stability plaza to obtain vehicle status information;
  • the built-in high-precision road network map and objective environmental scene model are loaded and rendered, and the twin of the real intersection is presented in real time through the visual interface.
  • it also includes:
  • the motion information of the simulated vehicle model at each timestamp is used as the analysis basis for the performance test of the controller under test. After the case is completed, the algorithm is analyzed.
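The perception-decision-control chain described in the method steps above can be sketched as a minimal pipeline. All names and the toy fusion/planning rules below are hypothetical illustrations, not the patent's implementation:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrafficTarget:
    # Fused perception output: position (m) and category label
    x: float
    y: float
    category: str

def perception_fusion(point_cloud, image, adc_data) -> List[TrafficTarget]:
    # Placeholder fusion: in the real system, point-cloud, image and ADC
    # perception data are fused into target position/category information.
    return [TrafficTarget(x, y, cat) for (x, y, cat) in adc_data]

def decision_planning(targets: List[TrafficTarget]) -> List[Tuple[float, float]]:
    # Toy planner: keep one lane width (3.5 m) of lateral offset from targets.
    return [(t.x, t.y + 3.5) for t in targets]

def motion_control(trajectory: List[Tuple[float, float]]) -> dict:
    # Toy controller: steer toward the first trajectory point.
    tx, ty = trajectory[0]
    return {"throttle": 0.2, "brake": 0.0, "steer": 0.1 if ty > 0 else -0.1}

# One pass of the closed loop
adc = [(10.0, 0.0, "pedestrian")]
targets = perception_fusion(None, None, adc)
signal = motion_control(decision_planning(targets))
print(signal)
```

Each stage maps onto one of the platforms in the text: `perception_fusion` to the perception fusion algorithm platform, `decision_planning` to the decision-planning platform, and `motion_control` to the motion control platform that emits the vehicle control signal.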
  • the present invention proposes an autonomous driving test system and method based on a digital twin cloud control platform. It can automatically generate large batches of simulation sample data with intelligent ground-truth calibration for a wide range of virtual experimental conditions and scenarios, efficiently, quickly and accurately producing the training data samples required in the field of autonomous driving; it can integrate high-precision map road-collection equipment and, combining key scene-element expression with automatic reconstruction methods, quickly build three-dimensional reconstructed scenes that meet testing requirements of different dimensions, helping users increase the value of self-collected scene data; and it can integrate simulation test tools, V2X communication equipment, real test vehicles and other functional units to carry out digital twin autonomous driving tests based on both virtual simulation and real environments, providing the capability to run real-vehicle test verification of autonomous driving in complex virtual scenarios under limited resource conditions.
  • the invention not only retains the advantages of real-vehicle road testing but also incorporates the advantages of simulation technology by integrating real vehicles into the virtual traffic environment, organically combining real test dynamics, traffic-flow simulation and complex virtual traffic scenes, so that it can continuously adapt to the rapidly changing testing needs of autonomous driving technology.
  • Figure 1 is a flow chart of the digital twin autonomous driving test method according to the embodiment of the present invention.
  • Figure 2 is a rendering of the digital twin according to the embodiment of the present invention.
  • Figure 3 is the testing process of the Class A software-in-the-loop vehicle automatic driving algorithm according to the embodiment of the present invention
  • Figure 4 is the virtual and real combined test and parallel twin process of Class B autonomous driving of the embodiment of the present invention.
  • Figure 5 is a class C vehicle automatic driving adaptability test and twin process according to the embodiment of the present invention.
  • Figure 6 shows test items and test scenarios according to the embodiment of the present invention.
  • Figure 7 is a simulation test road scene rendering according to the embodiment of the present invention.
  • Figure 8 is a layout diagram of an intersection and a stability plaza according to an embodiment of the present invention.
  • the invention provides an autonomous driving test system and method based on a digital twin cloud control platform, as shown in Figures 1-8. As shown in Figure 1, the present invention integrates two functional units: the digital twin cloud control platform and the test vehicle.
  • the digital twin cloud control platform includes dynamic twin platform, autonomous driving simulation software and data center.
  • Autonomous driving simulation software constructs virtual dynamic test scenario data based on real-vehicle test tasks issued by testers (real-vehicle collision-avoidance warning and real-vehicle pedestrian avoidance); the data center processes the real-vehicle test tasks and virtual dynamic test scenarios into point-cloud, image and ADC (analog-to-digital converter) perception data; through the data center, the real-vehicle status and control data generated by the real-vehicle actuator (including pose data and GPS data) are converted into vehicle simulation model data and sent to the autonomous driving simulation software; the dynamic twin platform renders and displays the real-time dynamic twin effect based on the virtual dynamic test scenario data and real-time vehicle simulation model data sent by the autonomous driving simulation software.
  • the rendering effect is shown in Figure 2 .
  • the digital twin cloud control platform is the host-computer module of the test management software. It provides multi-account management, authority management, requirement import and test-plan preparation, and through the integrated simulation software it supports management of specific test scenarios as well as test-record playback, test analysis, report generation and other functions. Users can create and edit test scenarios on the digital twin cloud control platform and test directly through software-in-the-loop and model-in-the-loop access; alternatively, on the real-vehicle side, they can wirelessly download the test plans and test scenarios formulated on the digital twin cloud control platform to the vehicle-mounted terminal for vehicle-in-the-loop testing.
  • the test vehicle side includes an autonomous driving algorithm platform installed on the test vehicle, which comprises: a perception fusion algorithm platform, used to fuse the point-cloud, image and ADC perception data sent by the data center and generate traffic-target position and category information; a decision-planning algorithm platform, which outputs the traffic-target trajectory based on the generated position and category information; and a motion control algorithm platform, which calculates the vehicle control signal based on the output trajectory;
  • the real vehicle actuator is mounted on the test vehicle, receives vehicle control signals, the test vehicle performs acceleration, braking and steering actions, and generates real vehicle status and control data;
  • the test vehicle end and the digital twin cloud control platform end carry out wireless communication data transmission through the local area network, receive the test task configuration and complete the vehicle in-the-loop test.
  • the data center uses wired communication to convert the real-vehicle status and control data (including pose data and GPS data) obtained on the test-vehicle side into vehicle simulation model data and transmits it back to the autonomous driving simulation software, achieving closed-loop automated virtual-real interactive testing with "from virtual to real" sensor injection and "from real to virtual" control feedback.
  • the main functions of the invention are:
  • the invention can automatically generate large batches of simulation sample data and intelligently calibrate true values for various virtual experimental working conditions and scenarios, and efficiently, quickly and accurately generate training data samples required in the field of automatic driving of automobiles;
  • This invention can integrate high-precision map data road collection equipment and combine the expression of key scene elements and the automatic reconstruction method to quickly construct a three-dimensional reconstruction scene that meets the test requirements of different dimensions, helping users improve the use value of self-collection scene data;
  • This invention can integrate functional units such as simulation test tools, V2X communication equipment and real test vehicles to carry out digital twin autonomous driving tests based on both virtual simulation and real environments, and has the capability to carry out real-vehicle test verification of autonomous driving in complex virtual scenarios under limited resource conditions.
  • Step 1 Collect and produce OPENDRIVE data. Obtain the data through the collection vehicle and produce it into a high-precision map. The simulation software simulates the traffic environment;
  • Step 2 The simulator imports the prepared map into the simulation software and creates a test case
  • Step 3 In the test case, virtual cameras, lidar and other sensors are added to the roadside to collect detection data within the scanning range.
  • the sensors obtain map-related road scene and traffic scene information and pass it to the ADAS/AD algorithm platform through ROS;
  • Step 4 According to the test requirements, access the autonomous driving algorithm under test.
  • the perception fusion algorithm processes point clouds, images, true values and other information, performs perception fusion, calculates perception results, and generates global perception results.
  • the output position, category and other information is passed to the decision-planning algorithm via topic communication; the decision-planning algorithm computes and outputs the vehicle trajectory, which is passed via topic communication to the motion control algorithm; after processing and calculation by the motion control algorithm, the control signal is finally output;
  • Step 5 The vehicle model obtains, via ROS, the control signal output by the controller under test in response to the controller sensor signals; the vehicle model moves and returns vehicle status information such as GPS, position and attitude, which the simulation software obtains through the API interface to carry out algorithm testing at the next time step;
  • Step 6 The motion information of the simulated vehicle model at each timestamp is used as the analysis basis for the performance test of the controller under test. After the case is completed, the algorithm is analyzed.
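Step 5's vehicle model, which applies the controller's output and returns GPS/pose status for the next cycle, can be approximated by a simple kinematic model. The pedal-to-acceleration mapping and parameter values below are assumptions for illustration, not the patent's model:

```python
import math

class VehicleModel:
    """Minimal kinematic vehicle model: applies a control signal for one
    timestep and returns the status (position, heading, speed) that the
    simulation software would read back via its API interface."""
    def __init__(self, x=0.0, y=0.0, yaw=0.0, v=0.0, wheelbase=2.7):
        self.x, self.y, self.yaw, self.v = x, y, yaw, v
        self.L = wheelbase

    def step(self, throttle, brake, steer, dt=0.1):
        accel = 3.0 * throttle - 6.0 * brake   # crude pedal-to-accel map
        self.v = max(0.0, self.v + accel * dt)
        self.x += self.v * math.cos(self.yaw) * dt
        self.y += self.v * math.sin(self.yaw) * dt
        self.yaw += self.v / self.L * math.tan(steer) * dt
        return {"x": self.x, "y": self.y, "yaw": self.yaw, "v": self.v}

car = VehicleModel(v=5.0)
for _ in range(10):                      # ten 0.1 s control cycles
    status = car.step(throttle=0.5, brake=0.0, steer=0.0)
print(status)
```

The returned `status` dictionary stands in for the GPS and pose information that, per Step 5, the simulation software reads back to run the algorithm test at the next time step.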
  • Step 1 According to the test requirements, collect and produce OPENDRIVE data; obtain the data through the collection vehicle and produce it into a high-precision map. The simulation software simulates the traffic environment, calibrates the mapping between the real map and the virtual traffic environment, and calibrates the mapping between the vehicle under test and the simulated vehicle model;
  • Step 2 The simulator imports the prepared map into the simulation software and creates a test case
  • Step 3 The simulation software transfers the map-related road scene and traffic scene information to the ADAS/AD algorithm platform through ROS;
  • Step 4 According to the test requirements, access the autonomous driving algorithm under test.
  • the perception fusion algorithm processes point clouds, images, ground truth and other information to perform perception fusion; the output position, category and other information is passed to the decision-planning algorithm via topic communication; the decision-planning algorithm computes and outputs the vehicle trajectory, which is passed via topic communication to the motion control algorithm; after processing and calculation by the motion control algorithm, the control signal is finally output;
  • Step 5 Send the control signal to the driverless bus on the proving-ground road; the industrial computer of the real vehicle receives the control signal and moves the vehicle, returning vehicle status information such as GPS, position and attitude, which the simulation software obtains through the API interface to carry out algorithm testing at the next time step;
  • Step 6 After receiving vehicle status and other information, the simulation software loads the built-in high-precision road network map and objective environmental scene model, renders it through the cloud control platform, and presents the parallel twin of the test field in real time on the visual interface.
  • Step 1 According to the test requirements, collect and produce OPENDRIVE data adapted to handling-stability plaza scenarios; obtain the data through the collection vehicle and produce it into a high-precision map. The simulation software simulates the traffic environment, calibrates the mapping between the real map and the virtual traffic environment, and calibrates the mapping between the vehicle under test and the simulated vehicle model;
  • Step 2 The simulator imports the prepared map into the simulation software and creates a test case
  • Step 3 The simulation software transfers the map-related road scene and traffic scene information to the ADAS/AD algorithm platform through ROS;
  • Step 4 According to the test requirements, access the autonomous driving algorithm under test.
  • the perception fusion algorithm processes point clouds, images, ground truth and other information to perform perception fusion; the output position, category and other information is passed to the decision-planning algorithm via topic communication; the decision-planning algorithm computes and outputs the vehicle trajectory, which is passed via topic communication to the motion control algorithm; after processing and calculation by the motion control algorithm, the control signal is finally output;
  • Step 5 Send the control signal to the driverless bus on the handling-stability plaza; the industrial computer of the real vehicle receives the control signal and moves the vehicle, returning vehicle status information such as GPS, position and attitude, which the simulation software obtains through the API interface to carry out algorithm testing at the next time step;
  • Step 6 After receiving vehicle status and other information, the simulation software loads the built-in high-precision road network map and objective environmental scene model, renders it through the cloud control platform, and presents a twin of the real intersection on the visual interface.
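The OPENDRIVE data collected in Step 1 of these flows is an XML-based road description (OpenDRIVE `.xodr`). As a minimal illustration of what importing such a map involves, the sketch below reads road lengths from a hand-written fragment; real files produced by a collection vehicle contain far more detail (geometry, lanes, signals), and the fragment's contents are invented for the example:

```python
import xml.etree.ElementTree as ET

# A tiny hand-written OpenDRIVE fragment standing in for map data
# produced by the collection vehicle (real .xodr files are far richer).
XODR = """<?xml version="1.0"?>
<OpenDRIVE>
  <header revMajor="1" revMinor="6" name="plaza_demo"/>
  <road name="approach" length="120.5" id="1" junction="-1"/>
  <road name="intersection" length="35.0" id="2" junction="100"/>
</OpenDRIVE>"""

root = ET.fromstring(XODR)
# Index every road element by its id, keeping its length in metres.
roads = {r.get("id"): float(r.get("length")) for r in root.iter("road")}
total = sum(roads.values())
print(roads, total)
```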
  • the present invention uses simulation software to generate the virtual scenarios required for development and testing, establishes multiple virtual sensor models in the simulation software, and injects the target-object information detected by the virtual sensors into the autonomous driving controller on the real vehicle through the bus for information fusion and control decision-making. The controller sends the resulting acceleration, braking and steering commands to the real-vehicle actuator, and the real vehicle feeds the whole-vehicle motion state from the vehicle bus back to the virtual scenario through the roadside communication system to complete vehicle position synchronization, thereby achieving closed-loop real-time simulation of the entire system.
  • the obtained target object information is injected into the autonomous driving algorithm platform through the wireless network for information fusion and decision-making control.
  • the autonomous driving algorithm platform sends the decision-making acceleration, braking, steering and other control signals to the real vehicle actuator.
  • while the real vehicle runs, the vehicle information collection system gathers real-time motion state information and feeds it back to the virtual scenario to synchronize vehicle positions in the virtual and real environments, thereby realizing closed-loop real-time simulation testing of the entire digital twin system.
  • the virtual simulation platform can generate the original signal or target signal of the sensor according to the needs of the test scenario in the test task management system, and inject it into the autonomous driving prototype before the data perception and fusion module or the decision-making layer algorithm.
  • the virtual simulation platform generates original signals based on different models such as camera models, millimeter-wave radar models, lidar models, and V2X models, and injects various basic test data in the original signals into the vehicle controller.
  • the original signal of the camera is the video stream
  • the original signal of the lidar is the laser point cloud data;
  • Target signal injection: based on an idealized ground-truth sensing model, the virtual simulation platform injects target signals detected within a certain range of the prototype vehicle into the vehicle controller through CAN or another suitable method;
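As an illustration of target signal injection, the sketch below packs one detected target into a fixed-layout binary payload of the kind that could be carried over CAN(-FD) or UDP to the vehicle controller. The field layout, scaling and function names are hypothetical, not taken from the patent:

```python
import struct

def pack_target(obj_id, x, y, heading, speed):
    """Pack one detected target into a fixed 16-byte payload.
    Hypothetical layout: uint16 id, int16 x/y in cm, int16 heading in
    0.01 rad, uint16 speed in cm/s, 6 reserved (padding) bytes."""
    return struct.pack(
        "<HhhhH6x",
        obj_id,
        int(round(x * 100)),
        int(round(y * 100)),
        int(round(heading * 100)),
        int(round(speed * 100)),
    )

def unpack_target(payload):
    # Reverse of pack_target: decode and rescale the fixed-point fields.
    obj_id, x, y, heading, speed = struct.unpack("<HhhhH6x", payload)
    return obj_id, x / 100, y / 100, heading / 100, speed / 100

frame = pack_target(7, x=12.34, y=-2.5, heading=1.57, speed=8.2)
print(len(frame), unpack_target(frame))
```

A fixed-point layout like this keeps every target frame the same size, which suits periodic bus injection at a constant cycle time.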
  • a classic intersection model is used to conduct intelligent vehicle testing for the test scenario.
  • the tests in this embodiment are based on the autonomous driving proving ground of Chang'an University, with the handling-stability plaza as the physical basis; a virtual-real combined intelligent vehicle test is carried out for urban intersection scenarios.
  • a two-way eight-lane intersection in a certain area of Xi'an City is used as a model.
  • the typical traffic conditions at intersections in urban traffic flow were extracted.
  • test cases were set up for starting, stopping, going straight and turning within the intersection; the situation of pedestrians crossing the road was also taken into account and corresponding scenarios were configured. The simulated test road scene is shown in Figure 7.
  • the selected test items and test scenarios are shown in Figure 6.
  • the present invention uses the minimum safe distance at the warning time and the TTC at the warning time as the conditions for deciding whether to issue a warning, and uses the inter-vehicle minimum-safe-distance model to characterize the possibility of collision during vehicle braking.
  • the moment at which the application issues a warning must ensure that there is sufficient time or distance between the test vehicle and the target vehicle for the intelligent vehicle to react and brake down to a safe speed, so as to avoid a collision.
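The warning condition described above (minimum safe distance and TTC at the warning time) can be sketched as follows; the reaction time, deceleration and TTC threshold values are illustrative assumptions, not figures from the patent:

```python
def time_to_collision(gap, v_ego, v_target):
    """TTC in seconds; infinite if the ego is not closing on the target."""
    closing = v_ego - v_target
    return gap / closing if closing > 0 else float("inf")

def min_safe_distance(v_ego, v_target, reaction=1.2, decel=6.0):
    """Distance needed for the ego vehicle to react and then brake down to
    the target's speed (simple constant-deceleration model)."""
    closing = max(0.0, v_ego - v_target)
    return v_ego * reaction + closing ** 2 / (2 * decel)

def should_warn(gap, v_ego, v_target, ttc_threshold=2.5):
    # Warn when either judgment condition fires: TTC below the threshold,
    # or the current gap already shorter than the minimum safe distance.
    ttc = time_to_collision(gap, v_ego, v_target)
    return ttc < ttc_threshold or gap < min_safe_distance(v_ego, v_target)

# Ego at 15 m/s, stopped target 30 m ahead: TTC = 2 s, so a warning fires.
print(should_warn(gap=30.0, v_ego=15.0, v_target=0.0))
```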

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

An autonomous driving test system and method based on a digital twin cloud control platform. Simulation software generates the high-fidelity virtual scenarios required for development and testing, and multiple virtual sensor models are built in the simulation software. Target-object information detected by the virtual sensors is injected over a bus into the autonomous driving controller on a real vehicle for information fusion and control decision-making, and the controller sends the resulting acceleration, braking and steering commands to the real-vehicle actuator. The real vehicle feeds the whole-vehicle motion state from the vehicle bus back to the virtual scenario to synchronize vehicle position, achieving closed-loop real-time test verification of the entire system, with the relevant twin effects of the scenario displayed in real time on the digital twin cloud control platform. This realizes virtual-real combined verification of autonomous driving algorithm functions, scenario simulation testing and whole-vehicle integration testing, overcomes the shortcomings of existing real-vehicle road testing and software simulation testing, and improves test efficiency.

Description

Autonomous driving test system and method based on a digital twin cloud control platform
Technical Field
The present invention relates to the field of autonomous driving technology, and in particular to an autonomous driving test system and method based on a digital twin cloud control platform.
Background Art
With market demand becoming clear and autonomous driving technology developing, many traditional vehicle manufacturers have transformed or established autonomous driving departments, and autonomous driving startups have likewise developed many autonomous-driving controllers and sensors, deeply integrating them into L2-to-L4 intelligent vehicles. These systems empower intelligent vehicles, but system robustness and safety against all kinds of long-tail effects have become the bottleneck constraining the deployment of high-level intelligent vehicles. Autonomous vehicles therefore require thorough test verification.
Constrained by the difficulty of constructing complex scenarios, test cost and safety risk, real-vehicle road testing alone cannot meet the research and testing needs of intelligent vehicles. Besides real-vehicle road testing, software simulation testing is also an important technique for intelligent vehicle verification, with the advantages of low cost, short cycles and high efficiency; however, its scenario modeling and sensor simulation are mostly built under idealized conditions, ignoring noise introduced by the sensor hardware or software itself, as well as environmental factors such as rain, snow and dust that degrade radar sensing performance, so the confidence of simulation results is significantly limited.
To this end, the present invention proposes an autonomous driving test system and method based on a digital twin cloud control platform.
Summary of the Invention
To solve the above problems, the present invention proposes an autonomous driving test system and method based on a digital twin cloud control platform, which realizes virtual-real combined verification of autonomous driving algorithm functions, scenario simulation testing and whole-vehicle integration testing, overcomes the shortcomings of existing real-vehicle road testing and software simulation testing, and improves test efficiency.
The present invention provides the following technical solutions.
An autonomous driving test system based on a digital twin cloud control platform, comprising:
autonomous driving simulation software, which constructs virtual dynamic test scenarios according to vehicle simulation model data and real-vehicle test tasks;
a data center, which processes the real-vehicle test tasks and virtual dynamic test scenarios into point-cloud, image and ADC perception data, and converts real-vehicle status and control data into vehicle simulation model data sent to the autonomous driving simulation software;
an autonomous driving algorithm platform, comprising:
a perception fusion algorithm platform for fusing the point-cloud, image and ADC perception data and generating traffic-target position and category information;
a decision-planning algorithm platform, which outputs a traffic-target trajectory according to the traffic-target position and category information;
a motion control algorithm platform, which calculates a vehicle control signal according to the traffic-target trajectory;
a real-vehicle actuator mounted on the test vehicle; according to the vehicle control signal, the test vehicle performs acceleration, braking and steering actions and generates real-vehicle status and control data;
and,
a dynamic twin platform, which renders and displays the real-time dynamic twin effect according to the virtual dynamic test scenario data and real-time vehicle simulation model data.
Preferably, the construction of the virtual dynamic test scenario data comprises:
the autonomous driving simulation software generates the virtual dynamic test scenarios required for development and testing and builds multiple virtual sensor models in the simulation software; the target-object information detected by the virtual sensors constitutes the virtual dynamic test scenario data, which is injected into the autonomous driving algorithm platform through a bus.
Preferably, the real-vehicle actuator and the data center transmit data by wireless communication over a local area network;
the data center uses wired communication to convert the real-vehicle status and control data obtained on the test-vehicle side into vehicle simulation model data and sends it back to the autonomous driving simulation software.
Preferably, the real-vehicle test tasks include real-vehicle collision-avoidance warning and real-vehicle pedestrian avoidance.
Preferably, the real-vehicle status and control data include pose data and GPS data.
An autonomous driving test method based on a digital twin cloud control platform, comprising the following steps:
according to the real-vehicle test task, collecting and producing OPENDRIVE data to obtain a high-precision map, simulating the traffic environment with simulation software, and creating test cases; calibrating the mapping between the real map and the virtual traffic environment, and calibrating the mapping between the vehicle under test and the simulated vehicle model;
collecting map-related road-scene and traffic-scene information with roadside sensors and processing it into point-cloud, image and ADC perception data;
according to the real-vehicle test task, fusing the point-cloud, image and ADC perception data to obtain perception results, generating whole-area perception results, and generating traffic-target position and category information;
planning traffic-target trajectories from the generated traffic-target position and category information, and computing and outputting vehicle control signals through the motion control algorithm;
controlling the test vehicle through the real-vehicle actuator for testing according to the vehicle control signals, and obtaining vehicle status information;
after receiving the vehicle status information, loading and rendering the built-in high-precision road-network map and objective environment scene model, and presenting the real-time dynamic twin effect through the visualization interface.
Preferably, controlling the test vehicle through the real-vehicle actuator for testing according to the vehicle control signals comprises the following steps:
sending the vehicle control signals to the test vehicle on the proving-ground road, and obtaining vehicle status information;
after receiving the vehicle status information, loading and rendering the built-in high-precision road-network map and objective environment scene model, and presenting the parallel twin of the proving ground in real time through the visualization interface.
Preferably, controlling the test vehicle through the real-vehicle actuator for testing according to the vehicle control signals comprises the following steps:
the OPENDRIVE data is OPENDRIVE data adapted to handling-stability plaza scenarios; sending the vehicle control signals to the test vehicle on the handling-stability plaza, and obtaining vehicle status information;
after receiving the vehicle status information, loading and rendering the built-in high-precision road-network map and objective environment scene model, and presenting the twin of the real intersection in real time through the visualization interface.
Preferably, the method further comprises:
controlling the motion of the simulated vehicle model through the vehicle control signals, and obtaining vehicle status information;
the motion information of the simulated vehicle model at each timestamp serves as the basis for analyzing the performance test of the controller under test; after the case ends, the algorithm is analyzed.
Beneficial effects of the invention:
The present invention proposes an autonomous driving test system and method based on a digital twin cloud control platform. It can automatically generate large batches of simulation sample data with intelligent ground-truth calibration for a wide range of virtual experimental conditions and scenarios, efficiently, quickly and accurately producing the training data samples required in the field of automotive autonomous driving; it can integrate high-precision map road-collection equipment and, combining key scene-element expression with automatic reconstruction methods, quickly build three-dimensional reconstructed scenes that meet testing requirements of different dimensions, helping users increase the value of self-collected scene data; and it can integrate simulation test tools, V2X communication equipment, real test vehicles and other functional units to carry out digital twin autonomous driving tests based on both virtual simulation and real environments, providing the capability to run real-vehicle test verification of autonomous driving in complex virtual scenarios under limited resource conditions.
The invention not only retains the advantages of real-vehicle road testing but also incorporates the advantages of simulation technology by integrating real vehicles into the virtual traffic environment, organically combining real test dynamics, traffic-flow simulation and complex virtual traffic scenes, so that it can continuously adapt to the rapidly changing testing needs of autonomous driving technology.
Brief Description of the Drawings
Figure 1 is a flow diagram of the digital twin autonomous driving test method according to an embodiment of the present invention;
Figure 2 shows the digital twin rendering effect of an embodiment of the present invention;
Figure 3 is the Class A software-in-the-loop whole-vehicle autonomous driving algorithm test flow of an embodiment of the present invention;
Figure 4 is the Class B whole-vehicle autonomous driving virtual-real combined test and parallel-twin flow of an embodiment of the present invention;
Figure 5 is the Class C whole-vehicle autonomous driving adaptability test and twin flow of an embodiment of the present invention;
Figure 6 shows the test items and test scenarios of an embodiment of the present invention;
Figure 7 is a rendering of the simulated test road scene of an embodiment of the present invention;
Figure 8 is a layout diagram of the intersection and handling-stability plaza of an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
Embodiments
The present invention provides an autonomous driving test system and method based on a digital twin cloud control platform, as shown in Figures 1-8. As shown in Figure 1, the present invention integrates two functional units: the digital twin cloud control platform side and the test vehicle.
The digital twin cloud control platform side comprises the dynamic twin platform, the autonomous driving simulation software and the data center. The autonomous driving simulation software constructs virtual dynamic test scenario data according to the real-vehicle test tasks issued by testers (real-vehicle collision-avoidance warning and real-vehicle pedestrian avoidance); the data center processes the real-vehicle test tasks and virtual dynamic test scenario data into point-cloud, image and ADC (analog-to-digital converter) perception data; through the data center, the real-vehicle status and control data generated by the real-vehicle actuator (including pose data and GPS data) are converted into vehicle simulation model data and sent to the autonomous driving simulation software; the dynamic twin platform renders and displays the real-time dynamic twin effect according to the virtual dynamic test scenario data and real-time vehicle simulation model data sent by the autonomous driving simulation software, with the result shown in Figure 2.
The digital twin cloud control platform side is the host-computer module of the test management software. On the platform it provides user multi-account management, authority management, requirement import and test-plan preparation, and through the integrated simulation software it enables management of specific test scenarios as well as test-record playback, test analysis, report generation and other functions. Users can create and edit test scenarios on the digital twin cloud control platform side and test directly via software-in-the-loop and model-in-the-loop access, or, on the real-vehicle side, wirelessly download the test plans and scenarios formulated on the platform to the vehicle-mounted terminal for vehicle-in-the-loop testing.
The test vehicle side comprises: an autonomous driving algorithm platform mounted on the test vehicle, which includes: a perception fusion algorithm platform, used to fuse the point-cloud, image and ADC perception data sent by the data center and generate traffic-target position and category information; a decision-planning algorithm platform, which outputs the traffic-target trajectory according to the generated position and category information; and a motion control algorithm platform, which calculates the vehicle control signal according to the output trajectory;
and a real-vehicle actuator mounted on the test vehicle, which receives the vehicle control signal; the test vehicle performs acceleration, braking and steering actions and generates real-vehicle status and control data.
The test vehicle side and the digital twin cloud control platform side transmit data by wireless communication over a local area network, receiving the test-task configuration and completing the vehicle-in-the-loop test. The data center uses wired communication to convert the real-vehicle status and control data (including pose data and GPS data) obtained on the test-vehicle side into vehicle simulation model data and transmits it back to the autonomous driving simulation software, achieving closed-loop automated virtual-real interactive testing with "from virtual to real" sensor injection and "from real to virtual" control feedback.
The main functions of the invention are:
The invention can automatically generate large batches of simulation sample data with intelligent ground-truth calibration for various virtual experimental conditions and scenarios, efficiently, quickly and accurately producing the training data samples needed in the field of automotive autonomous driving;
The invention can integrate high-precision map road-collection equipment and, combining key scene-element expression with automatic reconstruction methods, quickly build three-dimensional reconstructed scenes that meet testing requirements of different dimensions, helping users increase the value of self-collected scene data;
The invention can integrate simulation test tools, V2X communication equipment, real test vehicles and other functional units to carry out digital twin autonomous driving tests based on both virtual simulation and real environments, and has the capability to carry out real-vehicle test verification of autonomous driving in complex virtual scenarios under limited resource conditions.
Embodiment 1
As shown in Figure 3, SIL (software-in-the-loop) autonomous driving algorithm testing and its visualization:
Step 1: Collect and produce OPENDRIVE data; obtain the data through the collection vehicle and produce it into a high-precision map; the simulation software simulates the traffic environment;
Step 2: The simulation machine imports the produced map into the simulation software and creates a test case;
Step 3: In the test case, virtual cameras, lidar and other sensors are added at the roadside to collect detection data within the scanning range; the sensors obtain map-related road-scene and traffic-scene information, which is passed to the ADAS/AD algorithm platform through ROS;
Step 4: According to the test requirements, the autonomous driving algorithm under test is connected. The perception fusion algorithm processes point clouds, images, ground truth and other information, performs perception fusion, computes perception results and generates whole-area perception results; the output position, category and other information is passed to the decision-planning algorithm via topic communication; the decision-planning algorithm computes and outputs the vehicle trajectory, which is passed via topic communication to the motion control algorithm; after processing and calculation by the motion control algorithm, the control signal is finally output;
Step 5: The vehicle model obtains, via ROS, the control signal output by the controller under test in response to the controller sensor signals; the vehicle model moves and returns vehicle status information such as GPS, position and attitude, which the simulation software obtains through the API interface to carry out algorithm testing at the next time step;
Step 6: The motion information of the simulated vehicle model at each timestamp serves as the basis for analyzing the performance of the controller under test; after the case ends, the algorithm is analyzed.
Embodiment 2
As shown in Figure 4, whole-vehicle autonomous driving virtual-real combined testing and parallel twinning:
Step 1: According to the test requirements, collect and produce OPENDRIVE data; obtain the data through the collection vehicle and produce it into a high-precision map; the simulation software simulates the traffic environment, calibrates the mapping between the real map and the virtual traffic environment, and calibrates the mapping between the vehicle under test and the simulated vehicle model;
Step 2: The simulation machine imports the produced map into the simulation software and creates a test case;
Step 3: The simulation software passes map-related road-scene and traffic-scene information to the ADAS/AD algorithm platform through ROS;
Step 4: According to the test requirements, the autonomous driving algorithm under test is connected. The perception fusion algorithm processes point clouds, images, ground truth and other information to perform perception fusion; the output position, category and other information is passed to the decision-planning algorithm via topic communication; the decision-planning algorithm computes and outputs the vehicle trajectory, which is passed via topic communication to the motion control algorithm; after processing and calculation by the motion control algorithm, the control signal is finally output;
Step 5: Send the control signal to the driverless bus on the proving-ground road; the industrial computer of the real vehicle receives the control signal and moves the vehicle, returning vehicle status information such as GPS, position and attitude, which the simulation software obtains through the API interface to carry out algorithm testing at the next time step;
Step 6: After receiving the vehicle status and other information, the simulation software loads the built-in high-precision road-network map and objective environment scene model, renders them through the cloud control platform, and presents the parallel twin of the proving ground in real time on the visualization interface.
Embodiment 3
As shown in Figure 5, whole-vehicle autonomous driving adaptability testing and twinning flow:
Step 1: According to the test requirements, collect and produce OPENDRIVE data adapted to handling-stability plaza scenarios; obtain the data through the collection vehicle and produce it into a high-precision map; the simulation software simulates the traffic environment, calibrates the mapping between the real map and the virtual traffic environment, and calibrates the mapping between the vehicle under test and the simulated vehicle model;
Step 2: The simulation machine imports the produced map into the simulation software and creates a test case;
Step 3: The simulation software passes map-related road-scene and traffic-scene information to the ADAS/AD algorithm platform through ROS;
Step 4: According to the test requirements, the autonomous driving algorithm under test is connected. The perception fusion algorithm processes point clouds, images, ground truth and other information to perform perception fusion; the output position, category and other information is passed to the decision-planning algorithm via topic communication; the decision-planning algorithm computes and outputs the vehicle trajectory, which is passed via topic communication to the motion control algorithm; after processing and calculation by the motion control algorithm, the control signal is finally output;
Step 5: Send the control signal to the driverless bus on the handling-stability plaza; the industrial computer of the real vehicle receives the control signal and moves the vehicle, returning vehicle status information such as GPS, position and attitude, which the simulation software obtains through the API interface to carry out algorithm testing at the next time step;
Step 6: After receiving the vehicle status and other information, the simulation software loads the built-in high-precision road-network map and objective environment scene model, renders them through the cloud control platform, and presents the twin of the real intersection on the visualization interface.
Embodiment 4
The present invention uses simulation software to generate the virtual scenarios required for development and testing, establishes multiple virtual sensor models in the simulation software, and injects the target-object information detected by the virtual sensors into the autonomous driving controller on the real vehicle through the bus for information fusion and control decision-making. The controller sends the resulting acceleration, braking and steering commands to the real-vehicle actuator, and the real vehicle feeds the whole-vehicle motion state from the vehicle bus back to the virtual scenario through the roadside communication system to complete vehicle position synchronization, thereby achieving closed-loop real-time simulation of the entire system.
To meet the system requirements of the present invention, we build a whole-vehicle-based digital twin test system that fully combines the advantages of XIL in-the-loop simulation testing and real-vehicle testing: the simulation software generates virtual test scenarios, and the virtual scenario information is injected into the autonomous driving algorithm controller, which in turn controls the motion state of the actual vehicle under test.
The virtual scenarios required for development and testing are configured and generated in the virtual simulation system, and the required virtual sensor models (cameras, millimeter-wave radar, lidar and ultrasonic radar are supported) are deployed in the simulation software. Target-object information detected by the virtual sensors in the simulation environment is injected over the wireless network into the autonomous driving algorithm platform for information fusion and decision control; the platform sends the resulting acceleration, braking, steering and other control signals to the real-vehicle actuator. While the real vehicle runs, the vehicle information collection system gathers real-time motion state information and feeds it back to the virtual scenario to synchronize vehicle positions in the virtual and real environments, thereby realizing closed-loop real-time simulation testing of the entire digital twin system. By providing standard sensor data-format interfaces and vehicle-status receiving interfaces, fairly general injection-test requirements are met.
The virtual simulation platform can generate a sensor's raw signal or target signal in the test-task management system according to the needs of the test scenario, and inject it into the autonomous driving prototype vehicle before the data perception and fusion module or before the decision-layer algorithm, respectively.
Specifically:
(1) Raw signal injection: the virtual simulation platform generates raw signals from different models such as the camera model, millimeter-wave radar model, lidar model and V2X model, and injects the various basic test data in the raw signals into the vehicle controller. The raw signal of the camera is the video stream, and the raw signal of the lidar is the laser point-cloud data;
(2) Target signal injection: based on an idealized ground-truth sensing model, the virtual simulation platform injects the following target signals detected within a certain range of the prototype vehicle into the vehicle controller through CAN or another suitable method;
(3) motor vehicles, pedestrians, non-motorized vehicles, traffic signs, traffic lights, lane lines, etc.;
(4) the position, orientation, bounding box, velocity, acceleration, angular velocity, etc. of the above targets.
To verify the effectiveness of the present invention, an intelligent vehicle test is carried out using a classic intersection model as the test scenario. The tests described herein are based on the autonomous driving test ground of Chang'an University, with the handling stability square as the physical basis, and constitute virtual-real combined intelligent vehicle tests of urban intersection scenarios. Taking a two-way eight-lane intersection in a district of Xi'an as the model, typical intersection traffic situations in urban traffic flow are extracted; test cases are set up for the start, stop, straight-through and turning scenarios at the intersection, and a scenario of pedestrians crossing the road is also considered and tested. The simulated road scenario is shown in Figure 7.
The selected test items and test scenarios are shown in Figure 6. The present invention uses the minimum safe distance at the warning moment and the TTC (time to collision) at the warning moment as the conditions for deciding whether to issue a warning. In application scenarios where the minimum inter-vehicle safe distance model indicates a possibility of collision during braking, the warning must be issued at a moment that leaves enough time or distance between the test vehicle and the target vehicle for the intelligent vehicle to react and brake to a safe speed, so that a collision is avoided.
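The two warning conditions just described can be sketched as follows: a warning fires when either the time to collision or the inter-vehicle gap falls below a threshold. The kinematic minimum-safe-distance model, the reaction time, the deceleration and the TTC threshold are common textbook choices assumed here for illustration, not the values used in the patent's tests.

```python
# Hedged sketch of the warning conditions: warn when TTC or the gap drops
# below threshold. All parameter values are assumed, not from the patent.

def ttc(gap_m, v_ego, v_lead):
    """Time to collision; infinite when the vehicles are not closing."""
    closing = v_ego - v_lead
    return float("inf") if closing <= 0 else gap_m / closing

def min_safe_distance(v_ego, v_lead, t_react=1.0, a_brake=6.0):
    # Distance to react at v_ego, then brake to the lead vehicle's speed,
    # with both vehicles assumed to decelerate at a_brake (m/s^2).
    return v_ego * t_react + (v_ego**2 - v_lead**2) / (2.0 * a_brake)

def should_warn(gap_m, v_ego, v_lead, ttc_threshold=2.5):
    return ttc(gap_m, v_ego, v_lead) < ttc_threshold or \
           gap_m < min_safe_distance(v_ego, v_lead)

print(should_warn(gap_m=15.0, v_ego=15.0, v_lead=5.0))   # closing fast
print(should_warn(gap_m=60.0, v_ego=15.0, v_lead=15.0))  # same speed
```

Combining the two criteria with a logical OR makes the warning conservative: a slow approach with a short gap and a fast approach with a long gap both trigger it.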
Based on the roads and handling stability square of Chang'an University's "closed-field autonomous driving test base" recognized by the Ministry of Transport, a parallel twin virtual scenario is constructed from an intersection in Xi'an and five test items are analyzed. With a real intelligent vehicle as the test object, after the virtual traffic-flow scenario is received, testing and decision-making are carried out in the real scenario, realizing virtual-real combined intelligent vehicle testing. This overcomes the shortcomings of existing virtual testing and field testing techniques and improves test efficiency.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement or the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (9)

  1. An autonomous driving test system based on a digital twin cloud control platform, characterized by comprising:
    autonomous driving simulation software, which constructs a virtual dynamic test scenario according to vehicle simulation model data and a real-vehicle test task;
    a data middle platform, which processes the real-vehicle test task and the virtual dynamic test scenario into point-cloud, image and ADC perception data, and converts real-vehicle state and control data into vehicle simulation model data sent to the autonomous driving simulation software;
    an autonomous driving algorithm platform, comprising:
    a perception fusion algorithm platform, configured to fuse the point-cloud, image and ADC perception data and generate traffic target position and class information;
    a decision and planning algorithm platform, which outputs a traffic target trajectory according to the traffic target position and class information;
    a motion control algorithm platform, which calculates a vehicle control signal according to the traffic target trajectory;
    a real-vehicle actuator, mounted on a test vehicle, wherein according to the vehicle control signal the test vehicle performs acceleration, braking and steering actions and generates the real-vehicle state and control data;
    and,
    a dynamic twin platform, which renders and displays a real-time dynamic twin effect according to the virtual dynamic test scenario data and real-time vehicle simulation model data.
  2. The autonomous driving test system based on a digital twin cloud control platform according to claim 1, characterized in that the construction of the virtual dynamic test scenario data comprises:
    the autonomous driving simulation software generates the virtual dynamic test scenario required for development and testing, establishes a plurality of virtual sensor models in the simulation software, forms the target object information detected by the virtual sensors into the virtual dynamic test scenario data, and injects it into the autonomous driving algorithm platform via a bus.
  3. The autonomous driving test system based on a digital twin cloud control platform according to claim 1, characterized in that the real-vehicle actuator and the data middle platform transmit data by wireless communication over a local area network;
    the data middle platform converts, by wired communication, the real-vehicle state and control data obtained at the test vehicle into vehicle simulation model data and returns them to the autonomous driving simulation software.
  4. The autonomous driving test system based on a digital twin cloud control platform according to claim 1, characterized in that the real-vehicle test task comprises real-vehicle collision avoidance warning and real-vehicle pedestrian avoidance.
  5. The autonomous driving test system based on a digital twin cloud control platform according to claim 1, characterized in that the real-vehicle state and control data comprise pose data and GPS data.
  6. An autonomous driving test method based on a digital twin cloud control platform, characterized by comprising the following steps:
    according to a real-vehicle test task, collecting and producing OPENDRIVE data to obtain a high-precision map, simulating the traffic environment with simulation software, and creating a test case; calibrating the mapping between the real map and the virtual traffic environment, and the mapping between the vehicle under test and the simulated vehicle model;
    collecting road-scene and traffic-scene information related to the map through roadside sensors, and processing it into point-cloud, image and ADC perception data;
    according to the real-vehicle test task, performing perception fusion on the point-cloud, image and ADC perception data to obtain perception results, generating a global perception result, and generating traffic target position and class information;
    planning a traffic target trajectory according to the generated traffic target position and class information, and outputting a vehicle control signal after processing and computation by a motion control algorithm;
    controlling the test vehicle through the real-vehicle actuator according to the vehicle control signal to carry out the test, and obtaining vehicle state information;
    after receiving the vehicle state information, loading the built-in high-precision road-network map and objective environment scene model, rendering them, and presenting the real-time dynamic twin effect in real time through a visualization interface.
  7. The autonomous driving test method based on a digital twin cloud control platform according to claim 6, characterized in that controlling the test vehicle through the real-vehicle actuator according to the vehicle control signal to carry out the test comprises the following steps:
    sending the vehicle control signal to the test vehicle located on the test-field road, and obtaining vehicle state information;
    after receiving the vehicle state information, loading the built-in high-precision road-network map and objective environment scene model, rendering them, and presenting a parallel twin of the test field in real time through the visualization interface.
  8. The autonomous driving test method based on a digital twin cloud control platform according to claim 6, characterized in that controlling the test vehicle through the real-vehicle actuator according to the vehicle control signal to carry out the test comprises the following steps:
    the OPENDRIVE data being OPENDRIVE data adapted to the scenarios of the handling stability square, sending the vehicle control signal to the test vehicle located at the handling stability square, and obtaining vehicle state information;
    after receiving the vehicle state information, loading the built-in high-precision road-network map and objective environment scene model, rendering them, and presenting a twin of the real intersection in real time through the visualization interface.
  9. The autonomous driving test method based on a digital twin cloud control platform according to claim 6, characterized by further comprising:
    controlling the motion of the simulated vehicle model through the vehicle control signal, and obtaining vehicle state information;
    the motion information of the simulated vehicle model at each timestamp serving as the basis for analyzing the performance test of the controller under test, the algorithm being analyzed after the case ends.
PCT/CN2022/129299 2022-04-29 2022-11-02 一种基于数字孪生云控平台的自动驾驶测试系统和方法 WO2023207016A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210466439.8 2022-04-29
CN202210466439.8A CN114879631A (zh) 2022-04-29 2022-04-29 一种基于数字孪生云控平台的自动驾驶测试系统和方法

Publications (1)

Publication Number Publication Date
WO2023207016A1 true WO2023207016A1 (zh) 2023-11-02

Family

ID=82673567

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129299 WO2023207016A1 (zh) 2022-04-29 2022-11-02 一种基于数字孪生云控平台的自动驾驶测试***和方法

Country Status (2)

Country Link
CN (1) CN114879631A (zh)
WO (1) WO2023207016A1 (zh)


Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114879631A (zh) * 2022-04-29 2022-08-09 长安大学 一种基于数字孪生云控平台的自动驾驶测试系统和方法
CN115061897B (zh) * 2022-08-16 2022-12-09 北京京深深向科技有限公司 用于车联网的数据生成方法、装置及电子设备、存储介质
CN115230672B (zh) * 2022-08-30 2023-10-20 重庆长安汽车股份有限公司 基于数字孪生的防抱死刹车测试方法、装置、设备及存储介质
CN115782782A (zh) * 2022-10-10 2023-03-14 中汽创智科技有限公司 智能网联汽车动力学行为的镜像数字孪生装置及方法
CN115933586A (zh) * 2022-10-27 2023-04-07 中汽创智科技有限公司 一种车辆测试方法、装置、电子设备及存储介质
CN115659701B (zh) * 2022-12-09 2023-03-10 中汽数据(天津)有限公司 车路协同v2x应用场景的验证方法、设备和存储介质
CN115840404B (zh) * 2022-12-21 2023-11-03 浙江大学 一种基于自动驾驶专用路网和数字孪生地图的云控自动驾驶系统
CN116081488B (zh) * 2022-12-21 2023-07-04 中国矿业大学 一种场景自适应单轨吊运输机器人无人驾驶控制方法
CN116434539B (zh) * 2023-02-28 2024-03-15 东南大学 基于数字孪生的极端雨水天气下高速公路车速预警方法
CN116089313A (zh) * 2023-03-07 2023-05-09 北京路凯智行科技有限公司 车云交互功能测试系统、方法及存储介质
CN116149970B (zh) * 2023-04-06 2023-06-23 苏州简诺科技有限公司 基于数字孪生技术的软件性能效率测试与验证方法及装置
CN116306029B (zh) * 2023-05-16 2023-10-27 南京航空航天大学 基于Carla与ROS的自动驾驶仿真测试方法
CN117094182B (zh) * 2023-10-19 2024-03-12 中汽研(天津)汽车工程研究院有限公司 V2v交通场景构建方法及v2x虚实融合测试系统
CN117809498A (zh) * 2024-01-09 2024-04-02 北京千乘科技有限公司 虚实交互多维孪生的投影路网系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011203663A (ja) * 2010-03-26 2011-10-13 Mitsubishi Precision Co Ltd 自動移動制御機能を備えた模擬運転装置及び自動移動制御方法
CN112015164A (zh) * 2020-08-24 2020-12-01 苏州星越智能科技有限公司 基于数字孪生的智能网联汽车复杂测试场景实现系统
CN112925291A (zh) * 2021-01-22 2021-06-08 大连理工大学 一种基于相机暗箱的数字孪生自动驾驶测试方法
CN112924185A (zh) * 2021-01-22 2021-06-08 大连理工大学 一种基于数字孪生虚实交互技术的人机共驾测试方法
CN113642242A (zh) * 2021-08-17 2021-11-12 上海电气集团智能交通科技有限公司 一种基于数字孪生的智能公交车交通仿真平台
CN113777952A (zh) * 2021-08-19 2021-12-10 北京航空航天大学 一种实车与虚拟车交互映射的自动驾驶仿真测试方法
CN114387844A (zh) * 2022-01-17 2022-04-22 北京国家新能源汽车技术创新中心有限公司 一种基于自动驾驶虚实融合测试技术的教学平台
CN114879631A (zh) * 2022-04-29 2022-08-09 长安大学 一种基于数字孪生云控平台的自动驾驶测试系统和方法


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117238038A (zh) * 2023-11-15 2023-12-15 山东海博科技信息系统股份有限公司 一种基于数字孪生技术的数据监测系统
CN117238038B (zh) * 2023-11-15 2024-01-26 山东海博科技信息系统股份有限公司 一种基于数字孪生技术的数据监测系统
CN117871117A (zh) * 2024-01-05 2024-04-12 武汉万曦智能科技有限公司 场车运行状态的测试方法及***
CN117871117B (zh) * 2024-01-05 2024-06-07 武汉万曦智能科技有限公司 场车运行状态的测试方法及***
CN117765796A (zh) * 2024-02-22 2024-03-26 深圳风向标教育资源股份有限公司 自动驾驶教学***、方法及装置
CN117765796B (zh) * 2024-02-22 2024-05-10 深圳风向标教育资源股份有限公司 自动驾驶教学***、方法及装置
CN117828899A (zh) * 2024-03-04 2024-04-05 沈阳展播智慧科技有限公司 结合三维车身建模的道路环境感知方法及装置
CN117828899B (zh) * 2024-03-04 2024-05-07 沈阳展播智慧科技有限公司 结合三维车身建模的道路环境感知方法及装置
CN118095018A (zh) * 2024-04-25 2024-05-28 陕西重型汽车有限公司 一种考虑振动频率的重型汽车驾驶室疲劳耐久分析方法

Also Published As

Publication number Publication date
CN114879631A (zh) 2022-08-09

Similar Documents

Publication Publication Date Title
WO2023207016A1 (zh) 一种基于数字孪生云控平台的自动驾驶测试系统和方法
Chen et al. A novel integrated simulation and testing platform for self-driving cars with hardware in the loop
CN113032285B (zh) 一种高精地图测试方法、装置、电子设备及存储介质
CN111859618B (zh) 多端在环的虚实结合交通综合场景仿真测试系统及方法
CN111897305B (zh) 一种基于自动驾驶的数据处理方法、装置、设备及介质
CN109211575B (zh) 无人驾驶汽车及其场地测试方法、装置及可读介质
Zhou et al. A framework for virtual testing of ADAS
CN108681264A (zh) 一种智能车辆数字化仿真测试装置
CN107403038A (zh) 一种智能汽车虚拟快速测试方法
Galko et al. Vehicle-Hardware-In-The-Loop system for ADAS prototyping and validation
CN113419518B (zh) 一种基于vts的vil测试平台
CN111223354A (zh) 无人小车、基于ar和ai技术的无人车实训平台及方法
WO2023028858A1 (zh) 一种测试方法和系统
Wang et al. Simulation and application of cooperative driving sense systems using prescan software
CN111240224A (zh) 一种车辆自动驾驶技术的多功能仿真模拟系统
Guvenc et al. Simulation Environment for Safety Assessment of CEAV Deployment in Linden
Cantas et al. Customized co-simulation environment for autonomous driving algorithm development and evaluation
Bai et al. Cyber mobility mirror for enabling cooperative driving automation in mixed traffic: A co-simulation platform
CN114387844A (zh) 一种基于自动驾驶虚实融合测试技术的教学平台
Pechinger et al. Benefit of smart infrastructure on urban automated driving-using an av testing framework
Yang et al. CAVTest: A closed connected and automated vehicles test field of Chang’an university in China
Zhao et al. Corroborative evaluation of the real-world energy saving potentials of inforich eco-autonomous driving (iread) system
Miura et al. Converting driving scenario framework for testing self-driving systems
Song et al. Automatic Driving Joint Simulation Technology and Platform Design
Rossi et al. Vehicle hardware-in-the-loop system for ADAS virtual testing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22939833

Country of ref document: EP

Kind code of ref document: A1