CN115454861A - Automatic driving simulation scene construction method and device

Info

Publication number
CN115454861A
Authority
CN
China
Prior art keywords
node
action
trigger
vehicle
preset
Prior art date
Legal status
Pending
Application number
CN202211139477.9A
Other languages
Chinese (zh)
Inventor
周辰霖 (Zhou Chenlin)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211139477.9A
Publication of CN115454861A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; error correction; monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3668 - Software testing
    • G06F 11/3672 - Test management
    • G06F 11/3684 - Test management for test design, e.g. generating new test cases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/448 - Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4498 - Finite state machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a method and a device for constructing an automatic driving simulation scene, and relates to the field of automatic driving, in particular to the technical field of automatic driving scene simulation. The implementation scheme is as follows: obtaining a first vehicle and vehicle attributes of the first vehicle, the vehicle attributes indicating at least a motion state of the first vehicle; obtaining at least one traffic element and an element attribute and a state machine of each of the at least one traffic element, wherein the element attribute indicates at least the position of the corresponding traffic element, and the state machine instructs the corresponding traffic element to execute a corresponding action under a preset trigger condition; and constructing a simulation scene based on the vehicle attributes and the element attributes and state machines of each of the at least one traffic element.

Description

Automatic driving simulation scene construction method and device
Technical Field
The present disclosure relates to the field of autopilot technology, and in particular, to a method and an apparatus for constructing an autopilot simulation scenario, an electronic device, a computer-readable storage medium, and a computer program product.
Background
As the most important data asset in an autopilot simulation system, autopilot test scenario cases provide the most direct input to simulation testing. Research shows that an autonomous vehicle needs to accumulate test mileage on the order of hundreds of millions of kilometers to prove the safety of its system. The most efficient way to accumulate this mileage is to run tests in a simulation system in large concurrent batches, which in turn requires the simulation system to have hundreds of millions of scenario test cases.
How to improve the efficiency of constructing automatic driving simulation scenes is therefore a long-standing concern of researchers in the autonomous driving industry.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The disclosure provides an automatic driving simulation scene construction method, an automatic driving simulation scene construction device, an electronic device, a computer readable storage medium and a computer program product.
According to an aspect of the present disclosure, there is provided an automatic driving simulation scene construction method, including: obtaining a first vehicle and vehicle properties of the first vehicle, the vehicle properties being indicative of at least a motion state of the first vehicle; obtaining at least one traffic element and an element attribute and a state machine of each of the at least one traffic element, the element attribute indicating at least a position of the corresponding traffic element, the state machine instructing the corresponding traffic element to perform a corresponding action under a preset trigger condition; and constructing a simulation scene based on the vehicle attributes and the element attributes and state machines of each of the at least one traffic element.
According to another aspect of the present disclosure, there is provided an automatic driving simulation scene constructing apparatus including: a first vehicle obtaining unit configured to obtain a first vehicle and a vehicle property of the first vehicle, the vehicle property being indicative of at least a motion state of the first vehicle; the traffic element acquisition unit is configured to obtain at least one traffic element and an element attribute and a state machine of each of the at least one traffic element, wherein the element attribute at least indicates a position of the corresponding traffic element, and the state machine indicates that the corresponding traffic element executes a corresponding action under a preset trigger condition; and a construction unit configured to construct a simulation scenario based on the vehicle attributes and the element attributes and state machines of each of the at least one traffic element.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program realizes the method according to embodiments of the present disclosure when executed by a processor.
According to one or more embodiments of the present disclosure, the efficiency of constructing an automatic driving simulation scene may be improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of an autopilot simulation scenario construction method according to an embodiment of the disclosure;
fig. 3A and 3B illustrate schematic framework diagrams of a traffic element running its state machine during scene construction in an automatic driving simulation scene construction method according to an embodiment of the present disclosure;
FIG. 4 shows a flow diagram of a process of obtaining at least one traffic element and element attributes and state machines for each of the at least one traffic element in an autopilot simulation scenario construction method according to an embodiment of the disclosure;
FIG. 5 shows a flow diagram of a process of determining a first traffic element from a set of preset traffic elements in an autopilot simulation scenario construction method according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of an interface displaying a preset set of traffic elements in an autopilot simulation scenario construction method according to an embodiment of the present disclosure;
fig. 7 is a flowchart illustrating a process of obtaining a node combination formed between at least one trigger node in a set of trigger nodes corresponding to a first traffic element and at least one action node in a set of action nodes in an automatic driving simulation scene construction method according to an embodiment of the disclosure;
FIG. 8 shows a flow diagram of an autopilot simulation scenario construction method according to an embodiment of the disclosure;
FIG. 9 shows a flow diagram of an autopilot simulation scenario construction method according to an embodiment of the disclosure;
FIG. 10 shows a flowchart of a process of generalizing a built scene in an autopilot simulation scene building method according to an embodiment of the present disclosure;
FIG. 11 shows a block diagram of an autopilot simulation scenario construction apparatus in accordance with an embodiment of the present disclosure;
FIG. 12 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", and the like to describe various elements is not intended to limit the positional relationship, the temporal relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes a motor vehicle 110, a server 120, and one or more communication networks 130 coupling the motor vehicle 110 to the server 120.
In embodiments of the present disclosure, motor vehicle 110 may include a computing device and/or be configured to perform a method in accordance with embodiments of the present disclosure.
The server 120 may run one or more services or software applications that enable the autopilot simulation scenario construction method. In some embodiments, the server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user of motor vehicle 110 may, in turn, utilize one or more client applications to interact with server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-range servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, server 120 can include one or more applications to analyze and consolidate data feeds and/or event updates received from motor vehicle 110. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of motor vehicle 110.
Network 130 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, the one or more networks 130 may be a satellite communication network, a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the Internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (including, for example, Bluetooth and WiFi), and/or any combination of these and other networks.
The system 100 may also include one or more databases 150. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 150 may be used to store information such as audio files and video files. The data store 150 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 150 may be of different types. In certain embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to the command.
In some embodiments, one or more of the databases 150 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
Motor vehicle 110 may include sensors 111 for sensing the surrounding environment. The sensors 111 may include one or more of the following: visual cameras, infrared cameras, ultrasonic sensors, millimeter wave radar, and laser radar (LiDAR). Different sensors may provide different detection accuracies and ranges. Cameras may be mounted in front of, behind, or elsewhere on the vehicle. Visual cameras may capture conditions inside and outside the vehicle in real time and present them to the driver and/or passengers. In addition, by analyzing the pictures captured by the visual cameras, information such as traffic light indications, intersection conditions, and the running state of other vehicles can be acquired. Infrared cameras can capture objects under night-vision conditions. Ultrasonic sensors can be arranged around the vehicle to measure the distance between objects outside the vehicle and the vehicle, taking advantage of the strong directionality of ultrasonic waves. Millimeter wave radar may be installed in front of, behind, or elsewhere on the vehicle to measure the distance of objects from the vehicle using the characteristics of electromagnetic waves. Lidar may be mounted in front of, behind, or elsewhere on the vehicle to detect object edges and shape information for object identification and tracking. Radar apparatus can also measure speed changes of the vehicle and moving objects by means of the Doppler effect.
Motor vehicle 110 may also include a communication device 112. The communication device 112 may include a satellite positioning module capable of receiving satellite positioning signals (e.g., BeiDou, GPS, GLONASS, and GALILEO) from the satellites 141 and generating coordinates based on these signals. The communication device 112 may also comprise modules for communicating with a mobile communication base station 142; the mobile communication network may implement any suitable communication technology, such as current or evolving wireless communication technologies (e.g., 5G) like GSM/GPRS, CDMA, LTE, etc. The communication device 112 may also have a Vehicle-to-Everything (V2X) module configured to enable, for example, Vehicle-to-Vehicle (V2V) communication with other vehicles 143 and Vehicle-to-Infrastructure (V2I) communication with infrastructure 144. Further, the communication device 112 may also have a module configured to communicate with a user terminal 145 (including but not limited to a smartphone, tablet, or wearable device such as a watch), for example, via a wireless local area network using IEEE 802.11 standards or Bluetooth. Motor vehicle 110 may also access server 120 via network 130 using communication device 112.
Motor vehicle 110 may also include a control device 113. The control device 113 may include a processor, such as a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU), or other special purpose processor, etc., in communication with various types of computer-readable storage devices or media. The control device 113 may include an autopilot system for automatically controlling various actuators in the vehicle. The autopilot system is configured to control a powertrain, steering system, and braking system, etc., of a motor vehicle 110 (not shown) via a plurality of actuators in response to inputs from a plurality of sensors 111 or other input devices to control acceleration, steering, and braking, respectively, without human intervention or limited human intervention. Part of the processing functions of the control device 113 may be implemented by cloud computing. For example, some processing may be performed using an onboard processor while other processing may be performed using the computing resources in the cloud. The control device 113 may be configured to perform a method according to the present disclosure. Furthermore, the control apparatus 113 may be implemented as one example of a computing device on the motor vehicle side (client) according to the present disclosure.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with this disclosure.
In the related art, automatic driving simulation scenes are constructed by writing code in a text editor. For example, when editing a scene based on OpenSCENARIO, an XML file is edited through a simulation tool and a content editor to construct the simulation scene. However, because the amount of text to edit is large and the editing is tedious, the efficiency of constructing simulation scenes is often low.
According to an aspect of the present disclosure, there is provided an automatic driving simulation scene construction method. Referring to fig. 2, an autopilot simulation scenario construction method according to some embodiments of the present disclosure includes:
step S210: obtaining a first vehicle and vehicle properties of the first vehicle, the vehicle properties being indicative of at least a state of motion of the first vehicle;
step S220: obtaining at least one traffic element and an element attribute and a state machine of each of the at least one traffic element, the element attribute indicating at least a position of the corresponding traffic element, the state machine instructing the corresponding traffic element to perform a corresponding action under a preset trigger condition; and
step S230: constructing a simulation scenario based on the vehicle attributes and the element attributes and state machines of each of the at least one traffic element.
By obtaining the vehicle attributes of the first vehicle and the element attributes and state machine of each of the at least one traffic element, the interaction design between the first vehicle and the traffic elements is obtained, and the simulation scene is constructed from that design. No text editing is required in the whole process, which improves the efficiency of constructing the simulation scene.
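For illustration, the inputs of steps S210 to S230 can be sketched as plain data structures. The following C++ sketch is an assumption for exposition only: every type, field, and name is invented here based on the description above and is not the data model of the disclosure.

```cpp
#include <string>
#include <utility>
#include <vector>

// Hedged sketch only: all names and fields below are assumptions drawn
// from the description of steps S210-S230, not the disclosure's code.

// Step S210: vehicle attributes indicate at least the motion state of
// the first (host) vehicle.
struct VehicleAttributes {
    std::pair<double, double> start_position;
    std::pair<double, double> end_position;
    std::vector<std::pair<double, double>> route_positions;
    double initial_speed = 0.0;
    double length = 0.0, width = 0.0, height = 0.0;  // optional size
};

// Step S220: element attributes indicate at least the element's position.
struct ElementAttributes {
    double x = 0.0, y = 0.0;                         // 2-D coordinates
    double length = 0.0, width = 0.0, height = 0.0;  // optional size
};

struct StateMachine;  // sketched after Figs. 3A/3B below

// A traffic element pairs its attributes with a state machine that makes
// it perform a corresponding action under a preset trigger condition.
struct TrafficElement {
    std::string type;  // e.g. "vehicle", "pedestrian", "traffic cone"
    ElementAttributes attributes;
    StateMachine* state_machine = nullptr;
};

// Step S230: the simulation scene is constructed from the host vehicle's
// attributes and the attributes and state machines of all elements.
struct SimulationScene {
    VehicleAttributes host;
    std::vector<TrafficElement> elements;
};
```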
In some embodiments, the first vehicle is the host vehicle of the simulation scene; the scene is centered on the host vehicle and simulates the interactions between the host vehicle and other traffic elements.
In an embodiment according to the present disclosure, the vehicle properties of the first vehicle indicate at least the motion state of the first vehicle, which may include, for example, a start position, route positions, an end position, or an initial speed of the first vehicle.
In some embodiments, the vehicle attribute of the first vehicle further includes a size of the vehicle, and the like, and is not limited herein.
In some embodiments, the at least one traffic element may be any object capable of interacting with the first vehicle, such as another vehicle, a pedestrian, a bicycle, a parking bar, a traffic cone, a following traffic flow, or an unknown obstacle.
In an embodiment in accordance with the present disclosure, the element attribute of each traffic element indicates at least a location of the traffic element. For example, the element attributes may include the coordinates of the traffic element (horizontal and vertical coordinates in a two-dimensional scene, spatial coordinates in a three-dimensional scene). The element attributes may also include the size of the element. For example, when the traffic element is a vehicle, the element attributes may also include the length, width, and height of the vehicle.
In an embodiment according to the present disclosure, each traffic element includes a state machine that instructs the traffic element to perform a corresponding action under a preset trigger condition.
In some embodiments, the state machine may include a preset free state in which the traffic element performs a corresponding action according to a preset condition. For example, the free state of a following traffic flow is to travel at a preset distance from the first vehicle.
Referring to fig. 3A and 3B, schematic framework diagrams of a traffic element running its state machine during scene construction in an autopilot simulation scene construction method according to some embodiments of the present disclosure are shown. Fig. 3A shows the process by which the state machine of a traffic element passes from the control state into the free state, driven by the driving engine from scene start to scene end, thereby completing the scene. Fig. 3B shows the process by which a traffic element in the control state, once fired by a trigger, enters an action node and completes its action, thereby completing the control state.
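A hedged sketch of the flow in figs. 3A and 3B follows; the trigger and action representations are assumptions for illustration, not the disclosure's implementation.

```cpp
#include <functional>
#include <vector>

// Hedged sketch of Figs. 3A/3B: a traffic element starts in the control
// state, where triggers fire actions; once the control state completes,
// it enters the free state until the scene ends. Names are assumptions.
enum class ElementState { kControl, kFree };

struct ControlRule {
    std::function<bool(double)> trigger;  // preset trigger condition
    std::function<void()> action;         // corresponding action node
    bool fired = false;
};

struct StateMachine {
    ElementState state = ElementState::kControl;
    std::vector<ControlRule> rules;       // control state: trigger -> action
    std::function<void()> free_behavior;  // e.g. follow the host vehicle
                                          // at a preset distance

    // Called by the driving engine once per simulation tick (Fig. 3A).
    void Tick(double scene_time) {
        if (state == ElementState::kControl) {
            bool all_fired = true;
            for (ControlRule& rule : rules) {
                if (!rule.fired && rule.trigger(scene_time)) {
                    rule.action();  // enter the action node (Fig. 3B)
                    rule.fired = true;
                }
                all_fired = all_fired && rule.fired;
            }
            if (all_fired) state = ElementState::kFree;  // control done
        } else if (free_behavior) {
            free_behavior();  // free state: act per the preset condition
        }
    }
};
```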
In some embodiments, a static scene is built before the first vehicle is obtained. The static scene provides the environment in which the first vehicle and the at least one traffic element interact. For example, a static scene is obtained by obtaining a map. In some embodiments, the map may be a user-customized map; in other embodiments, the map may be obtained from a map storage device.
In some embodiments, an autopilot simulation scenario construction method according to the present disclosure includes: obtaining a high-precision map; and obtaining a static scene based on the high-precision map.
Because the high-precision map provides rich lane-level real road network data, the simulation scene constructed on the static scene constructed on the basis of the high-precision map is more real, and a more accurate automatic driving scene can be simulated.
In some embodiments, the first vehicle is obtained by obtaining a position of the first vehicle in a static scene.
For example, the first vehicle is obtained by receiving a position of the first vehicle set by a user in a static scene.
In some embodiments, the state machine comprises a control state. As shown in fig. 4, obtaining at least one traffic element and the element attributes and state machine of each of the at least one traffic element comprises:
step S410: determining a first traffic element from a preset traffic element set, wherein each traffic element in the preset traffic element set corresponds to a preset trigger node set and a preset action node set, each trigger node in the preset trigger node set has a corresponding trigger condition parameter set to be determined, and each preset action node in the preset action node set has a corresponding action parameter set to be determined;
step S420: obtaining a node combination formed by at least one trigger node in a preset trigger node set corresponding to the first traffic element and at least one action node in a preset action node set, wherein each trigger condition parameter in at least one trigger condition parameter set corresponding to the at least one trigger node and each action parameter in at least one action parameter set corresponding to the at least one action node are determined; and
step S430: and obtaining the control state of the first traffic element based on the node combination.
By obtaining a node combination formed between at least one trigger node and at least one action node from the preset trigger node set and preset action node set corresponding to a traffic element, the interaction design between the element and the first vehicle is obtained, further improving the user's convenience in constructing a simulation scene.
The preset trigger node set and the preset action node set can be preset in the system according to the requirements of traffic elements and simulation scenes.
In some embodiments, the preset trigger node set comprises at least one of: a scene time trigger node, a distance trigger node, a collision time trigger node, a following headway trigger node, and a collision detection trigger node; and
the preset action node set comprises at least one of: a static node, a tracking node, a lane-changing node, a road driving node, and a car-following node.
In some embodiments, after each trigger condition parameter in the trigger condition parameter set corresponding to a trigger node is determined, the function of the trigger node is implemented by calling an atomic function. For example (simplified sketches of several of these atomic functions are given in the code after this list):
For the scene time trigger node, the parameters include: time_threshold (the time threshold). Calculation: return true when the current scene time >= time_threshold, otherwise return false. The corresponding atomic function is: bool time_condition(double time_threshold).
For the distance trigger node, the parameters include: agent_id ("-1" represents the host vehicle; other values represent obstacles), master_agent_id (likewise), type (the distance calculation model, one of CENTER, POLYGON, and HEAD_REAR (from the head of the non-master element to the tail of the master element)), and axis (the coordinate system, one of EULER, HORIZONTAL, and VERTICAL). Calculation: <Euclidean distance> is the straight-line distance between master_agent and agent; <lateral distance> establishes a coordinate system at the master_agent position with the direction perpendicular to the master_agent orientation as the lateral axis (positive when the agent is on the right, negative on the left); <longitudinal distance> establishes a coordinate system at the master_agent position with the master_agent orientation as the longitudinal axis (positive when the agent is in front, negative behind). The atomic function is: double distance(int agent_id, int master_agent_id, string type, string axis).
For the remaining collision time trigger node, the parameters include: agent_id ("-1" represents the host vehicle; other values represent obstacles) and master_agent_id (likewise). The return value is the remaining time for master_agent to catch up with agent. Calculation: the straight-line distance between the two objects divided by the relative instantaneous speed along the distance direction. The atomic function is: double ttc(int agent_id, int master_agent_id).
For the following headway trigger node, the parameters include: agent_id ("-1" represents the host vehicle; other values represent obstacles) and master_agent_id (likewise). The return value is the remaining time for master_agent to reach the agent's current longitudinal position. Calculation: the projection of the distance between the two objects' bounding boxes onto the longitudinal axis of master_agent, divided by the instantaneous speed of master_agent (a coordinate system is established at the master_agent position, with the master_agent orientation as the longitudinal axis). The atomic function is: double thw(int agent_id, int master_agent_id).
For the collision detection trigger node, the parameters include: agent_id ("-1" represents the host vehicle; other values represent obstacles), master_agent_id (likewise), and check_duration (the collision detection period, i.e. whether a collision will occur within check_duration after the current time). The return value indicates whether a collision will occur (true means a collision will occur). Calculation: whether the head of agent_id collides with the bounding box of master_agent_id within check_duration. The atomic function is: bool check_collision(int agent_id, int master_agent_id, double check_duration).
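The sketches below illustrate how a few of these atomic functions could look, assuming point positions instead of bounding boxes and omitting the type/axis distance models; get_agent and current_scene_time are assumed engine-provided helpers, and none of this is the disclosure's actual code.

```cpp
#include <cmath>

// Hedged sketches of the atomic functions named above. The Agent struct
// and the helper declarations are assumptions; geometry is simplified to
// point positions, whereas the text describes bounding boxes.
struct Agent {
    double x, y;      // position
    double vx, vy;    // velocity
    double heading;   // orientation in radians
};

Agent* get_agent(int agent_id);  // assumed lookup; -1 is the host vehicle
double current_scene_time();     // assumed scene clock

// Scene time trigger: true once the scene clock reaches the threshold.
bool time_condition(double time_threshold) {
    return current_scene_time() >= time_threshold;
}

// Remaining collision time: straight-line distance divided by the
// relative instantaneous speed along the distance direction.
double ttc(int agent_id, int master_agent_id) {
    const Agent* a = get_agent(agent_id);
    const Agent* m = get_agent(master_agent_id);
    const double dx = a->x - m->x, dy = a->y - m->y;
    const double dist = std::hypot(dx, dy);
    if (dist == 0.0) return 0.0;
    // Closing speed: relative velocity projected onto the separation axis.
    const double closing =
        ((m->vx - a->vx) * dx + (m->vy - a->vy) * dy) / dist;
    return closing > 0.0 ? dist / closing : INFINITY;  // never catches up
}

// Following headway: longitudinal gap ahead of master_agent divided by
// master_agent's instantaneous speed.
double thw(int agent_id, int master_agent_id) {
    const Agent* a = get_agent(agent_id);
    const Agent* m = get_agent(master_agent_id);
    // Longitudinal projection in a frame aligned with m's orientation.
    const double gap = (a->x - m->x) * std::cos(m->heading) +
                       (a->y - m->y) * std::sin(m->heading);
    const double speed = std::hypot(m->vx, m->vy);
    return speed > 0.0 ? gap / speed : INFINITY;
}
```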
It is understood that the parameter settings and the atomic function settings of the trigger nodes are only exemplary, and those skilled in the art can understand that the corresponding parameters and the atomic functions of other trigger nodes can be set as required, and are not limited herein.
In some embodiments, a user-defined composite trigger node may also be obtained, for example, by nesting functions. In one example, the expression of a composite trigger node is: ttc(2, …) - ttc(-1, …) >= 3 && keep_true(ego.speed == 0, 1) && in_range(thw(2, -1), 2, 5).
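Read with the atomic-function sketches above, the quoted composite expression amounts to nesting predicates with boolean operators. In the hedged sketch below, keep_true and in_range are assumed helpers modeled on the quoted expression, and the agent ids in the ttc calls are illustrative placeholders because the source elides them.

```cpp
// Hedged sketch of a composite trigger node built by nesting the atomic
// functions sketched above; helper names follow the quoted expression.
bool in_range(double value, double lo, double hi) {
    return value >= lo && value <= hi;
}

// Assumed helper: true once `condition` has held continuously for
// `duration` seconds (declaration only; state tracking omitted).
bool keep_true(bool condition, double duration);

bool composite_trigger(double ego_speed) {
    // Placeholder id 3 stands in for the elided ttc() arguments:
    // obstacle 2's time-to-collision exceeds the host's by >= 3 s, the
    // host has been stopped for 1 s, and obstacle 2's headway relative
    // to the host is between 2 s and 5 s.
    return ttc(2, 3) - ttc(-1, 3) >= 3.0
        && keep_true(ego_speed == 0.0, 1.0)
        && in_range(thw(2, -1), 2.0, 5.0);
}
```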
In some embodiments, as shown in fig. 5, determining a first traffic element from a preset set of traffic elements includes:
step S510: displaying the preset traffic element set; and
step S520: in response to receiving a selection instruction for the first traffic element, displaying the first traffic element in a first preset area.
By displaying the traffic element set and determining the first traffic element based on the user's operation on a traffic element, visual interaction is achieved during simulation scene construction, further improving construction efficiency.
Referring to fig. 6, a schematic diagram of an interface displaying a set of preset traffic elements in an autopilot simulation scenario construction method according to some embodiments of the present disclosure is shown.
A preset traffic element set is displayed in an area 601 in fig. 6, where the preset traffic element set includes: a host vehicle, a vehicle, an unknown obstacle, a pedestrian, a bicycle, a parking bar, a traffic cone, and a following traffic flow. After receiving a selection instruction for the first traffic element 601a, the first traffic element 601a is displayed in the first preset area 602.
In some embodiments, the user selection instruction for the first traffic element may be an instruction issued by a user operation on the first traffic element of the displayed preset traffic elements.
In other embodiments, the user's operations on the first traffic element include: move, select, rotate, copy, paste, undo, redo, and the like. Through these operations, the user can visually edit the first traffic element in the simulation scene, giving the user a what-you-see-is-what-you-get experience.
In some embodiments, advanced assistance functions are provided for user operations, including, for example, a compass, distance measurement, full-view panning, perspective/orthographic camera switching, and lane and coordinate positioning. These further simplify user operations and improve the efficiency of scene construction.
According to some embodiments of the present disclosure, when the static scene is constructed based on a map, the method further includes: identifying traffic lights with the same meaning in the map area and synchronizing their data, which further simplifies the user's setup of traffic elements and streamlines the workflow.
In some embodiments, as shown in fig. 7, obtaining a node combination formed between at least one trigger node in the set of trigger nodes and at least one action node in the set of action nodes corresponding to the first traffic element includes:
step S710: displaying a preset trigger node set and a preset action node set corresponding to the first traffic element;
step S720: in response to receiving a selection instruction of a first trigger node in the preset trigger node set and a set value of each trigger condition parameter in the trigger condition parameter set corresponding to the first trigger node, displaying the first trigger node in a second preset area;
step S730: in response to receiving a selection instruction of a first action node in the preset action node set and a set value of each action parameter in the action parameter set corresponding to the first action node, displaying the first action node in the second preset area;
step S740: in response to receiving a first operation for the first trigger node and the first action node, displaying a connection line between the first trigger node and the first action node in the second preset area to combine the first trigger node with the first action node; and
step S750: obtaining the node combination based on a combination of the first trigger node and the first action node.
The preset trigger node set and preset action node set corresponding to the traffic element are displayed. In response to the user's selection of a first trigger node from the preset trigger node set, the first trigger node is displayed in the second preset area; in response to the user's selection of a first action node from the preset action node set, the first action node is displayed in the second preset area. In response to a first operation on the first trigger node and the first action node, a connection line between them is displayed in the second preset area, combining the first trigger node with the first action node, and the node combination is obtained from this combination. In this way the interaction design between the first traffic element and the first vehicle is achieved through visual programming: the user completes the interaction behavior design of a traffic element simply by entering parameters and drawing connection lines, further improving the efficiency of simulation scene construction.
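As a hedged sketch of what steps S710 to S750 produce on the editor side, a drawn connection line can simply pair one parameterized trigger node with one parameterized action node; all type and field names below are assumptions.

```cpp
#include <map>
#include <string>
#include <vector>

// Hedged editor-side model of steps S710-S750; names are assumptions.
struct TriggerNodeInstance {
    std::string type;                      // e.g. "distance_trigger"
    std::map<std::string, double> params;  // set trigger condition params
};

struct ActionNodeInstance {
    std::string type;                      // e.g. "lane_change"
    std::map<std::string, double> params;  // set action parameters
};

// Step S740: drawing a connection line between a trigger node and an
// action node in the second preset area yields one node combination.
struct NodeCombination {
    TriggerNodeInstance trigger;
    ActionNodeInstance action;
};

// Step S430: the control state of a traffic element is obtained from
// the node combinations wired up in the editor.
using ControlState = std::vector<NodeCombination>;
```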
With continued reference to FIG. 6, the combination of the first trigger node and the first action node is displayed in the second preset area 603. In some embodiments, after obtaining the vehicle attributes and the element attributes and state machines for each of the at least one traffic element, a simulation scene is newly created.
Referring to FIG. 8, a schematic diagram of creating a new simulation scenario with the autopilot simulation scenario construction method according to some embodiments of the present disclosure is shown.
As shown in fig. 8, in step S810, the user places the host vehicle in the scene;
in step S820, the user sets the host vehicle attributes and adds a destination and route points;
in step S830, the user places the other traffic elements;
in step S840, the user sets the state machines of the other traffic elements, including editing the control state (step S841a, setting trigger nodes, and step S841b, setting action nodes) and setting the free state (step S842);
when the user needs to set multiple traffic elements, after each traffic element is placed and its state machine is set, the next traffic element is placed and its state machine is set; finally, the construction of the scene is completed by saving the scene.
In some embodiments, as shown in fig. 8, the created scene may also be previewed before saving (step S850), and the attributes of the traffic elements and the parameters of the trigger nodes and action nodes may be adjusted based on the preview.
In some embodiments, the method for constructing an automatic driving scenario according to the present disclosure further includes: obtaining road acquisition data collected by an autonomous vehicle, the road acquisition data comprising motion state data of the autonomous vehicle and environmental data perceived by the autonomous vehicle. In these embodiments, obtaining a first vehicle and vehicle attributes of the first vehicle comprises:
determining the autonomous vehicle as the first vehicle and obtaining the vehicle attributes of the first vehicle based on the motion state data; and obtaining at least one traffic element and the element attributes and state machine of each of the at least one traffic element comprises:
based on the environmental data, element attributes and state machines of the at least one traffic element and each of the at least one traffic element are obtained.
By obtaining the first vehicle and its vehicle attributes, and the at least one traffic element and the element attributes and state machine of each traffic element, from road acquisition data, the simulation scene is constructed from real data. Generalizing more simulation scenes from real road acquisition data avoids the problem of unrealistic manually set parameters.
In some embodiments, the constructed scene may also be generalized, and the simulation scenes that meet expectations are screened out of the generalization results.
Referring to fig. 9, the automatic driving simulation scene construction method according to some embodiments of the present disclosure further includes:
step S910: acquiring a trigger generalization parameter corresponding to each trigger condition parameter in the at least one trigger condition parameter set and an action generalization parameter corresponding to each action parameter in the at least one action parameter set, wherein the trigger generalization parameter indicates a range corresponding to the corresponding trigger condition parameter, and the action generalization parameter indicates a range corresponding to the corresponding action parameter;
step S920: obtaining a generalization limiting condition corresponding to the at least one trigger condition parameter set and the at least one action parameter set; and
step S930: and carrying out scene generalization based on the trigger generalization parameter corresponding to each trigger condition parameter in the at least one trigger condition parameter set, the action generalization parameter corresponding to each action parameter in the at least one action parameter set, the generalization limiting condition and the simulation scene so as to obtain a target scene.
By obtaining the trigger generalization parameter corresponding to each trigger condition parameter, the action generalization parameter corresponding to each action parameter, and the generalization limiting condition, parameter-level scene generalization can be realized, enabling multi-dimensional scene generalization and improving generalization efficiency in the scene construction process.
In the related art, the expected interactive behaviors of the elements in a scene are defined under fixed parameters, which is difficult to implement and inefficient. According to embodiments of the present disclosure, the trigger condition parameters and action parameters are defined over wider ranges by the generalization parameters, scenes are generalized across those ranges, and the scenes meeting expectations (the generalization limiting conditions) are screened out, greatly improving the efficiency of obtaining the desired simulation scenes.
In some embodiments, the user may specify the trigger condition generalization parameters and the action generalization parameters in an equidistant manner or an enumerated manner. For example, in the equidistant manner, a minimum value, a maximum value, and a step size are set as the trigger condition generalization parameter.
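A hedged sketch of the two input styles, with assumed names: the equidistant form expands {min, max, step} into a concrete value list, while the enumerated form takes the values directly.

```cpp
#include <utility>
#include <vector>

// Hedged sketch of a generalization parameter; names are assumptions.
struct GeneralizationParam {
    std::vector<double> values;  // the range a parameter may take
};

// Equidistant manner: expand minimum, maximum, and step size.
GeneralizationParam equidistant(double min, double max, double step) {
    GeneralizationParam p;
    const int steps = static_cast<int>((max - min) / step);
    for (int i = 0; i <= steps; ++i) p.values.push_back(min + i * step);
    return p;
}

// Enumerated manner: the user lists the values directly.
GeneralizationParam enumerated(std::vector<double> values) {
    return GeneralizationParam{std::move(values)};
}
```

For example, equidistant(10.0, 50.0, 5.0) would generalize a hypothetical distance trigger threshold from 10 m to 50 m in 5 m steps.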
Referring to FIG. 10, a flow diagram of a process for generalizing a built scene in an autopilot simulation scene building method according to some embodiments of the present disclosure is shown.
As shown in fig. 10, first, in step S1001, a generalization parameter (including an action generalization parameter and a trigger generalization parameter) is obtained.
Next, in step S1002, a parameter table is generated based on the generalization parameters; the parameter table holds a plurality of parameter combinations composed of different trigger condition parameters and different action parameters.
Next, in step S1003, a generalization limiting condition is obtained, and the parameter combinations in the parameter table generated in step S1002 are cyclically input and filtered one by one against the generalization limiting condition.
Then, in step S1004, expression analysis is performed on the filtered parameter combinations; if the analysis fails, the combination is added to the constraint combinations in step S1005 and counted; if the analysis succeeds, it is added to the restriction table in step S1006;
next, in step S1007, the parameter combinations in the restriction table are substituted into the scene created by the user;
next, in step S1008, the created scene is copied to generalize the scene;
then, in step S1009, the generalized scenes are submitted for concurrent testing. When a scene test succeeds, the scene is added to the successful scene set and counted in step S1010; when a scene test fails, the scene is retested in step S1011; if the retest succeeds, the scene is added to the successful scene set and counted in step S1010; if the retest fails, it is added to the failed scene set and counted in step S1012.
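Steps S1002 and S1003 amount to taking the cartesian product of all generalized parameters and filtering it with the generalization limiting condition. A hedged sketch, reusing the GeneralizationParam type assumed above:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hedged sketch of steps S1002-S1003; names are assumptions.
using ParamCombo = std::vector<double>;  // one row of the parameter table

// Step S1002: the parameter table is the cartesian product of the value
// lists of all generalization parameters.
std::vector<ParamCombo> parameter_table(
        const std::vector<GeneralizationParam>& params) {
    std::vector<ParamCombo> table{{}};  // start with one empty combination
    for (const GeneralizationParam& p : params) {
        std::vector<ParamCombo> next;
        for (const ParamCombo& row : table)
            for (double v : p.values) {
                ParamCombo extended = row;
                extended.push_back(v);
                next.push_back(std::move(extended));
            }
        table = std::move(next);
    }
    return table;
}

// Step S1003: cyclically filter the combinations against the
// generalization limiting condition.
std::vector<ParamCombo> filter_combinations(
        const std::vector<ParamCombo>& table,
        const std::function<bool(const ParamCombo&)>& limiting_condition) {
    std::vector<ParamCombo> kept;
    for (const ParamCombo& combo : table)
        if (limiting_condition(combo)) kept.push_back(combo);
    return kept;
}
```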
In some embodiments, the method for constructing an autopilot simulation scenario according to some embodiments of the present disclosure further includes: obtaining a target metric; and determining, based on the target metric, whether the target scene achieves the expected goal.
In some embodiments, the first metric is determined to be the target metric in response to a user selecting the first metric from a preset set of metrics.
In some embodiments, the preset set of metrics includes a plurality of packet types, each packet type includes a plurality of metrics, and the target metric includes at least one metric from among a plurality of metrics corresponding to the plurality of packet types.
In some embodiments, a metric customized by a user is obtained, and the customized metric is determined to be the target metric. Obtaining the target metric from user-defined metrics maximizes the metric configuration capability so as to meet the metric requirements of different scenes.
According to another aspect of the present disclosure, there is also provided an automatic driving simulation scene constructing apparatus, as shown in fig. 11, the apparatus 1100 includes: a first vehicle obtaining unit 1110 configured to obtain a first vehicle and a vehicle property of the first vehicle, the vehicle property being indicative of at least a motion state of the first vehicle; a traffic element obtaining unit 1120 configured to obtain at least one traffic element and an element attribute and a state machine of each of the at least one traffic element, where the element attribute indicates at least a position of the corresponding traffic element, and the state machine indicates that the corresponding traffic element performs a corresponding action under a preset trigger condition; and a construction unit 1130 configured to construct a simulation scene based on the vehicle attributes and the element attributes and state machines of each of the at least one traffic element.
In some embodiments, the state machine comprises a control state, and the traffic element obtaining unit 1120 comprises: a first determining unit configured to determine a first traffic element from a preset traffic element set, wherein each traffic element in the preset traffic element set corresponds to a preset trigger node set and a preset action node set, each trigger node in the preset trigger node set has a corresponding trigger condition parameter set to be determined, and each preset action node in the preset action node set has a corresponding action parameter set to be determined; a node combination obtaining unit configured to obtain a node combination formed between at least one trigger node in the preset trigger node set corresponding to the first traffic element and at least one action node in the preset action node set, wherein each trigger condition parameter in the at least one trigger condition parameter set corresponding to the at least one trigger node and each action parameter in the at least one action parameter set corresponding to the at least one action node are determined; and a first obtaining unit configured to obtain the control state of the first traffic element based on the node combination.
In some embodiments, the first determination unit comprises: a first display unit configured to display the preset set of traffic elements; and a first response unit configured to display the first traffic element in a first preset area in response to receiving a selection instruction for the first traffic element.
In some embodiments, the node combination obtaining unit includes: a second display unit configured to display the preset trigger node set and the preset action node set corresponding to the first traffic element; a second response unit configured to, in response to receiving a selection instruction for a first trigger node in the preset trigger node set and a set value for each trigger condition parameter in the trigger condition parameter set corresponding to the first trigger node, display the first trigger node in the second preset area; a third response unit configured to, in response to receiving a selection instruction for a first action node in the preset action node set and a set value for each action parameter in the action parameter set corresponding to the first action node, display the first action node in the second preset area; a fourth response unit configured to combine the first trigger node with the first action node in response to obtaining a connection between the first trigger node and the first action node; and a node combination obtaining subunit configured to obtain the node combination based on the combination of the first trigger node and the first action node.
In some embodiments, the apparatus 1100 further comprises: a generalization parameter obtaining unit configured to obtain a trigger generalization parameter corresponding to each trigger condition parameter in the at least one trigger condition parameter set and an action generalization parameter corresponding to each action parameter in the at least one action parameter set, wherein the trigger generalization parameter indicates the range corresponding to the trigger condition parameter, and the action generalization parameter indicates the range corresponding to the action parameter; a constraint condition obtaining unit configured to obtain a generalization limiting condition corresponding to the at least one trigger condition parameter set and the at least one action parameter set; and a scene generalization unit configured to perform scene generalization based on the trigger generalization parameter corresponding to each trigger condition parameter in the at least one trigger condition parameter set, the action generalization parameter corresponding to each action parameter in the at least one action parameter set, the generalization limiting condition, and the simulation scene to obtain a target scene.
In some embodiments, the preset trigger node set comprises at least one of: a scene time trigger node, a distance trigger node, a collision time trigger node, a following headway trigger node, and a collision detection trigger node; and the preset action node set comprises at least one of: a static node, a tracking node, a lane-changing node, a road driving node, and a car-following node.
In some embodiments, the apparatus 1100 further comprises: a high-precision map acquisition unit configured to acquire a high-precision map; a static scene acquisition unit configured to acquire a static scene based on the high-precision map; and wherein the construction unit comprises: a dynamic scene acquisition unit configured to obtain a dynamic scene based on the vehicle attribute and an element attribute and a state machine of each of the at least one traffic element; and a construction subunit configured to obtain the simulation scene based on the dynamic scene and the static scene.
In some embodiments, the apparatus 1100 further comprises: a road acquisition data obtaining unit configured to obtain road acquisition data collected by an autonomous vehicle, the road acquisition data including motion state data of the autonomous vehicle and environmental data perceived by the autonomous vehicle, wherein the first vehicle obtaining unit includes: a second determining unit configured to determine the autonomous vehicle as the first vehicle and obtain the vehicle attributes of the first vehicle based on the motion state data; and wherein the traffic element obtaining unit includes: a traffic element obtaining subunit configured to obtain the at least one traffic element and the element attributes and state machine of each of the at least one traffic element based on the environmental data.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
Referring to fig. 12, a block diagram of an electronic device 1200 will now be described; the electronic device 1200, which may be a server or a client of the present disclosure, is an example of a hardware device that may be applied to aspects of the present disclosure. The electronic device is intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 12, the electronic apparatus 1200 includes a computing unit 1201, which can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 1202 or a computer program loaded from a storage unit 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data necessary for the operation of the electronic apparatus 1200 may also be stored. The computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other by a bus 1204. An input/output (I/O) interface 1205 is also connected to bus 1204.
A number of components in the electronic device 1200 are connected to the I/O interface 1205, including an input unit 1206, an output unit 1207, a storage unit 1208, and a communication unit 1209. The input unit 1206 may be any type of device capable of inputting information to the electronic device 1200; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. The output unit 1207 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 1208 may include, but is not limited to, a magnetic disk or an optical disk. The communication unit 1209 allows the electronic device 1200 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
The computing unit 1201 may be any of a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 1201 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 1201 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1208. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1200 via the ROM 1202 and/or the communication unit 1209. When the computer program is loaded into the RAM 1203 and executed by the computing unit 1201, one or more steps of the method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 1201 may be configured to perform the method 200 in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described herein may be implemented in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatuses are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples, but only by the claims as granted and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It should be noted that, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (19)

1. An automatic driving simulation scene construction method comprises the following steps:
obtaining a first vehicle and vehicle properties of the first vehicle, the vehicle properties being indicative of at least a state of motion of the first vehicle;
obtaining at least one traffic element and an element attribute and a state machine of each of the at least one traffic element, the element attribute at least indicating a position of the corresponding traffic element, the state machine indicating the corresponding traffic element to perform a corresponding action under a preset trigger condition; and
constructing a simulation scene based on the vehicle properties and the element attributes and state machines of each of the at least one traffic element.
2. The method of claim 1, wherein the state machine comprises a control state, and the obtaining at least one traffic element and an element attribute and a state machine of each of the at least one traffic element comprises:
determining a first traffic element from a preset traffic element set, wherein each traffic element in the preset traffic element set corresponds to a preset trigger node set and a preset action node set, each trigger node in the preset trigger node set has a corresponding trigger condition parameter set to be determined, and each action node in the preset action node set has a corresponding action parameter set to be determined;
obtaining a node combination formed by at least one trigger node in the preset trigger node set corresponding to the first traffic element and at least one action node in the preset action node set, wherein each trigger condition parameter in at least one trigger condition parameter set corresponding to the at least one trigger node and each action parameter in at least one action parameter set corresponding to the at least one action node are determined; and
obtaining a control state of the first traffic element based on the node combination.
3. The method of claim 2, wherein the determining a first traffic element from a preset traffic element set comprises:
displaying the preset traffic element set; and
in response to receiving a selection instruction for the first traffic element, displaying the first traffic element in a first preset area.
4. The method of claim 2, wherein the obtaining a node combination formed by at least one trigger node in the preset trigger node set corresponding to the first traffic element and at least one action node in the preset action node set comprises:
displaying a preset trigger node set and a preset action node set corresponding to the first traffic element;
in response to receiving a selection instruction of a first trigger node in the preset trigger node set and a set value of each trigger condition parameter in the trigger condition parameter set corresponding to the first trigger node, displaying the first trigger node in a second preset area;
in response to receiving a selection instruction of a first action node in the preset action node set and a set value of each action parameter in the action parameter set corresponding to the first action node, displaying the first action node in the second preset area;
in response to receiving a first operation for the first trigger node and the first action node, displaying a connection line between the first trigger node and the first action node in the second preset area to combine the first trigger node with the first action node; and
obtaining the node combination based on a combination of the first trigger node and the first action node.
5. The method of claim 2, further comprising:
acquiring a trigger generalization parameter corresponding to each trigger condition parameter in the at least one trigger condition parameter set and an action generalization parameter corresponding to each action parameter in the at least one action parameter set, wherein the trigger generalization parameter indicates a range corresponding to the corresponding trigger condition parameter, and the action generalization parameter indicates a range corresponding to the corresponding action parameter;
obtaining a generalization constraint condition corresponding to the at least one trigger condition parameter set and the at least one action parameter set; and
performing scene generalization based on the trigger generalization parameter corresponding to each trigger condition parameter in the at least one trigger condition parameter set, the action generalization parameter corresponding to each action parameter in the at least one action parameter set, the generalization constraint condition, and the simulation scene, to obtain a target scene.
6. The method according to any one of claims 2-5, wherein the preset set of trigger nodes comprises at least one of: a scene time trigger node, a distance trigger node, a collision time trigger node, a following vehicle distance trigger node, and a collision detection trigger node; and
the action node comprises at least one of: a static node, a tracking node, a lane changing node, a road traveling node, and a car following node.
7. The method of any of claims 1-6, further comprising:
obtaining a high-precision map; and
obtaining a static scene based on the high-precision map; and wherein said constructing a simulation scene based on the vehicle properties and the element attributes and state machines of each of the at least one traffic element comprises:
obtaining a dynamic scene based on the vehicle properties and the element attributes and state machines of each of the at least one traffic element; and
obtaining the simulation scene based on the dynamic scene and the static scene.
8. The method of any of claims 1-6, further comprising:
obtaining road acquisition data acquired by an autonomous vehicle, the road acquisition data comprising motion state data of the autonomous vehicle and environmental data perceived by the autonomous vehicle, wherein the obtaining a first vehicle and vehicle properties of the first vehicle comprises:
determining the autonomous vehicle as the first vehicle and obtaining vehicle properties of the first vehicle based on the motion state data; and wherein the obtaining at least one traffic element and an element attribute and a state machine of each of the at least one traffic element comprises:
obtaining the at least one traffic element and an element attribute and a state machine of each of the at least one traffic element based on the environmental data.
9. An automatic driving simulation scene constructing apparatus, comprising:
a first vehicle acquisition unit configured to obtain a first vehicle and a vehicle property of the first vehicle, the vehicle property being indicative of at least a motion state of the first vehicle;
a traffic element acquisition unit configured to obtain at least one traffic element and an element attribute and a state machine of each of the at least one traffic element, wherein the element attribute at least indicates a position of the corresponding traffic element, and the state machine indicates that the corresponding traffic element executes a corresponding action under a preset trigger condition; and
a construction unit configured to construct a simulation scene based on the vehicle property and the element attributes and state machines of each of the at least one traffic element.
10. The apparatus of claim 9, wherein the state machine comprises a control state, and the traffic element acquisition unit comprises:
a first determination unit configured to determine a first traffic element from a preset traffic element set, wherein each traffic element in the preset traffic element set corresponds to a preset trigger node set and a preset action node set, each trigger node in the preset trigger node set has a corresponding trigger condition parameter set to be determined, and each action node in the preset action node set has a corresponding action parameter set to be determined;
a node combination obtaining unit configured to obtain a node combination formed by at least one trigger node in the preset trigger node set corresponding to the first traffic element and at least one action node in the preset action node set, wherein each trigger condition parameter in the at least one trigger condition parameter set corresponding to the at least one trigger node and each action parameter in the at least one action parameter set corresponding to the at least one action node are determined; and
a first obtaining unit configured to obtain a control state of the first traffic element based on the node combination.
11. The apparatus of claim 10, wherein the first determination unit comprises:
a first display unit configured to display the preset set of traffic elements; and
a first response unit configured to display the first traffic element in a first preset area in response to receiving a selection instruction of the first traffic element.
12. The apparatus of claim 10, wherein the node combination obtaining unit comprises:
a second display unit configured to display a preset trigger node set and a preset action node set corresponding to the first traffic element;
a second response unit configured to, in response to receiving a selection instruction of a first trigger node in the preset trigger node set and a set value of each trigger condition parameter in the trigger condition parameter set corresponding to the first trigger node, display the first trigger node in a second preset area;
a third response unit configured to, in response to receiving a selection instruction of a first action node in the preset action node set and a set value of each action parameter in the action parameter set corresponding to the first action node, display the first action node in the second preset area;
a fourth response unit configured to combine the first trigger node with the first action node in response to obtaining a connection line between the first trigger node and the first action node; and
a node combination obtaining subunit configured to obtain the node combination based on the combination of the first trigger node and the first action node.
13. The apparatus of claim 10, further comprising:
a generalization parameter obtaining unit configured to obtain a trigger generalization parameter corresponding to each trigger condition parameter in the at least one trigger condition parameter set and an action generalization parameter corresponding to each action parameter in the at least one action parameter set, wherein the trigger generalization parameter indicates a range corresponding to the corresponding trigger condition parameter, and the action generalization parameter indicates a range corresponding to the corresponding action parameter;
a constraint condition obtaining unit configured to obtain a generalization constraint condition corresponding to the at least one trigger condition parameter set and the at least one action parameter set; and
a scene generalization unit configured to perform scene generalization based on the trigger generalization parameter corresponding to each trigger condition parameter in the at least one trigger condition parameter set, the action generalization parameter corresponding to each action parameter in the at least one action parameter set, the generalization constraint condition, and the simulation scene, to obtain a target scene.
14. The apparatus according to any one of claims 10-13, wherein the preset set of trigger nodes comprises at least one of: a scene time trigger node, a distance trigger node, a collision time trigger node, a following vehicle distance trigger node, and a collision detection trigger node; and
the action node comprises at least one of: a static node, a tracking node, a lane changing node, a road traveling node, and a car following node.
15. The apparatus of any of claims 9-14, further comprising:
a high-precision map acquisition unit configured to acquire a high-precision map;
a static scene acquisition unit configured to acquire a static scene based on the high-precision map; and wherein the construction unit comprises:
a dynamic scene acquisition unit configured to obtain a dynamic scene based on the vehicle attribute and an element attribute and a state machine of each of the at least one traffic element; and
a construction subunit configured to obtain the simulation scene based on the dynamic scene and the static scene.
16. The apparatus of any of claims 9-15, further comprising:
a road acquisition data acquisition unit configured to acquire road acquisition data acquired by an autonomous vehicle, the road acquisition data including motion state data of the autonomous vehicle and environmental data perceived by the autonomous vehicle, wherein the first vehicle acquisition unit includes:
a second determination unit configured to determine the autonomous vehicle as the first vehicle and obtain a vehicle property of the first vehicle based on the motion state data; and wherein the traffic element acquisition unit includes:
a traffic element obtaining subunit configured to obtain the at least one traffic element and an element attribute and a state machine of each of the at least one traffic element based on the environmental data.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
19. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-8.
CN202211139477.9A 2022-09-19 2022-09-19 Automatic driving simulation scene construction method and device Pending CN115454861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211139477.9A CN115454861A (en) 2022-09-19 2022-09-19 Automatic driving simulation scene construction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211139477.9A CN115454861A (en) 2022-09-19 2022-09-19 Automatic driving simulation scene construction method and device

Publications (1)

Publication Number Publication Date
CN115454861A 2022-12-09

Family

ID=84304046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211139477.9A Pending CN115454861A (en) 2022-09-19 2022-09-19 Automatic driving simulation scene construction method and device

Country Status (1)

Country Link
CN (1) CN115454861A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116401111A (en) * 2023-05-26 2023-07-07 中国第一汽车股份有限公司 Function detection method and device of brain-computer interface, electronic equipment and storage medium
CN116401111B (en) * 2023-05-26 2023-09-05 中国第一汽车股份有限公司 Function detection method and device of brain-computer interface, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112907958B (en) Road condition information determining method and device, electronic equipment and readable medium
CN113741485A (en) Control method and device for cooperative automatic driving of vehicle and road, electronic equipment and vehicle
CN114661574A (en) Method and device for acquiring sample deviation data and electronic equipment
CN114758502B (en) Dual-vehicle combined track prediction method and device, electronic equipment and automatic driving vehicle
US20230391362A1 (en) Decision-making for autonomous vehicle
CN114047760B (en) Path planning method and device, electronic equipment and automatic driving vehicle
CN115019060A (en) Target recognition method, and training method and device of target recognition model
CN115454861A (en) Automatic driving simulation scene construction method and device
CN114092660A (en) High-precision map generation method and device and vehicle for generating map
CN116533987A (en) Parking path determination method, device, equipment and automatic driving vehicle
CN115082690B (en) Target recognition method, target recognition model training method and device
CN114689074B (en) Information processing method and navigation method
CN115675528A (en) Automatic driving method and vehicle based on similar scene mining
CN113850909B (en) Point cloud data processing method and device, electronic equipment and automatic driving equipment
CN115861953A (en) Training method of scene coding model, and trajectory planning method and device
JP2022088496A (en) Method of controlling data collection, and device, electronic apparatus and medium thereof
CN114970112A (en) Method and device for automatic driving simulation, electronic equipment and storage medium
CN114655250A (en) Data generation method and device for automatic driving
CN115019278B (en) Lane line fitting method and device, electronic equipment and medium
CN114179834B (en) Vehicle parking method, device, electronic equipment, medium and automatic driving vehicle
CN115952670A (en) Automatic driving scene simulation method and device
CN116311941B (en) Main traffic flow path extraction method, device, equipment and medium
CN114333368B (en) Voice reminding method, device, equipment and medium
CN116414845A (en) Method, apparatus, electronic device and medium for updating map data
CN115096322A (en) Information processing method and navigation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination