CN111816022A - Simulation method and device for simulation scene, storage medium and electronic equipment - Google Patents

Simulation method and device for simulation scene, storage medium and electronic equipment

Info

Publication number
CN111816022A
Authority
CN
China
Prior art keywords
traffic
vehicle
interactive
playing
scene
Prior art date
Legal status
Pending
Application number
CN202010082737.8A
Other languages
Chinese (zh)
Inventor
刘梦瑶
卢祺
车正平
Current Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202010082737.8A priority Critical patent/CN111816022A/en
Publication of CN111816022A publication Critical patent/CN111816022A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 Simulators for teaching or training purposes
    • G09B 9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/04 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B 9/05 Simulators for teaching or training purposes for teaching control of land vehicles, the view from a vehicle being simulated


Abstract

The present disclosure provides a simulation method, apparatus, storage medium, and electronic device for simulating a scene. The simulation method includes: constructing a traffic scene based on traffic scene parameters; generating an interactive behavior in the traffic scene, the interactive behavior comprising a preparation phase and an interaction phase; displaying the traffic scene and playing the interactive behavior, wherein the preparation phase is played at a first playback speed and the interaction phase is played at a second playback speed; and controlling an autonomous vehicle to travel according to a first strategy, and adjusting the first playback speed and/or the second playback speed based on the first strategy. The method can reproduce the interactive behavior recorded by a collection vehicle, ensure that the autonomous vehicle effectively completes simulation training under that behavior, and adjust the autonomous driving strategy according to the simulation result.

Description

Simulation method and device for simulation scene, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of scene simulation technologies, and in particular, to a simulation method and apparatus for simulating a scene, a storage medium, and an electronic device.
Background
As research on autonomous driving technology continues to deepen, autonomous vehicles are entering public life ever more quickly.
During the development of an autonomous vehicle, many tests, training runs, and adjustments of each of its capabilities are needed to ensure driving safety. Training is typically performed in one of two ways. First: test training by driving a real autonomous vehicle on real roads. Second: simulation training in which a simulator reproduces a training scene and the driving of an autonomous vehicle.
However, the first training mode is inefficient and consumes considerable manpower and material resources. The training scenes of the second mode are limited by designers' subjective choices and by fixed playback of road-test data: although test efficiency improves to some extent, once the speed of the vehicle driven in the simulator differs from that of the collection vehicle, the interactive behavior recorded by the collection vehicle cannot be reproduced and the simulation training cannot be completed effectively.
Disclosure of Invention
In view of this, an object of the present disclosure is to provide a simulation method, apparatus, storage medium, and electronic device for simulating a scene, which can reproduce the interactive behavior recorded by a collection vehicle and ensure that an autonomous vehicle effectively completes simulation training under that behavior.
In a first aspect, the present disclosure provides a simulation method for simulating a scene, including:
constructing a traffic scene based on the traffic scene parameters;
generating an interactive behavior in the traffic scene, the interactive behavior comprising a preparation phase and an interaction phase;
displaying the traffic scene and playing the interactive behavior, wherein the preparation phase is played at a first playback speed and the interaction phase is played at a second playback speed;
and controlling an autonomous vehicle to travel according to a first strategy, and adjusting the first playback speed and/or the second playback speed based on the first strategy.
In one possible embodiment, the constructing the traffic scene based on the traffic scene parameters includes:
acquiring state information of traffic participants through a collection vehicle in a predetermined manner;
generating a position and/or a travel track of each of the traffic participants based on the state information;
and constructing the traffic scene according to the positions and/or travel tracks of all the traffic participants.
In one possible embodiment, the predetermined manner includes a lidar-based manner and/or an image-acquisition-device-based manner.
In one possible embodiment, the generating the position and/or the travel track of each of the traffic participants based on the state information includes:
generating the relative position and/or relative travel track of each traffic participant with respect to the collection vehicle through a point-cloud identification and tracking algorithm, in the lidar-based manner; and/or,
generating a two-dimensional position and/or a two-dimensional travel track through an image identification and tracking algorithm in the image-acquisition manner, and generating the relative position and/or relative travel track of each traffic participant with respect to the collection vehicle through a three-dimensional reconstruction algorithm based on the two-dimensional position and/or the two-dimensional travel track;
and determining the position and/or travel track of each traffic participant in the world coordinate system based on the pose information of the collection vehicle in the world coordinate system and the relative position and/or relative travel track.
In one possible embodiment, the generating an interactive behavior in the traffic scene, the interactive behavior comprising a preparation phase and an interaction phase, includes:
determining an interaction key object based on a predetermined condition;
determining a first starting condition of the preparation phase;
and determining a second starting condition of the interaction phase.
In a possible embodiment, the predetermined behavior in the predetermined condition comprises at least one of:
lateral movement perpendicular to a lane line, acceleration, deceleration, and steering.
In one possible embodiment, the controlling the autonomous vehicle to travel according to a first strategy and the adjusting the first playback speed and/or the second playback speed based on the first strategy include:
acquiring a speed value of the autonomous vehicle;
and adjusting the first playback speed and/or the second playback speed based on the speed value.
In a possible implementation, the interactive behavior further includes a recovery phase, and during the playing of the interactive behavior the recovery phase is played at the second playback speed.
In one possible embodiment, the method further comprises:
and when a predetermined event occurs while the interactive behavior is being played, stopping the playing of the interactive behavior and adjusting the first strategy to a second strategy.
In a second aspect, the present disclosure further provides a simulation apparatus for simulating a scene, including:
the construction module is configured to construct a traffic scene based on the traffic scene parameters;
the generation module is configured to generate an interactive behavior in the traffic scene, the interactive behavior comprising a preparation phase and an interaction phase;
the playing module is configured to display the traffic scene and play the interactive behavior, wherein the preparation phase is played at a first playback speed and the interaction phase is played at a second playback speed;
and the adjusting module is configured to control the autonomous vehicle to travel according to a first strategy and to adjust the first playback speed and/or the second playback speed based on the first strategy.
In a third aspect, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the simulation method for simulating a scene described above.
In a fourth aspect, the present disclosure also provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device operates, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the simulation method for simulating a scene described above.
In the method, the traffic scene in the simulator is constructed from traffic scene data acquired by a collection vehicle; an interactive behavior comprising a preparation phase and an interaction phase is generated in the traffic scene; the traffic scene is displayed and the interactive behavior is played, the preparation phase at a first playback speed and the interaction phase at a second playback speed; and the autonomous vehicle is controlled to travel according to a first strategy, with the first playback speed and/or the second playback speed adjusted based on the speed value of the autonomous vehicle. The interactive behavior recorded by the collection vehicle is thereby reproduced, ensuring that the autonomous vehicle effectively completes simulation training under that behavior. Furthermore, the autonomous driving strategy can be adjusted according to the simulation result.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the present disclosure or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of a simulation method of simulating a scene provided by the present disclosure;
FIG. 2 is a flow chart illustrating a method for obtaining traffic scene parameters and constructing a traffic scene in simulation of a simulation scene according to the present disclosure;
FIG. 3 is a flow chart illustrating the generation of an interactive behavior including a preparation phase and an interaction phase in a simulation method for simulating a scene provided by the present disclosure;
fig. 4 is a flowchart illustrating the adjustment of the first playback speed and/or the second playback speed in the simulation method for simulating a scene provided by the present disclosure;
FIG. 5 is a schematic diagram illustrating an exemplary simulation apparatus for simulating a scene provided by the present disclosure;
fig. 6 shows a schematic structural diagram of an electronic device provided by the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, the technical solutions of the present disclosure will be described clearly and completely below with reference to the accompanying drawings of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
To maintain the following description of the present disclosure clear and concise, detailed descriptions of known functions and known components are omitted from the present disclosure.
A first aspect of the present disclosure provides a simulation method for simulating a scene, and fig. 1 shows a flowchart of the simulation method of the present disclosure, which includes the following specific steps:
s101, constructing a traffic scene based on the traffic scene parameters.
In a specific implementation, to simulate a specific traffic scene, that scene must first be constructed. This requires acquiring the traffic scene parameters needed to represent the desired scene truly and accurately. By acquiring complex traffic scene parameters, such as building parameters, pedestrian parameters, other-vehicle parameters, and road parameters, and constructing a traffic scene from them, a specific traffic scene can be simulated for the simulated driving training of the autonomous vehicle.
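As an illustration only, the grouping of parameters described above might be represented as follows; the type and function names (`TrafficSceneParams`, `construct_traffic_scene`) and the field layout are hypothetical sketches, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class TrafficSceneParams:
    # Hypothetical grouping of the parameter kinds named above.
    buildings: list = field(default_factory=list)    # static geometry
    pedestrians: list = field(default_factory=list)  # dynamic participants
    vehicles: list = field(default_factory=list)     # other vehicles
    roads: list = field(default_factory=list)        # lane/road descriptions

def construct_traffic_scene(params: TrafficSceneParams) -> dict:
    """Assemble a scene description from collected parameters (sketch):
    static elements form the backdrop, dynamic ones are replayed."""
    return {
        "static": params.buildings + params.roads,
        "dynamic": params.pedestrians + params.vehicles,
    }
```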
In the embodiment of the disclosure, the complex traffic scene parameters of a specific traffic scene may be acquired by a collection vehicle, which may be either an autonomous or a non-autonomous vehicle. When there are many traffic participants in the actual scene (e.g., during rush hours or near tourist attractions), a non-autonomous vehicle may be used to collect the parameters, to ensure the safety of the collection vehicle and of every traffic participant. When there are few traffic participants (e.g., late at night or in sparsely populated areas), an autonomous vehicle may be used instead, which reduces labor cost and improves collection efficiency to a certain extent.
Specifically, as shown in fig. 2, a traffic scene is constructed based on traffic scene parameters. The specific steps of obtaining the traffic scene parameters of the traffic participants and constructing the traffic scene from them are as follows:
s201, acquiring the state information of the traffic participants through the collection vehicle based on a preset mode.
In a specific implementation, the collection vehicle travels at a certain speed and acquires the state information of the traffic participants in a predetermined manner. Here, the traffic participants include both static targets, such as buildings, roads, and traffic lights, and participants whose state changes dynamically in real time: all pedestrians, vehicles, traffic lights, and the like in the traffic scene where the collection vehicle is located. For convenience of explanation, a preset area is taken as the example in this disclosure.
Further, the state information of the traffic participants includes the position and posture information of pedestrians and vehicles, the time information corresponding to that position and posture information, the position information of buildings, and the position and transition information of traffic lights, among others.
The predetermined manner in the present disclosure includes a lidar-based manner or an image-acquisition-device-based manner, and may include other manners. In actual use, to ensure that comprehensive traffic scene parameters can be acquired, lidars or image acquisition devices may be mounted on the top, front, and rear of the collection vehicle. Since lidar and image acquisition devices are affected by external factors such as weather and traffic-participant density, both may be installed simultaneously in more complex scenes, or in scenes with a higher density of traffic participants, to ensure the accuracy and comprehensiveness of the traffic scene parameters.
S202, generating the position and/or the travel track of each traffic participant based on the state information.
Here, after the state information of each traffic participant is acquired, the position and/or travel track of each participant is generated from all of the state information. Specifically, from the state information corresponding to each collection time point, a position parameter of each traffic participant at that time point is determined; the position may be relative to the collection vehicle or expressed in world coordinates. After the position parameters at every time point are determined, all position parameters of each participant are associated: for participants in a static state this yields a position, and for participants in a real-time dynamic state it yields a travel track.
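The association step described above (turning per-time-point position parameters into a position for static participants and a travel track for dynamic ones) can be sketched as follows; the function name, input layout, and the 0.5 m static threshold are illustrative assumptions:

```python
from collections import defaultdict
import math

def build_tracks(observations, static_eps=0.5):
    """Group (timestamp, participant_id, x, y) samples into per-participant
    tracks, then classify each participant as static (a single position) or
    dynamic (a travel track). The static_eps threshold is illustrative."""
    tracks = defaultdict(list)
    for t, pid, x, y in observations:
        tracks[pid].append((t, x, y))
    result = {}
    for pid, samples in tracks.items():
        samples.sort()  # associate by timestamp order
        (_, x0, y0), (_, x1, y1) = samples[0], samples[-1]
        if math.hypot(x1 - x0, y1 - y0) < static_eps:
            result[pid] = {"position": (x0, y0)}           # static participant
        else:
            result[pid] = {"track": [(x, y) for _, x, y in samples]}
    return result
```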
The travel track of each traffic participant can be generated in either of the following two manners:
the first generation mode is as follows: and under the condition of adopting a laser radar-based mode, generating the relative position and/or the relative driving track of each traffic participant relative to the collection vehicle through a point cloud identification and tracking algorithm.
When the state information of each traffic participant is acquired using lidar, the lidar mounted on the collection vehicle acquires the state information of each traffic participant in real time while the collection vehicle is driving; the state information includes at least the distance, direction, height, speed, posture, and shape of each participant.
The state information is processed with point-cloud identification to obtain point-cloud data for each traffic participant; a tracking algorithm is then applied to the point-cloud data to generate the relative position and/or relative travel track of each participant with respect to the collection vehicle.
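A greatly simplified stand-in for this tracking step is sketched below: each participant's point-cloud cluster is reduced to a centroid and matched to its previous position by nearest-neighbour search. Production trackers use prediction (e.g. Kalman filtering) and optimal assignment instead; the function names and the 3 m gate are illustrative:

```python
import math

def centroid(points):
    """Reduce one participant's point-cloud cluster to a representative point."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def associate(prev_positions, clusters, gate=3.0):
    """Greedy nearest-neighbour association of new clusters to known tracks.
    Clusters farther than `gate` from every known track are left unmatched."""
    positions = {}
    unmatched = [centroid(c) for c in clusters]
    for pid, prev in prev_positions.items():
        if not unmatched:
            break
        best = min(unmatched, key=lambda c: math.dist(c, prev))
        if math.dist(best, prev) <= gate:
            positions[pid] = best
            unmatched.remove(best)
    return positions
```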
After the relative position and/or relative travel track of each traffic participant with respect to the collection vehicle is generated, the position and/or travel track of each participant in the world coordinate system is determined from the pose information of the collection vehicle in the world coordinate system together with that relative position and/or relative travel track. The pose information of the collection vehicle in the world coordinate system is obtained with a positioning system mounted on the collection vehicle.
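The relative-to-world conversion is a standard rigid-body transform. Assuming a planar pose (x, y, heading in radians), a minimal sketch is:

```python
import math

def to_world(ego_pose, rel_xy):
    """Map a position measured relative to the collection vehicle into the
    world frame, given the vehicle's world pose (x, y, heading in radians).
    Rotate the relative offset by the heading, then translate by the pose."""
    ex, ey, yaw = ego_pose
    rx, ry = rel_xy
    wx = ex + rx * math.cos(yaw) - ry * math.sin(yaw)
    wy = ey + rx * math.sin(yaw) + ry * math.cos(yaw)
    return wx, wy
```

Applied to every time point of a relative travel track, this yields the participant's track in the world coordinate system.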
The second generation manner: when the image-acquisition-device-based manner is used, a two-dimensional position and/or two-dimensional travel track is generated through an image identification and tracking algorithm, and the relative position and/or relative travel track of each traffic participant with respect to the collection vehicle is then generated through a three-dimensional reconstruction algorithm based on that two-dimensional position and/or travel track.
When the state information of each traffic participant is acquired with an image acquisition device, the image information in the preset area is captured by a camera (or other imaging device); this image information contains the state image of each traffic participant. After the image information in the preset area is obtained, an image identification algorithm and a tracking algorithm are used to identify the state information of each participant, from which the position and/or travel track of each participant on the two-dimensional image is calculated. Then, based on the two-dimensional position and/or travel track, a three-dimensional reconstruction algorithm restores the position and/or travel track on the two-dimensional image into the three-dimensional scene, giving the relative position and/or relative travel track of each traffic participant with respect to the collection vehicle. Likewise, after the relative position and/or relative travel track of each participant is generated, the position and/or travel track of each participant in the world coordinate system is determined from the pose information of the collection vehicle in the world coordinate system together with that relative position and/or relative travel track.
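The final lifting of a two-dimensional detection into three dimensions can be illustrated under the assumption of a pinhole camera model and a per-detection depth estimate from the reconstruction step; the intrinsics used here are illustrative values, not from the disclosure:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a 2-D image detection (pixel u, v) to a 3-D point in the camera
    frame, given a depth estimate and pinhole intrinsics (focal lengths
    fx, fy and principal point cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth
```

The resulting camera-frame point would then be transformed into the collection vehicle's frame (and onward to world coordinates) using the camera's mounting pose.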
When the camera (or other imaging device) can only take photographs, an acquisition frequency can be set and the device captures image information in the preset area at that frequency; when the device supports video recording, its video function is used to collect the image information in the preset area.
Comparing the two generation manners: the accuracy of the relative positions and/or relative travel tracks obtained in the second manner is lower than that obtained in the first, but a camera (or other imaging device) costs less than a lidar. Therefore, when the accuracy requirement is low, the second generation manner can be preferred; when the accuracy requirement is high, the first generation manner can be preferred.
It is worth noting that the series of computations performed after the image information, point-cloud data, and pose data for the preset area are obtained may run either on the collection vehicle or on a server. When the collection vehicle's current network environment is poor, the computation can be completed on the vehicle and only the result uploaded to the server, avoiding the wasted resources that uploading raw image, point-cloud, and pose data over a poor network would cause.
S203, constructing a traffic scene according to the positions and/or travel tracks of all the traffic participants.
The construction of the traffic scene mainly concerns the states of static and dynamic traffic participants. After the position and/or travel track of each participant is generated, the traffic scene is constructed from the positions and/or travel tracks of all participants; that is, the traffic scene in which the collection vehicle acquired the traffic scene data is faithfully reproduced. The constructed scene includes not only static elements such as buildings but also the real-time states of dynamic traffic participants.
S102, generating an interactive behavior in the traffic scene, wherein the interactive behavior comprises a preparation phase and an interaction phase.
In a specific implementation, an interactive behavior in the traffic scene is a specific mutual behavior, designed after the specific traffic scene has been constructed, that occurs between the autonomous vehicle and some traffic participant, so that the autonomous vehicle's response to that behavior can be simulated. The interactive behavior includes at least lateral movement perpendicular to a lane line, acceleration, deceleration, and steering by the relevant traffic participant, and of course other traffic behaviors as well.
Specifically, the interactive behavior includes a preparation phase and an interaction phase, and it is generated by the method shown in fig. 3, whose specific steps are as follows:
s301, determining the interaction key object based on the predetermined condition.
In generating the interactive behavior, the interaction key object must be determined first. In this step, interaction key objects are determined based on predetermined conditions, that is, the conditions satisfied by a traffic participant that will have an interactive behavior with the collection vehicle (which stands in for the autonomous vehicle), including a predetermined distance, a predetermined relative orientation, a predetermined behavior, and so on. For example, a traffic participant that is in front of the collection vehicle, at a predetermined distance and in a predetermined relative orientation, with no other object (pedestrian, vehicle, etc.) between it and the collection vehicle, is determined to be an interaction key object when it performs a predetermined behavior, such as lateral movement perpendicular to a lane line, acceleration, deceleration, or turning, and the driving state of the collection vehicle changes as a result.
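The predetermined conditions above can be sketched as a predicate; the field names, the 100 m distance, and the 15° bearing tolerance are illustrative assumptions, not values from the disclosure:

```python
import math

PREDETERMINED_BEHAVIORS = {"lateral_move", "accelerate", "decelerate", "turn"}

def is_interaction_key_object(participant, ego, max_distance=100.0,
                              bearing_tolerance=math.radians(15)):
    """Check the predetermined conditions: within a predetermined distance,
    roughly ahead of the collection vehicle (small bearing relative to its
    heading), and performing a predetermined behavior."""
    dx = participant["x"] - ego["x"]
    dy = participant["y"] - ego["y"]
    distance = math.hypot(dx, dy)
    # Wrap the bearing difference into [-pi, pi] before taking its magnitude.
    bearing = abs((math.atan2(dy, dx) - ego["heading"] + math.pi)
                  % (2 * math.pi) - math.pi)
    return (distance <= max_distance
            and bearing <= bearing_tolerance
            and participant["behavior"] in PREDETERMINED_BEHAVIORS)
```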
S302, determining a first starting condition of the preparation phase.
To ensure that the interactive behavior can be simulated completely, the preparation phase is placed before the interaction phase. Specifically, the starting point of the interaction phase is taken as the end point of the preparation phase, and the position a second preset distance away from that end point is taken as the starting point of the preparation phase.
Once the interaction key object has been determined through the above steps, the collection vehicle, or the autonomous vehicle in the unmanned simulation, enters the traffic scene and first faces the preparation phase of the interactive behavior, so a first starting condition for the preparation phase must be established. For example, when the distance between the collection vehicle and an interaction key object in front of it is less than or equal to a first preset distance of 80 meters and the object is in the predetermined orientation, the collection vehicle is determined to have entered the preparation phase of the interactive behavior with that interaction key object, and the current time is recorded as the first time information of the preparation phase.
S303, determining a second starting condition of the interaction stage.
After the preparation phase, the collection vehicle, or the autonomous vehicle in the unmanned simulation, enters the interaction phase of the interactive behavior with the interaction key object. A second starting condition for the interaction phase is determined based on the collection vehicle: for example, the interaction phase is entered when the distance between the collection vehicle and the interaction key object in front of it is less than or equal to a second preset distance of 30 meters and the object is in the predetermined orientation. At that point, when the interaction key object performs a predetermined behavior, for example lateral movement perpendicular to the lane line, and the driving state of the collection vehicle changes, the interaction phase is determined to have started and the current time is recorded as the second time information of the interaction phase.
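Taking the 80-meter and 30-meter examples above, the two starting conditions can be sketched as a simple distance-based phase function; the "approach" label for distances beyond the first preset distance is an illustrative addition:

```python
def interaction_phase(distance_to_key_object, prep_distance=80.0,
                      interact_distance=30.0):
    """Return the current phase for a given distance to the interaction key
    object, using the 80 m / 30 m example thresholds from the text."""
    if distance_to_key_object <= interact_distance:
        return "interaction"   # second starting condition met
    if distance_to_key_object <= prep_distance:
        return "preparation"   # first starting condition met
    return "approach"          # neither condition met yet
```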
S103, displaying the traffic scene and playing the interactive behavior, wherein the preparation stage is played at a first playing speed, and the interactive stage is played at a second playing speed.
Through steps S101 and S102, after the specific traffic scene is constructed and the interactive behavior is generated in it, the traffic scene is displayed, and the interactive behavior is played during the display, so that the coping strategy of the autonomous vehicle facing the interactive behavior in the traffic scene can be conveniently controlled.
In the actual simulation, the preparation stage is played at a first playing speed, and the interaction stage is played at a second playing speed. Here, the first play speed and the second play speed may be the same or different, for example, the first play speed may be greater than the second play speed, so that the preparation stage of the interactive behavior can achieve the effect of fast forward, for example, the first play speed or the second play speed may be customized by the user in a preset manner.
After the traffic scene begins to be displayed, in the process that the automatic driving vehicle runs in the constructed traffic scene, when the first starting condition of the preparation stage is determined to be met, the preparation stage is played based on a first playing speed; and when the second starting condition of the interaction phase is determined to be met, playing the interaction phase based on the second playing speed.
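The stage-dependent playback described here can be sketched as a small selector. This is a minimal illustration with hypothetical stage names; the recovery stage mentioned later in the text is also played at the second speed.

```python
def playback_speed(stage: str, first_speed: float, second_speed: float) -> float:
    """Return the playback speed for the current stage of the interactive
    behavior. Choosing first_speed > second_speed fast-forwards the
    preparation stage, as described in the text."""
    if stage == "preparation":
        return first_speed
    if stage in ("interaction", "recovery"):
        # the recovery stage is also played at the second playing speed
        return second_speed
    raise ValueError(f"unknown stage: {stage}")
```

A driving loop would call this once per frame, switching stages as the first and second starting conditions are met.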
And S104, controlling the automatic driving vehicle to run according to a first strategy, and adjusting the first playing speed and/or the second playing speed based on the first strategy.
In a specific implementation, when the interactive behavior is played in the constructed traffic scene through step S103, the autonomous vehicle is controlled to travel according to the first strategy so that it travels in the specific traffic scene. If the first strategy keeps the autonomous vehicle in the same driving state as the collection vehicle, then after the first starting condition of the preparation stage of the interactive behavior is met, for example when the distance between the autonomous vehicle and the interactive key object equals the first preset distance of 80 meters, the preparation stage is played based on the first playing speed; and after the second starting condition of the interaction stage is met, for example when that distance equals the second preset distance of 30 meters, the interaction stage is played based on the second playing speed. While the autonomous vehicle travels, the first playing speed and/or the second playing speed are adjusted based on the first strategy to ensure that the traffic scene collected by the collection vehicle and the interactive behavior included in it are reproduced, and that the autonomous vehicle effectively completes the simulation training under the interactive behavior.
However, while the autonomous vehicle travels, if the first strategy differs from the driving state of the collection vehicle, the first playing speed and/or the second playing speed need to be adjusted based on the first strategy, so as to ensure that the traffic scene collected by the collection vehicle and the interactive behavior included in it are reproduced, and that the autonomous vehicle effectively completes the simulation training under the interactive behavior. Specifically, the method includes:
s401, a vehicle speed value of the automatic driving vehicle is obtained.
Generally, when the simulated autonomous vehicle travels, its driving state may be the same as that of the collection vehicle, for example the same vehicle speed value; but its driving state may also change, for example the vehicle speed value specified for the autonomous vehicle in the first strategy may differ from the vehicle speed value of the collection vehicle. The vehicle speed value of the autonomous vehicle therefore needs to be obtained first.
S402, adjusting the first playing speed and/or the second playing speed based on the vehicle speed value.
As described above, in controlling the autonomous vehicle to travel according to the first strategy, the first strategy controls the vehicle speed value of the autonomous vehicle. If the vehicle speed value of the autonomous vehicle differs from that of the collection vehicle, or fluctuates, then when driving in the constructed traffic scene the autonomous vehicle cannot accurately enter the preparation stage and the interaction stage of the interactive behavior according to the time information predetermined by the collection vehicle. Therefore, after the vehicle speed value of the autonomous vehicle is acquired, the first playing speed and/or the second playing speed are adjusted based on the vehicle speed value, and the adjusted speeds are used for the simulation training, so that the interactive behavior collected by the collection vehicle can be reproduced and the autonomous vehicle can effectively complete the simulation training under the interactive behavior. For example, if the vehicle speed value of the autonomous vehicle is increased by 10% relative to that of the collection vehicle, the first playing speed and the second playing speed may both be increased by 10%; alternatively, the preparation stage may be played at the original first playing speed while the interaction stage is played with the second playing speed increased by 20%. It should be noted that, before the simulation training, a user-defined strategy may be set manually, that is, the autonomous vehicle is controlled to travel according to the user-defined strategy, and the first playing speed and/or the second playing speed are adjusted based on the user-defined strategy.
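The proportional adjustment in the 10% example can be sketched as scaling both playing speeds by the ratio of the two vehicle speed values. This is an illustrative sketch with hypothetical names, showing only the simplest variant where both speeds scale together.

```python
def adjust_speeds(first_speed: float, second_speed: float,
                  ego_speed: float, collection_speed: float) -> tuple:
    """Scale both playback speeds by ego_speed / collection_speed so the
    recorded interactive behavior stays aligned with the autonomous
    vehicle's progress through the scene. E.g. an ego vehicle 10% faster
    than the collection vehicle yields speeds 10% higher."""
    ratio = ego_speed / collection_speed
    return first_speed * ratio, second_speed * ratio
```

The text's alternative, keeping the first speed fixed and raising only the second, would simply apply the ratio to one of the two values.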
Further, the interactive behavior further comprises a recovery phase, and during the playing of the interactive behavior, the recovery phase is played based on the second playing speed. The starting point of the recovery phase is the end point of the interaction phase, that is, the preset phase after the interaction is completed.
The recovery phase may be within a distance after the end point of the interaction phase, within a time range after the point in time at which the interaction phase is completed, etc.
In addition, the simulation method of the embodiment of the present disclosure further includes: when a predetermined event occurs in the process of playing the interactive behavior, the playing of the interactive behavior is stopped, and the first strategy is adjusted to the second strategy.
When the traffic scene is used to complete the simulation training of the autonomous vehicle and a predetermined event occurs in the process of playing the interactive behavior, the playing of the interactive behavior is stopped. Predetermined events include rear-end collisions, running a red light, and the like.
The training result of the simulation training, the first strategy, the first playing speed, and the second playing speed are obtained, and the first strategy is adjusted to the second strategy based on them; that is, the first strategy is optimized, ensuring that the autonomous vehicle can cope with various interactive behaviors and travel safely.
Based on the same inventive concept, the second aspect of the present disclosure further provides a simulation apparatus for simulating a scene corresponding to the simulation method for simulating a scene, and since the principle of the apparatus in the present disclosure for solving the problem is similar to the simulation method for simulating a scene in the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 5, the simulation apparatus for simulating a scene includes: a building module 10, a generating module 20, a playing module 30 and an adjusting module 40. The construction module 10 is configured to construct a traffic scene based on the traffic scene parameters.
In a specific implementation, in order to simulate a specific traffic scene, the specific traffic scene should be constructed, which requires acquiring traffic scene parameters necessary for the specific traffic scene, and using the traffic scene parameters to truly and accurately represent the desired traffic scene. In this way, by acquiring complex traffic scene parameters, such as building parameters, pedestrian parameters, other vehicle parameters, road parameters, and the like, and further constructing a traffic scene according to the traffic scene parameters, the simulation of a specific traffic scene can be realized when the simulated driving training of the automatic driving vehicle is performed.
In the embodiment of the disclosure, the collection vehicle may acquire the complex traffic scene parameters of a specific traffic scene, where the collection vehicle may be an autonomous vehicle or a non-autonomous vehicle. Considering that an actual scene may contain many traffic participants (for example, rush hours or areas near tourist attractions), a non-autonomous vehicle may be used to collect the traffic scene parameters in order to ensure the safety of the collection vehicle and every traffic participant; when the number of traffic participants in the actual scene is small (for example, sparsely populated areas in the middle of the night), an autonomous vehicle may be used, which reduces labor costs and improves the collection efficiency of the traffic scene parameters to a certain extent.
The building module 10 of the present disclosure includes a first obtaining unit, a generating unit, and a building unit; the first acquisition unit is used for acquiring the state information of the traffic participants through the acquisition vehicle based on a predetermined mode.
In a specific implementation, the collection vehicle drives at a certain vehicle speed value and acquires the state information of the traffic participants based on a predetermined mode. Here, the traffic participants include not only static targets such as buildings, roads, and traffic lights, but also participants that are dynamic in real time; in the process of acquiring state information, the traffic participants include all pedestrians, vehicles, traffic lights, and the like in the traffic scene where the collection vehicle is located. For convenience of explanation, the preset area is taken as an example in the present disclosure.
Further, the state information of the traffic participants includes position and posture information of pedestrians/vehicles, time information corresponding to the position and posture information of the pedestrians/vehicles, position information of buildings, position information and transformation information of traffic indicator lamps, and the like.
The predetermined mode in the present disclosure includes a laser radar mode or an image acquisition device-based mode, and may also include other modes. Specifically, in actual use, in order to ensure that comprehensive traffic scene parameters can be acquired, laser radars or image acquisition equipment can be arranged at the top, the head and the tail of the acquisition vehicle. Considering that the laser radar and the image acquisition equipment are influenced by external factors such as weather and traffic participant density, the laser radar and the image acquisition equipment can be simultaneously arranged in a more complex traffic scene or a traffic scene with a higher traffic participant density so as to ensure the accuracy and comprehensiveness of traffic scene parameters.
And the generating unit is used for generating the position and/or the driving track of each traffic participant based on the state information.
Here, after the status information of each traffic participant is acquired, the position and/or the travel track of each traffic participant is generated based on all the status information. Specifically, based on the state information of the traffic participants corresponding to each collection time point, a position parameter of each traffic participant at each time point is determined, the position may be a relative position parameter with the collection vehicle or a position parameter generated based on world coordinates, after the position parameter of the traffic participant at each time point is determined, all the position parameters of each traffic participant are subjected to association processing, the position of each traffic participant can be obtained for the traffic participants in a stationary state, and the driving track of each traffic participant can be obtained for the traffic participants in a real-time dynamic state.
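The association processing described above can be sketched as grouping timestamped observations by participant and ordering them in time: a participant whose position never changes is treated as static, while a moving one yields a travel track. The field names and the static/dynamic test here are illustrative assumptions.

```python
from collections import defaultdict

def build_trajectories(observations):
    """Group per-frame (timestamp, participant_id, x, y) observations into
    per-participant, time-ordered results: ('static', position) for
    stationary participants, ('dynamic', track) for moving ones."""
    tracks = defaultdict(list)
    for t, pid, x, y in observations:
        tracks[pid].append((t, x, y))
    result = {}
    for pid, pts in tracks.items():
        pts.sort()  # order by timestamp
        positions = {(x, y) for _, x, y in pts}
        if len(positions) == 1:
            result[pid] = ("static", pts[0][1:])  # single fixed position
        else:
            result[pid] = ("dynamic", [(x, y) for _, x, y in pts])
    return result
```

In practice the positions could be either relative to the collection vehicle or already in world coordinates, as the text notes.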
The driving track of each traffic participant can be generated in the following two ways, described in detail below:
the first generation mode is as follows: and under the condition of adopting a laser radar-based mode, generating the relative position and/or the relative driving track of each traffic participant relative to the collection vehicle through a point cloud identification and tracking algorithm.
When the state information of each traffic participant is acquired in the laser radar mode, the laser radar installed on the collection vehicle collects the state information of each traffic participant in real time while the collection vehicle is driving, where the state information at least includes the distance, direction, height, speed, posture, and shape of each traffic participant.
Identifying the state information by using point cloud identification to obtain point cloud data of each traffic participant; and then, calculating point cloud data by using a tracking algorithm, and generating the relative position and/or relative driving track of each traffic participant relative to the collection vehicle.
After generating the relative position and/or the relative travel track of each traffic participant relative to the collection vehicle, the position and/or the travel track of each traffic participant in the world coordinate system is determined based on the pose information and the relative position and/or the relative travel track of the collection vehicle in the world coordinate system. And acquiring pose information of the acquisition vehicle under a world coordinate system by using a positioning system arranged on the acquisition vehicle.
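The conversion from a position relative to the collection vehicle into the world coordinate system can be sketched as a rigid transform using the vehicle's pose from the positioning system. This is a simplified planar (2-D) sketch with a pose of (x, y, heading); the real system would handle full 3-D poses.

```python
import math

def to_world(rel_xy, ego_pose):
    """Transform a point given relative to the collection vehicle into
    world coordinates. ego_pose = (x, y, yaw) is the vehicle's pose in
    the world frame; the point is rotated by yaw, then translated."""
    rx, ry = rel_xy
    ex, ey, yaw = ego_pose
    c, s = math.cos(yaw), math.sin(yaw)
    return (ex + c * rx - s * ry, ey + s * rx + c * ry)
```

Applying this to every point of a relative travel track yields the track in the world coordinate system.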
The second generation mode is as follows: under the condition of adopting a mode based on image acquisition equipment, generating a two-dimensional position and/or a two-dimensional driving track through an image recognition and tracking algorithm, and generating a relative position and/or a relative driving track of each traffic participant relative to an acquisition vehicle through a three-dimensional reconstruction algorithm based on the two-dimensional position and/or the driving track.
When the state information of each traffic participant is acquired by image acquisition equipment, the image information in the preset area is collected by a camera (or other imaging device), where the image information includes the state image of each traffic participant. After the image information in the preset area, including the image states of all traffic participants, is obtained, the state information of all traffic participants is identified using an image recognition algorithm and a tracking algorithm, and the position and/or travel track of each traffic participant on the two-dimensional image is calculated; then, based on the two-dimensional position and/or travel track, the position and/or travel track on the two-dimensional image is restored to the three-dimensional scene through a three-dimensional reconstruction algorithm, yielding the relative position and/or relative travel track of each traffic participant relative to the collection vehicle. Likewise, after the relative position and/or relative travel track of each traffic participant relative to the collection vehicle is generated, the position and/or travel track of each traffic participant in the world coordinate system is determined based on the pose information of the collection vehicle in the world coordinate system and the relative position and/or relative travel track.
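As a strongly simplified stand-in for the three-dimensional reconstruction step, a single ground-plane point can be recovered from one pixel by intersecting the camera ray with a flat road, assuming a pinhole camera with known intrinsics and mounting height. All parameters and the flat-road assumption here are hypothetical; a production system would use a full reconstruction algorithm.

```python
def backproject_ground(u, v, fx, fy, cx, cy, cam_height):
    """Recover (lateral offset, forward distance) of a pixel on a flat
    road, for a pinhole camera at cam_height above the ground looking
    along +Z with its y-axis pointing down."""
    # Ray direction in normalized camera coordinates
    dx, dy = (u - cx) / fx, (v - cy) / fy
    if dy <= 0:
        raise ValueError("pixel is at or above the horizon")
    # Intersect the ray with the ground plane y = cam_height
    z = cam_height / dy
    return (dx * z, z)
```

This illustrates why the image-based mode is less accurate than the laser radar mode: the recovered distance is sensitive to the flat-road and calibration assumptions.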
When the camera (or other camera equipment) only has a photographing function, the acquisition frequency can be set, and the camera (or other camera equipment) is used for acquiring image information in a preset area according to the acquisition frequency; when the camera equipment has the video recording function, the image information in the preset area is collected by using the video recording function of the camera equipment.
Comparing the two generation modes: the accuracy of the relative position and/or relative travel track obtained in the second mode is lower than that obtained in the first mode, but the cost of a camera (or other imaging device) is lower than that of a laser radar. Therefore, when the accuracy requirement is low, the second generation mode can be preferred; when the accuracy requirement is high, the first generation mode can be preferred.
It is worth noting that the series of generation calculations performed after the image information, point cloud data, and pose data in the preset area are obtained may run either on the collection vehicle or on a server. When the network environment of the collection vehicle is poor, the calculations on the image information, point cloud data, and pose data can be completed on the collection vehicle and only the results uploaded to the server, avoiding the resource waste caused by uploading the raw image information, point cloud data, and pose data over a poor network.
And the construction unit is used for constructing a traffic scene according to the positions and/or the driving tracks of all the traffic participants.
The construction of the traffic scene mainly focuses on the state of the dynamic traffic participants, and after the position and/or the driving track of each traffic participant is generated, the traffic scene is constructed according to the driving tracks of all the traffic participants, that is, the traffic scene where the collection vehicle acquires the traffic scene data is truly simulated, and the constructed traffic scene not only comprises a building in a static state and the like, but also comprises the state of the real-time dynamic traffic participants.
The generation module 20 is configured to generate an interactive behavior in the traffic scene, where the interactive behavior includes a preparation phase and an interactive phase.
In a specific implementation, the interactive behavior in the traffic scene refers to a specific mutual behavior, designed to occur between the autonomous vehicle and some traffic participant after the specific traffic scene is constructed, so that the coping behavior of the autonomous vehicle when facing the interactive behavior can be simulated. The interactive behavior at least includes lateral movement perpendicular to a lane line, acceleration, deceleration, and steering of the relevant traffic participant, and of course other traffic behaviors as well.
The generation module 20 of the present disclosure comprises a first determination unit for determining interactive key objects based on predetermined conditions and determining a starting condition of the preparation phase of the interactive behaviour and first time information in case of a change of the driving state of the collection vehicle.
Here, the predetermined condition is a condition for simulating an interactive behavior between the collection vehicle of the autonomous vehicle and any traffic participant, including a distance, a relative orientation, an interactive event, and the like. For example, a traffic participant who is at a distance less than or equal to a first preset distance from the collection vehicle, is at a predetermined orientation, and has no other object (pedestrian, vehicle, etc.) with the collection vehicle is taken as an interaction key object, and the interaction key object has a certain interaction event, such as deceleration, lane change, etc.
When the driving state of the collection vehicle changes, the linear distance between the interactive key object and the autonomous vehicle is the sum of the first preset distance and the second preset distance; when the object is in the predetermined orientation, the current time information is taken as the first time information of the preparation stage of the interactive behavior, that is, the time at which the preparation stage of the interactive behavior begins.
The generation module 20 of the present disclosure comprises a first determination unit for determining the interaction critical object based on a predetermined condition.
In the process of generating the interactive behavior, the interactive key object needs to be determined first. In this step, the interactive key object is determined based on predetermined conditions, where the predetermined conditions are the conditions satisfied by a traffic participant that will have an interactive behavior with the collection vehicle used to simulate the autonomous vehicle, including a predetermined distance, a predetermined relative orientation, a predetermined behavior, and the like. For example, a traffic participant that is in front of the collection vehicle at the predetermined distance, is in the predetermined relative orientation, and has no other object (pedestrian, vehicle, etc.) between itself and the collection vehicle is determined to be the interactive key object when it performs a predetermined behavior, such as lateral movement perpendicular to a lane line, acceleration, deceleration, or turning, and the driving state of the collection vehicle changes.
The generation module 20 of the present disclosure further comprises a second determination unit for determining a first start condition of the preparation phase.
In order to ensure that the interactive behavior can be completely simulated in the simulation process, the preparation stage is set before the interactive stage, specifically, the starting point of the interactive stage is taken as the end point of the preparation stage, and the position which is a second preset distance away from the end point of the preparation stage is taken as the starting point of the preparation stage.
After the interactive key object is determined through the steps above, the collection vehicle, or the autonomous vehicle in the unmanned driving simulation, first faces the preparation stage of the interactive behavior when it enters the traffic scene, so a first starting condition of the preparation stage needs to be established. For example, when the distance between the collection vehicle and the interactive key object located in front of it is less than or equal to the first preset distance of 80 meters and the object is in the predetermined orientation, it is determined that the collection vehicle enters the preparation stage of the interactive behavior with the interactive key object, and the current time information is recorded as the first time information of the preparation stage of the interactive behavior.
The generating module 20 of the present disclosure further comprises a third determining unit for determining a second start condition of the interaction phase.
After the preparation stage of the interactive behavior, the collection vehicle, or the autonomous vehicle in the unmanned driving simulation, and the interactive key object enter the interaction stage of the interactive behavior. A second starting condition of the interaction stage is determined based on the collection vehicle; for example, the interaction stage is entered when the distance between the collection vehicle and the interactive key object located in front of it is less than or equal to the second preset distance of 30 meters and the object is in the predetermined orientation. At this time, when the interactive key object performs a predetermined behavior, for example lateral movement perpendicular to the lane line, and the driving state of the collection vehicle changes, the interaction stage of the interactive behavior is determined to have started, and the current time information is recorded as the second time information of the interaction stage of the interactive behavior.
The playing module 30 of the present disclosure is used for displaying traffic scenes and playing interactive behaviors, wherein the preparation stage is played at a first playing speed, and the interactive stage is played at a second playing speed.
Through the steps of the building module 10 and the generating module 20, after a specific traffic scene is built and the interactive behavior is generated in the traffic scene, the traffic scene is to be displayed, and the interactive behavior is played in the process of displaying the traffic scene, so that the coping strategy of the automatic driving vehicle facing the interactive behavior in the traffic scene is conveniently controlled.
In the actual simulation, the preparation stage is played at a first playing speed, and the interaction stage is played at a second playing speed. Here, the first play speed and the second play speed may be the same or different, for example, the first play speed may be greater than the second play speed, so that the preparation stage of the interactive behavior can achieve the effect of fast forward, for example, the first play speed or the second play speed may be customized by the user in a preset manner.
After the traffic scene begins to be displayed, in the process that the automatic driving vehicle runs in the constructed traffic scene, when the first starting condition of the preparation stage is determined to be met, the preparation stage is played based on a first playing speed; and when the second starting condition of the interaction phase is determined to be met, playing the interaction phase based on the second playing speed.
The adjusting module 40 of the present disclosure is configured to control the autonomous vehicle to travel according to a first strategy, and adjust the first playback speed and/or the second playback speed based on the first strategy.
In a specific implementation, when the interactive behavior is played in the constructed traffic scene through the playing module 30, the autonomous vehicle is controlled to travel according to the first strategy so that it travels in the specific traffic scene. If the first strategy keeps the autonomous vehicle in the same driving state as the collection vehicle, then after the first starting condition of the preparation stage of the interactive behavior is met, for example when the distance between the autonomous vehicle and the interactive key object equals the first preset distance of 80 meters, the preparation stage is played based on the first playing speed; and after the second starting condition of the interaction stage is met, for example when that distance equals the second preset distance of 30 meters, the interaction stage is played based on the second playing speed. While the autonomous vehicle travels, the first playing speed and/or the second playing speed are adjusted based on the first strategy to ensure that the traffic scene collected by the collection vehicle and the interactive behavior included in it are reproduced, and that the autonomous vehicle effectively completes the simulation training under the interactive behavior.
However, while the autonomous vehicle travels, if the first strategy differs from the driving state of the collection vehicle, the first playing speed and/or the second playing speed need to be adjusted based on the first strategy, so as to ensure that the traffic scene collected by the collection vehicle and the interactive behavior included in it are reproduced, and that the autonomous vehicle effectively completes the simulation training under the interactive behavior. Specifically, the adjustment module 40 includes a second acquisition unit for acquiring the vehicle speed value of the autonomous vehicle.
Generally, when the simulated autonomous vehicle travels, its driving state may be the same as that of the collection vehicle, for example the same vehicle speed value; but its driving state may also change, for example the vehicle speed value specified for the autonomous vehicle in the first strategy may differ from the vehicle speed value of the collection vehicle. The vehicle speed value of the autonomous vehicle therefore needs to be obtained first.
The adjusting module 40 further comprises an adjusting unit for adjusting the first playback speed and/or the second playback speed based on the vehicle speed value.
As described above, in controlling the autonomous vehicle to travel according to the first strategy, the first strategy controls the vehicle speed value of the autonomous vehicle. If the vehicle speed value of the autonomous vehicle differs from that of the collection vehicle, or fluctuates, then when driving in the constructed traffic scene the autonomous vehicle cannot accurately enter the preparation stage and the interaction stage of the interactive behavior according to the time information predetermined by the collection vehicle. Therefore, after the vehicle speed value of the autonomous vehicle is acquired, the first playing speed and/or the second playing speed are adjusted based on the vehicle speed value, and the adjusted speeds are used for the simulation training, so that the interactive behavior collected by the collection vehicle can be reproduced and the autonomous vehicle can effectively complete the simulation training under the interactive behavior. For example, if the vehicle speed value of the autonomous vehicle is increased by 10% relative to that of the collection vehicle, the first playing speed and the second playing speed may both be increased by 10%; alternatively, the preparation stage may be played at the original first playing speed while the interaction stage is played with the second playing speed increased by 20%. It should be noted that, before the simulation training, a user-defined strategy may be set manually, that is, the autonomous vehicle is controlled to travel according to the user-defined strategy, and the first playing speed and/or the second playing speed are adjusted based on the user-defined strategy.
Further, the interactive behavior also comprises a recovery phase, which is played at the second playing speed during the playing of the interactive behavior. The recovery phase starts at the end point of the interaction phase; that is, it is a preset phase after the interaction is completed.
The recovery phase may be defined as a distance after the end point of the interaction phase, as a time range after the moment the interaction phase is completed, and so on.
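As a sketch, the three phases and the playing speed each one uses can be modelled as below; the `Phase` type and its field names are assumptions for illustration, not the patent's data model.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str    # "preparation", "interaction", or "recovery"
    start: float # seconds into the recorded behavior
    end: float

def play_speed_for(phase, first_speed, second_speed):
    # The preparation phase plays at the first playing speed; the
    # interaction phase and the recovery phase that follows it both
    # play at the second playing speed.
    return first_speed if phase.name == "preparation" else second_speed
```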
In addition, the simulation method of the embodiment of the present disclosure further includes: when a predetermined event occurs during the playing of the interactive behavior, stopping the playing of the interactive behavior and adjusting the first strategy to a second strategy.
When the traffic scene is used to complete the simulation training of the autonomous vehicle and a predetermined event occurs during the playing of the interactive behavior, the playing of the interactive behavior is stopped. Predetermined events include a rear-end collision, running a red light, and the like.
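A minimal sketch of this stop condition, assuming predetermined events are reported as string labels (the labels themselves are illustrative assumptions):

```python
# Hypothetical event labels; the patent only names rear-end collision
# and running a red light as examples of predetermined events.
PREDETERMINED_EVENTS = {"rear_end_collision", "red_light_running"}

def should_stop_playback(observed_events):
    """Return True if any observed event is a predetermined event,
    in which case playing of the interactive behavior is stopped
    and the first strategy is adjusted to a second strategy."""
    return any(event in PREDETERMINED_EVENTS for event in observed_events)
```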
The training result of the simulation training, the first strategy, the first playing speed and the second playing speed are obtained, and the first strategy is adjusted to a second strategy based on them; that is, the first strategy is optimized, which ensures that the autonomous vehicle can cope with various interactive behaviors and drive safely.
In summary, a traffic scene is constructed in a simulator using the traffic scene data collected by a collection vehicle; an interactive behavior comprising a preparation phase and an interaction phase is generated in the traffic scene; the traffic scene is displayed and the interactive behavior is played, the preparation phase at a first playing speed and the interaction phase at a second playing speed; and the autonomous vehicle is controlled to run according to a first strategy, with the first playing speed and/or the second playing speed adjusted based on the vehicle speed value of the autonomous vehicle, so that the interactive behavior captured by the collection vehicle is reproduced and the autonomous vehicle effectively completes the simulation training under that behavior. Furthermore, the first strategy can be adjusted to a second strategy based on the training result, the first strategy, the first playing speed and the second playing speed; that is, the first strategy is optimized, ensuring that the autonomous vehicle can cope with various interactive behaviors and drive safely.
The third aspect of the present disclosure also provides a storage medium, which is a computer-readable medium storing a computer program; when the computer program is executed by a processor, it implements the method provided in any embodiment of the present disclosure, including the following steps:
S11, constructing a traffic scene based on the traffic scene parameters;
S12, generating interactive behaviors in the traffic scene, wherein the interactive behaviors comprise a preparation phase and an interactive phase;
S13, displaying the traffic scene and playing the interactive behavior, wherein the preparation phase is played at a first playing speed, and the interactive phase is played at a second playing speed;
S14, controlling the automatic driving vehicle to run according to a first strategy, and adjusting the first playing speed and/or the second playing speed based on the first strategy.
When the computer program is executed by the processor to construct the traffic scene based on the traffic scene parameters, the processor specifically executes the following steps: acquiring state information of traffic participants through a collection vehicle in a predetermined manner; generating a position and/or a travel track of each traffic participant based on the state information; and constructing the traffic scene according to the positions and/or travel tracks of all the traffic participants.
When the computer program is executed by the processor to acquire the state information of the traffic participants through the collection vehicle in a predetermined manner, the predetermined manner comprises: a lidar-based manner and/or an image-acquisition-device-based manner.
When the computer program is executed by the processor to generate the position and/or travel track of each traffic participant based on the state information, the processor specifically executes the following steps: generating the relative position and/or relative travel track of each traffic participant with respect to the collection vehicle through a point-cloud recognition and tracking algorithm, in the lidar-based manner; and/or generating a two-dimensional position and/or two-dimensional travel track through an image recognition and tracking algorithm, in the image-acquisition-based manner, and generating the relative position and/or relative travel track of each traffic participant with respect to the collection vehicle through a three-dimensional reconstruction algorithm based on the two-dimensional position and/or two-dimensional travel track;
and determining the position and/or travel track of each traffic participant in the world coordinate system based on the pose information of the collection vehicle in the world coordinate system and the relative position and/or relative travel track.
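The final step, converting each participant's relative position into the world coordinate system using the collection vehicle's pose, can be sketched in two dimensions as follows. This is a minimal rigid-body transform under simplifying assumptions (planar motion, a single position rather than a full track); the real system would handle a 3-D pose.

```python
import math

def to_world(rel_x, rel_y, ego_x, ego_y, ego_yaw):
    """Rotate a participant's position from the collection vehicle's
    frame by the vehicle's yaw, then translate by the vehicle's world
    position, yielding the participant's world-frame position."""
    c, s = math.cos(ego_yaw), math.sin(ego_yaw)
    return (ego_x + c * rel_x - s * rel_y,
            ego_y + s * rel_x + c * rel_y)

# A participant 5 m ahead of a collection vehicle at (10, 20) heading +90 deg
# ends up 5 m north of the vehicle in world coordinates.
wx, wy = to_world(5.0, 0.0, ego_x=10.0, ego_y=20.0, ego_yaw=math.pi / 2)
```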
When the computer program is executed by the processor to generate the interactive behaviors in the traffic scene, the interactive behaviors comprising a preparation phase and an interactive phase, the processor specifically executes the following steps: determining an interaction key object based on a predetermined condition; determining a first starting condition of the preparation phase; and determining a second starting condition of the interactive phase.
When the computer program is executed by the processor to determine the interaction key object based on the predetermined condition, the predetermined behavior in the predetermined condition comprises at least one of: transverse movement perpendicular to the lane line, acceleration, deceleration, or steering.
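A sketch of how a participant's recorded track might be screened for these predetermined behaviors. The thresholds and the per-sample field names are illustrative assumptions, not values from the patent.

```python
def is_interaction_key_object(track, lateral_thresh=0.5, accel_thresh=1.0):
    """Flag a traffic participant whose track shows a predetermined
    behavior: transverse movement perpendicular to the lane line,
    acceleration/deceleration beyond a threshold, or steering."""
    if not track:
        return False
    lateral = max(abs(p["lateral_offset"]) for p in track)  # metres
    accel = max(abs(p["acceleration"]) for p in track)      # m/s^2
    steering = any(p.get("steering", False) for p in track)
    return lateral > lateral_thresh or accel > accel_thresh or steering

# A cut-in track drifts laterally; a cruising track does not.
cut_in = [{"lateral_offset": 0.1, "acceleration": 0.2},
          {"lateral_offset": 0.9, "acceleration": 0.3}]
cruise = [{"lateral_offset": 0.05, "acceleration": 0.1}]
```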
When the computer program is executed by the processor to control the autonomous vehicle to run according to a first strategy and to adjust the first playing speed and/or the second playing speed based on the first strategy, the processor specifically executes the following steps: acquiring a vehicle speed value of the autonomous vehicle; and adjusting the first playing speed and/or the second playing speed based on the vehicle speed value.
When the computer program is executed by the processor, the interactive behavior further comprises a recovery phase, and during the playing of the interactive behavior the recovery phase is played at the second playing speed.
The steps executed by the processor further comprise: when a predetermined event occurs during the playing of the interactive behavior, stopping the playing of the interactive behavior and adjusting the first strategy to a second strategy.
The fourth aspect of the present disclosure also provides an electronic device. As shown in fig. 6, the electronic device at least includes a memory 601 and a processor 602; a computer program is stored on the memory 601, and the processor 602 implements the method provided by any embodiment of the present disclosure when executing the computer program on the memory 601. Illustratively, the method performed by the computer program of the electronic device is as follows:
S21, constructing a traffic scene based on the traffic scene parameters;
S22, generating interactive behaviors in the traffic scene, wherein the interactive behaviors comprise a preparation phase and an interactive phase;
S23, displaying the traffic scene and playing the interactive behavior, wherein the preparation phase is played at a first playing speed, and the interactive phase is played at a second playing speed;
S24, controlling the automatic driving vehicle to run according to a first strategy, and adjusting the first playing speed and/or the second playing speed based on the first strategy.
The processor, when executing the computer program stored on the memory for constructing the traffic scene based on the traffic scene parameters, further executes the following: acquiring state information of traffic participants through a collection vehicle in a predetermined manner; generating a position and/or a travel track of each traffic participant based on the state information; and constructing the traffic scene according to the positions and/or travel tracks of all the traffic participants.
When the processor executes the computer program stored on the memory for acquiring the state information of the traffic participants through the collection vehicle in a predetermined manner, the predetermined manner comprises: a lidar-based manner and/or an image-acquisition-device-based manner.
When the processor executes the computer program stored on the memory for generating the position and/or travel track of each traffic participant based on the state information, it further executes the following: generating the relative position and/or relative travel track of each traffic participant with respect to the collection vehicle through a point-cloud recognition and tracking algorithm, in the lidar-based manner; and/or generating a two-dimensional position and/or two-dimensional travel track through an image recognition and tracking algorithm, in the image-acquisition-based manner, and generating the relative position and/or relative travel track of each traffic participant with respect to the collection vehicle through a three-dimensional reconstruction algorithm based on the two-dimensional position and/or two-dimensional travel track;
and determining the position and/or travel track of each traffic participant in the world coordinate system based on the pose information of the collection vehicle in the world coordinate system and the relative position and/or relative travel track.
When the processor executes the computer program stored on the memory for generating the interactive behavior in the traffic scene, the interactive behavior comprising a preparation phase and an interaction phase, it further executes the following: determining an interaction key object based on a predetermined condition; determining a first starting condition of the preparation phase; and determining a second starting condition of the interaction phase.
When the processor executes the computer program stored on the memory for determining the interaction key object based on the predetermined condition, the predetermined behavior in the predetermined condition includes at least one of: transverse movement perpendicular to the lane line, acceleration, deceleration, steering.
The processor, when executing the computer program stored in the memory to control the autonomous vehicle to travel according to a first strategy and adjust the first playing speed and/or the second playing speed based on the first strategy, further executes: acquiring a vehicle speed value of the autonomous vehicle; and adjusting the first playing speed and/or the second playing speed based on the vehicle speed value.
When the processor executes the computer program stored on the memory, the interactive behavior further includes a recovery phase, and during the playing of the interactive behavior the recovery phase is played at the second playing speed.
When the processor executes the simulation method stored in the memory, the method further comprises the following steps: and when a predetermined event occurs in the process of playing the interactive behavior, stopping playing the interactive behavior, and adjusting the first strategy into a second strategy.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The storage medium may be included in the electronic device; or may exist separately without being assembled into the electronic device.
The storage medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the storage medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the storage media described above in this disclosure can be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any storage medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in this disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only an explanation of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure; for example, a technical solution formed by replacing the features described above with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
While the present disclosure has been described in detail with reference to the embodiments, the present disclosure is not limited to the specific embodiments, and those skilled in the art can make various modifications and alterations based on the concept of the present disclosure, and the modifications and alterations should fall within the scope of the present disclosure as claimed.

Claims (12)

1. A simulation method for simulating a scene, comprising:
constructing a traffic scene based on the traffic scene parameters;
generating an interactive behavior in the traffic scene, the interactive behavior comprising a preparation phase and an interactive phase;
displaying the traffic scene and playing the interactive behavior, wherein the preparation stage is played at a first playing speed, and the interactive stage is played at a second playing speed;
and controlling the automatic driving vehicle to run according to a first strategy, and adjusting the first playing speed and/or the second playing speed based on the first strategy.
2. The simulation method of claim 1, wherein the constructing a traffic scene based on traffic scene parameters comprises:
acquiring state information of traffic participants through an acquisition vehicle based on a predetermined mode;
generating a position and/or a travel track of each of the traffic participants based on the state information;
and constructing a traffic scene according to the positions and/or the driving tracks of all the traffic participants.
3. The simulation method according to claim 2, wherein the predetermined manner comprises: based on a lidar approach and/or based on an image acquisition device approach.
4. The simulation method according to claim 3, wherein the generating a position and/or a travel trajectory of each of the traffic participants based on the status information comprises:
generating the relative position and/or relative driving track of each traffic participant relative to the collection vehicle through a point cloud identification and tracking algorithm based on the laser radar mode; and/or,
generating a two-dimensional position and/or a two-dimensional driving track through an image recognition and tracking algorithm based on the image acquisition mode, and generating a relative position and/or a relative driving track of each traffic participant relative to the acquisition vehicle through a three-dimensional reconstruction algorithm based on the two-dimensional position and/or the two-dimensional driving track;
and determining the position and/or the travel track of each traffic participant in the world coordinate system based on the pose information of the collection vehicle in the world coordinate system and the relative position and/or the relative travel track.
5. The simulation method of claim 1, wherein generating an interactive behavior in the traffic scene, the interactive behavior comprising a preparation phase and an interactive phase, comprises:
determining an interaction key object based on a predetermined condition;
determining a first starting condition of the preparation phase;
determining a second start condition for the interaction phase.
6. The simulation method of claim 5, wherein the predetermined behavior in the predetermined condition comprises at least one of:
transverse movement perpendicular to the lane line, acceleration, deceleration, steering.
7. The simulation method of claim 1, wherein controlling the autonomous vehicle to travel according to a first strategy, adjusting the first playback speed and/or the second playback speed based on the first strategy comprises:
acquiring a vehicle speed value of the automatic driving vehicle;
and adjusting the first playing speed and/or the second playing speed based on the vehicle speed value.
8. The simulation method according to claim 1, wherein the interactive behavior further comprises a recovery phase, and during the playing of the interactive behavior, the recovery phase is played based on the second playing speed.
9. The simulation method according to any one of claims 1 to 8, further comprising:
and when a predetermined event occurs in the process of playing the interactive behavior, stopping playing the interactive behavior, and adjusting the first strategy into a second strategy.
10. An emulation apparatus for simulating a scene, comprising:
the construction module is used for constructing a traffic scene based on the traffic scene parameters;
a generation module for generating an interactive behavior in the traffic scene, the interactive behavior comprising a preparation phase and an interactive phase;
the playing module is used for displaying the traffic scene and playing the interactive behavior, wherein the preparation stage is played at a first playing speed, and the interactive stage is played at a second playing speed;
and the adjusting module is used for controlling the automatic driving vehicle to run according to a first strategy and adjusting the first playing speed and/or the second playing speed based on the first strategy.
11. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the simulation method of a simulated scene according to any one of claims 1 to 9.
12. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of simulating a scene according to any one of claims 1 to 9.
CN202010082737.8A 2020-02-07 2020-02-07 Simulation method and device for simulation scene, storage medium and electronic equipment Pending CN111816022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010082737.8A CN111816022A (en) 2020-02-07 2020-02-07 Simulation method and device for simulation scene, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN111816022A true CN111816022A (en) 2020-10-23

Family

ID=72847740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010082737.8A Pending CN111816022A (en) 2020-02-07 2020-02-07 Simulation method and device for simulation scene, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111816022A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489522A (en) * 2020-11-17 2021-03-12 北京三快在线科技有限公司 Method, device, medium and electronic device for playing simulation scene data
CN114694449A (en) * 2020-12-25 2022-07-01 华为技术有限公司 Method and device for generating vehicle traffic scene, training method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11123279A (en) * 1997-10-21 1999-05-11 Calsonic Corp Storage medium storing auxiliary program for control of running in racing game using computer
WO2013168470A1 (en) * 2012-05-10 2013-11-14 株式会社セガ Gaming device and recording medium
CN105279999A (en) * 2014-05-28 2016-01-27 中国电信股份有限公司 Method, terminal, server and system for processing LED broadcast information
CN106448336A (en) * 2016-12-27 2017-02-22 郑州爱普锐科技有限公司 Railway locomotive simulative operation training system and method thereof
CN108230817A (en) * 2017-11-30 2018-06-29 商汤集团有限公司 Vehicle drive analogy method and device, electronic equipment, system, program and medium
CN108334055A (en) * 2018-01-30 2018-07-27 赵兴华 The method of inspection, device, equipment and the storage medium of Vehicular automatic driving algorithm
CN108458880A * 2018-01-29 2018-08-28 上海测迅汽车科技有限公司 The unmanned controlled scenario testing method of vehicle
CN109085837A (en) * 2018-08-30 2018-12-25 百度在线网络技术(北京)有限公司 Control method for vehicle, device, computer equipment and storage medium
US10386986B1 (en) * 2005-01-14 2019-08-20 Google Llc Providing an interactive presentation environment
WO2019231522A1 (en) * 2018-05-31 2019-12-05 Nissan North America, Inc. Time-warping for autonomous driving simulation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhu Bing et al., "Research Progress on Scenario-Based Virtual Testing of Autonomous Vehicles," China Journal of Highway and Transport *


Similar Documents

Publication Publication Date Title
US20230376037A1 (en) Autonomous vehicle simulation system for analyzing motion planners
JP6548691B2 (en) Image generation system, program and method, simulation system, program and method
RU2725920C1 Autonomous vehicle operational management control
CN112789619B (en) Simulation scene construction method, simulation method and device
WO2022105394A1 (en) Simulation method and system, device, readable storage medium, and platform for autonomous driving
CN110795813A (en) Traffic simulation method and device
CN108399752A Driving violation pre-judging method, device, server and medium
CN110188482B (en) Test scene creating method and device based on intelligent driving
US11282164B2 (en) Depth-guided video inpainting for autonomous driving
CN106791613A Intelligent monitoring system combining 3D GIS and video
CN111752258A (en) Operation test of autonomous vehicle
WO2018066352A1 (en) Image generation system, program and method, and simulation system, program and method
Gruyer et al. From virtual to reality, how to prototype, test and evaluate new ADAS: Application to automatic car parking
CN109376664A (en) Machine learning training method, device, server and medium
CN114492022A (en) Road condition sensing data processing method, device, equipment, program and storage medium
CN111816022A (en) Simulation method and device for simulation scene, storage medium and electronic equipment
CN112860575A (en) Traffic environment fusion perception in-loop automatic driving algorithm simulation test method
CN116403174A (en) End-to-end automatic driving method, system, simulation system and storage medium
Shi et al. An integrated traffic and vehicle co-simulation testing framework for connected and autonomous vehicles
CN111142402A (en) Simulation scene construction method and device and terminal
Liu et al. A survey on autonomous driving datasets: Data statistic, annotation, and outlook
Guvenc et al. Simulation Environment for Safety Assessment of CEAV Deployment in Linden
Liu et al. A survey on autonomous driving datasets: Statistics, annotation quality, and a future outlook
CN112230632B (en) Method, apparatus, device and storage medium for automatic driving
CN111881121A (en) Automatic driving data filling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201023