CN112163280A - Method, device and equipment for simulating automatic driving scene and storage medium - Google Patents


Info

Publication number
CN112163280A
Authority
CN
China
Prior art keywords
automatic driving
particle
distance
determining
driving scene
Prior art date
Legal status
Granted
Application number
CN202011171275.3A
Other languages
Chinese (zh)
Other versions
CN112163280B (en)
Inventor
宋科科
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011171275.3A
Publication of CN112163280A
Application granted
Publication of CN112163280B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 30/00 Computer-aided design [CAD]
                    • G06F 30/10 Geometric CAD
                        • G06F 30/15 Vehicle, aircraft or watercraft design
                    • G06F 30/20 Design optimisation, verification or simulation
                        • G06F 30/25 Design optimisation, verification or simulation using particle-based methods
                • G06F 2111/00 Details relating to CAD techniques
                    • G06F 2111/10 Numerical modelling
                • G06F 2119/00 Details relating to the type or aim of the analysis or the optimisation
                    • G06F 2119/02 Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a method, an apparatus, a device, and a storage medium for simulating an automatic driving scene. A target ray emitted by an automatic driving object is obtained; the reflection probability of the target ray relative to an obstacle particle is then determined; occlusion information is determined based on the numerical relationship between the reflection probability and a first random number; when the occlusion information indicates that the target ray is occluded by the obstacle particle, a first distance between the automatic driving object and the obstacle particle is determined based on a second random number; and the position of the obstacle particle in the automatic driving scene is determined according to the first distance so as to simulate the weather state. A dynamic simulation of the automatic driving scene is thus realized, and because the position of each obstacle particle is set dynamically by the target rays emitted by the automatic driving object, the accuracy of weather-state simulation in the automatic driving scene is improved.

Description

Method, device and equipment for simulating automatic driving scene and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for simulating an automatic driving scene.
Background
The automatic driving technology comprises high-precision maps, environment perception, behavior decision, path planning, motion control, and other technologies, and has wide application prospects. When testing automatic driving, the credibility of the simulation results is closely related to how closely the simulation environment approximates the real environment.
Automatic driving simulation involves the simulation of a road scene, and the weather state often needs to be considered during road-scene simulation to improve the realism of the simulated scene. The weather state may be simulated with a random noise model; for example, for snowflakes, a random noise model distributes the snowflakes uniformly throughout the automatic driving scene.
However, snowflakes are not uniformly distributed in an actual scene, so a simulation scene built with a random noise model differs greatly from the actual scene, which reduces the accuracy of weather-state simulation in the automatic driving scene.
Disclosure of Invention
In view of this, the present application provides a simulation method for an automatic driving scene, which can effectively improve the accuracy of the simulation of the weather state in the automatic driving scene.
A first aspect of the present application provides a method for simulating an automatic driving scene, which may be applied to a system or program with an automatic-driving-scene simulation function in a terminal device, and specifically includes:
acquiring a target ray emitted by an automatic driving object in an automatic driving scene;
determining a reflection probability of the target ray relative to an obstacle particle, the reflection probability being related to the distance the target ray travels in the automatic driving scene, and the obstacle particle corresponding to a weather state in the automatic driving scene;
determining occlusion information based on a numerical relationship between the reflection probability and a first random number;
determining a first distance between the automatic driving object and the obstacle particle based on a second random number if the occlusion information indicates that the target ray is occluded by the obstacle particle, the second random number being selected from a range of values determined based on the first random number, and the value of the second random number being smaller than the value of the first random number;
and determining a position of the obstacle particle in the automatic driving scene according to the first distance, so as to simulate the weather state in the automatic driving scene.
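The steps above can be sketched as a small Monte Carlo routine for one ray. This is an illustrative sketch, not the patented implementation: the function name, the uniform distributions, and the linear mapping from the second random number to a distance are all assumptions.

```python
import random

def simulate_ray(reflect_prob, max_range, rng=random):
    """One ray: decide occlusion with a first random number, then place
    the obstacle particle using a second, smaller random number.
    (Illustrative sketch; the distance mapping is an assumption.)"""
    r1 = rng.random()                    # first random number in [0, 1)
    if r1 < reflect_prob:                # numerical relation -> occluded
        r2 = rng.uniform(0.0, r1)        # second random number, value < r1
        first_distance = r2 * max_range  # assumed linear distance model
        return ("occluded", first_distance)
    return ("not_occluded", None)
```

With `reflect_prob = 1.0` every ray is occluded by a particle somewhere in front of the sensor; with `reflect_prob = 0.0` every ray reaches the obstacle.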
Optionally, in some possible implementations of the present application, the determining a reflection probability of the target ray with respect to the obstacle particle includes:
determining a particle amount of the obstacle particle in the automatic driving scene;
inputting the particle quantity into a reflection probability model to obtain the reflection probability, wherein the reflection probability model is set based on a second distance from the target ray to an obstacle in the automatic driving scene.
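The patent does not give the functional form of the reflection probability model. One plausible form, consistent with the stated dependence on the particle amount and on the second distance to the obstacle, is Beer-Lambert-style attenuation; the formula and the calibration coefficient `k` below are assumptions, not taken from the text.

```python
import math

def reflection_probability(particle_amount, second_distance, k=1e-4):
    """Assumed model: the chance that a ray meets at least one particle
    grows with particle density and with the distance the beam must
    travel to the obstacle (Beer-Lambert-style attenuation)."""
    return 1.0 - math.exp(-k * particle_amount * second_distance)
```

The output is always a valid probability in [0, 1) and increases monotonically with both inputs, matching the qualitative behavior the claims describe.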
Optionally, in some possible implementations of the present application, the inputting the particle quantity into a reflection probability model to obtain the reflection probability includes:
acquiring a calibration reference object in the automatic driving scene;
determining a calibration coefficient of the reflection probability model based on the calibration reference;
determining the reflection probability model according to the calibration coefficient;
and inputting the particle quantity into the reflection probability model to obtain the reflection probability.
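If the reflection probability model is assumed to take the exponential form p = 1 - exp(-k·n·d), the calibration coefficient can be solved in closed form from a calibration reference object at a known distance. The model form and all names here are assumptions for illustration.

```python
import math

def calibrate_k(particle_amount, reference_distance, observed_prob):
    """Solve the assumed model p = 1 - exp(-k * n * d) for the
    calibration coefficient k, given a reference object at a known
    distance and the observed occlusion rate against it."""
    return -math.log(1.0 - observed_prob) / (particle_amount * reference_distance)
```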
Optionally, in some possible implementations of the present application, the method further includes:
determining a corresponding target distance when the reflection probability is a target value based on the reflection probability model;
and screening the second distance according to the target distance.
Optionally, in some possible implementations of the present application, the method further includes:
acquiring position information of the automatic driving object;
determining a road environment model of the autonomous driving object in the autonomous driving scene;
inputting the position information into the road surface environment model to obtain a particle variation amount, the road surface environment model being set based on an accumulated amount of the obstacle particles in the automatic driving scene;
and updating the particle quantity according to the particle variation.
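The position-dependent update of the particle amount can be sketched as a lookup into a road surface environment model. The region names and per-step rates below are purely hypothetical stand-ins; the patent only states that the model is set based on the accumulated amount of obstacle particles.

```python
# Hypothetical per-region accumulation rates (particles gained per step).
REGION_RATE = {"open_road": 0.8, "under_bridge": 0.0, "tree_lined": 0.4}

def update_particle_amount(amount, region):
    """Assumed road-environment model: the local particle amount changes
    with the vehicle's position (e.g. no snowfall under a bridge)."""
    variation = REGION_RATE.get(region, 0.5)  # default rate is an assumption
    return amount + variation
```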
Optionally, in some possible implementations of the present application, the determining a road environment model of the automatic driving object in the automatic driving scene includes:
determining a target component of the autonomous driving object;
and respectively calling corresponding road surface environment models based on the target component, wherein the road surface environment models corresponding to the target component have different calibration parameters.
Optionally, in some possible implementations of the present application, the method further includes:
determining a direction of travel of the autonomous driving object in the autonomous driving scenario;
and adjusting the calibration parameters of the road environment model based on the driving direction so as to match the target component.
Optionally, in some possible implementations of the present application, the method further includes:
acquiring environmental parameters in the automatic driving scene;
and calibrating the road surface environment model based on the environment parameters so as to update the calibration parameters.
Optionally, in some possible implementations of the present application, the method further includes:
acquiring congestion information of the automatic driving object in the automatic driving scene;
adjusting the environmental parameter based on the congestion information.
Optionally, in some possible implementations of the present application, the determining a first distance between the autonomous driving object and the obstacle particle based on a second random number includes:
calling a distance model corresponding to the automatic driving scene;
inputting the second random number into the distance model to determine a first distance between the autonomous driving object and the obstacle particle.
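One plausible distance model maps the second random number to a distance through the inverse CDF of an exponential free-path distribution, treating snowflake encounters along the beam as a Poisson process. The exponential form and the `mean_free_path` value are assumptions, not the patent's model.

```python
import math

def distance_from_random(second_random, mean_free_path=20.0):
    """Assumed distance model: exponential free path, sampled by
    inverse-CDF transform of a uniform random number in [0, 1)."""
    u = min(max(second_random, 1e-12), 1.0 - 1e-12)  # guard the log
    return -mean_free_path * math.log(1.0 - u)
```

Smaller random numbers yield particles closer to the sensor, so most simulated snow returns cluster near the vehicle, as they do for a real lidar in snowfall.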
Optionally, in some possible implementations of the present application, the method further includes:
if the occlusion information indicates that the target ray is not occluded by the obstacle particle, detecting the return time of the target ray;
determining a distance between the autonomous driving object and an obstacle based on the return time.
Optionally, in some possible implementations of the present application, the target ray is obtained by a laser radar, the obstacle particle is a snowflake, and the weather condition is snowing weather.
A second aspect of the present application provides an apparatus for simulating an automatic driving scene, including: an acquisition unit, configured to acquire a target ray emitted by an automatic driving object in an automatic driving scene;
a determining unit configured to determine a reflection probability of the target ray with respect to an obstacle particle, the reflection probability being related to a distance traveled by the target ray in the autonomous driving scene, the obstacle particle corresponding to a weather condition in the autonomous driving scene;
the determining unit is further configured to determine occlusion information based on a numerical relationship between the reflection probability and a first random number;
a calculation unit configured to determine a first distance between the autonomous driving object and the obstacle particle based on a second random number selected from a range of values determined based on the first random number if the occlusion information indicates that the target ray is occluded by the obstacle particle, the second random number having a value smaller than a value of the first random number;
a simulation unit, configured to determine a position of the obstacle particle in the automatic driving scene according to the first distance, so as to simulate the weather condition in the automatic driving scene.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to determine a particle amount of the obstacle particle in the automatic driving scene;
the determining unit is specifically configured to input the particle amount into a reflection probability model to obtain the reflection probability, where the reflection probability model is set based on a second distance from the target ray to an obstacle in the automatic driving scene.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to obtain a calibration reference object in the automatic driving scene;
the determining unit is specifically configured to determine a calibration coefficient of the reflection probability model based on the calibration reference;
the determining unit is specifically configured to determine the reflection probability model according to the calibration coefficient;
the determining unit is specifically configured to input the particle amount into the reflection probability model to obtain the reflection probability.
Optionally, in some possible implementation manners of the present application, the determining unit is specifically configured to determine, based on the reflection probability model, a corresponding target distance when the reflection probability is a target value;
the determining unit is specifically configured to screen the second distance according to the target distance.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to obtain position information of the automatic driving object;
the determining unit is specifically configured to determine a road environment model of the automatic driving object in the automatic driving scene;
the determination unit is specifically configured to input the position information into the road surface environment model to obtain a particle variation amount, where the road surface environment model is set based on an accumulated amount of the obstacle particles in the automatic driving scene;
the determining unit is specifically configured to update the particle amount according to the particle variation.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to determine a target component of the automatic driving object;
the determining unit is specifically configured to call corresponding road surface environment models respectively based on the target component, where calibration parameters of the road surface environment models corresponding to the target component are different.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to determine a driving direction of the automatic driving object in the automatic driving scene;
the determining unit is specifically configured to adjust a calibration parameter of the road environment model based on the driving direction to match the target component.
Optionally, in some possible implementations of the present application, the determining unit is specifically configured to acquire an environmental parameter in the automatic driving scene;
the determining unit is specifically configured to calibrate the road surface environment model based on the environment parameter, so as to update the calibration parameter.
Optionally, in some possible implementation manners of the present application, the determining unit is specifically configured to obtain congestion information of the autonomous driving object in the autonomous driving scene;
the determining unit is specifically configured to adjust the environmental parameter based on the congestion information.
Optionally, in some possible implementation manners of the present application, the calculating unit is specifically configured to invoke a distance model corresponding to the automatic driving scene;
the calculation unit is specifically configured to input the second random number into the distance model to determine a first distance between the autonomous driving object and the obstacle particle.
Optionally, in some possible implementation manners of the present application, the simulation unit is specifically configured to detect a return time of the target ray if the occlusion information indicates that the target ray is not occluded by the obstacle particle;
the simulation unit is specifically configured to determine a distance between the autonomous driving object and an obstacle based on the return time.
A third aspect of the present application provides a computer device, including: a memory, a processor, and a bus system; the memory is used for storing program code; the processor is configured to execute, according to instructions in the program code, the method for simulating an automatic driving scene of the first aspect or any implementation of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the method for simulating an automatic driving scene of the first aspect or any implementation of the first aspect.
According to an aspect of the present application, a computer program product or computer program is provided, including computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method for simulating an automatic driving scene provided in the first aspect or its various alternative implementations.
According to the technical scheme, the embodiment of the application has the following advantages:
A target ray emitted by an automatic driving object in an automatic driving scene is acquired; the reflection probability of the target ray relative to an obstacle particle is then determined, where the reflection probability is related to the distance the target ray travels in the automatic driving scene and the obstacle particle corresponds to the weather state in the automatic driving scene; occlusion information is determined based on the numerical relationship between the reflection probability and a first random number; when the occlusion information indicates that the target ray is occluded by the obstacle particle, a first distance between the automatic driving object and the obstacle particle is determined based on a second random number, where the second random number is selected from a value range determined by the first random number and is smaller than the first random number; the position of the obstacle particle in the automatic driving scene is then determined according to the first distance so as to simulate the weather state. A dynamic simulation of the automatic driving scene is thus realized: because the position of each obstacle particle is set dynamically by the target rays emitted by the automatic driving object, the obstacle particles blend accurately into the automatic driving scene, which improves the accuracy of weather-state simulation.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of the network architecture in which a simulation system for an automatic driving scene runs;
fig. 2 is a flowchart of the simulation of an automatic driving scene according to an embodiment of the present application;
fig. 3 is a flowchart of a simulation method for an automatic driving scene according to an embodiment of the present application;
fig. 4 is a scene schematic diagram of a simulation method for an automatic driving scene according to an embodiment of the present application;
fig. 5 is a scene schematic diagram of another simulation method for an automatic driving scene according to an embodiment of the present application;
fig. 6 is a scene flow diagram of a simulation method for an automatic driving scene according to an embodiment of the present application;
FIG. 7 is a flowchart of another simulation method for an automatic driving scene according to an embodiment of the present application;
fig. 8 is a scene flow diagram of another simulation method for an automatic driving scene according to an embodiment of the present application;
fig. 9 is a scene schematic diagram of another simulation method for an automatic driving scene according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a simulation apparatus for an automatic driving scene according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a method, an apparatus, a device, and a storage medium for simulating an automatic driving scene, which can be applied to a system or program with an automatic-driving-scene simulation function in a terminal device. A target ray emitted by an automatic driving object in an automatic driving scene is acquired; the reflection probability of the target ray relative to an obstacle particle is determined, where the reflection probability is related to the distance the target ray travels in the automatic driving scene and the obstacle particle corresponds to the weather state in the automatic driving scene; occlusion information is determined based on the numerical relationship between the reflection probability and a first random number; when the occlusion information indicates that the target ray is occluded by the obstacle particle, a first distance between the automatic driving object and the obstacle particle is determined based on a second random number, where the second random number is selected from a value range determined by the first random number and is smaller than the first random number; and the position of the obstacle particle in the automatic driving scene is determined according to the first distance so as to simulate the weather state. A dynamic simulation of the automatic driving scene is thus realized, and because the position of each obstacle particle is set dynamically by the target rays emitted by the automatic driving object, the obstacle particles blend accurately into the automatic driving scene, improving the accuracy of weather-state simulation.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some nouns that may appear in the embodiments of the present application are explained.
Laser radar: the remote sensing and military computing radar has the advantages that the remote sensing, military and vehicle-mounted applications can be realized, the transmitting power of the remote sensing and military computing radar is far larger than that of a vehicle-mounted laser radar, and physical properties such as the distance and the shape of a long-distance target can be identified. The adjustable parameters of remote sensing and military laser radar are numerous, the simulation model of the laser radar covers a plurality of physical parameters of the laser radar, such as wavelength, pulse width, energy, light beam size, atmospheric transmission, target physical properties and the like, and the factors of weather interference are few. The detection distance of the vehicle-mounted laser radar is only 200-500 m, and the recognizable physical properties are only distance and reflectivity, so that the vehicle-mounted laser radar is generally used for small machines such as vehicles and robots.
Vehicle laser radar: by emitting a light beam with a wavelength of about 900nm and returning after colliding with an obstacle, the processing unit calculates the distance of the obstacle from the return time difference and estimates the reflectivity of the target from the cross-sectional condition of the return light beam. The vehicle-mounted laser radar is easy to be influenced by the weather environment due to small volume, high integration degree and almost no open parameters.
It should be understood that the simulation method for an automatic driving scene provided by the present application may be applied to a system or program with an automatic-driving-scene simulation function in a terminal device, such as automatic driving simulation software. Specifically, the simulation system for an automatic driving scene may run in the network architecture shown in fig. 1, which is a diagram of the network architecture in which the system runs. As the figure shows, the simulation system can provide automatic-driving-scene simulation for multiple information sources: the automatic driving scene is simulated and rendered on the server side and sent to the terminal side for display, so that a user can perform related operations in the automatic driving scene. Fig. 1 shows various terminal devices, which may be computer devices; in an actual scene there may be more or fewer types of terminal devices participating in the simulation, the specific number and types depending on the actual scene, which is not limited here. In addition, fig. 1 shows one server, but multiple servers may also participate in an actual scene, the specific number depending on the actual scene.
In this embodiment, the server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, or a smart watch. The terminal and the server may be connected directly or indirectly through wired or wireless communication, and may be connected to form a blockchain network, which is not limited here.
It is understood that the simulation system for an automatic driving scene may run on a personal mobile terminal (for example, as an automatic driving simulation application), on a server, or on a third-party device that provides simulation of an automatic driving scene so as to obtain the simulation processing result for the information source. The specific simulation system may run in the above devices in the form of a program, run as a system component in those devices, or serve as a cloud service program; the specific operation mode depends on the actual scene and is not limited here.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence is a comprehensive discipline involving a wide range of fields, covering both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Among artificial intelligence techniques, automatic driving technology is developing particularly rapidly. Automatic driving technology comprises high-precision maps, environment perception, behavior decision, path planning, motion control and other technologies, and has broad application prospects. In the process of testing automatic driving, the credibility of an automatic driving simulation result is closely related to whether the simulation environment is close to the real environment.
The process of automatic driving simulation involves the simulation of a road scene, and weather conditions often need to be considered in the road scene simulation process to improve the realism of the simulated scene. For the simulation of a weather state, a random noise model can be adopted; for example, for the simulation of snowflakes, a random noise model makes the snowflakes uniformly distributed in the automatic driving scene.
However, snowflakes are not completely uniformly distributed in an actual scene, so a simulation scene that adopts the random noise model differs greatly from the actual scene, which affects the accuracy of weather-state simulation in the automatic driving scene.
In order to solve the above problems, the present application provides a method for simulating an automatic driving scene, which is applied to the process framework of automatic driving scene simulation shown in fig. 2. As shown in fig. 2, in the process framework provided in an embodiment of the present application, a laser beam is emitted by the laser radar of an automatic driving object, the laser beam propagates along a straight line, and whether the laser beam is blocked by an obstacle is determined. When the laser beam is returned by an obstacle (a road surface, a tree or a pole), the laser radar calculates the distance of the obstacle according to the return time difference. Alternatively, the laser beam is shielded by obstacle particles between the laser radar and the obstacle; for example, in a snowflake environment the obstacle particles are snowflakes, and the laser is shielded by snowflakes with a certain probability while propagating toward the obstacle. Because the reflectivity of snowflakes is very high, the laser is easily shielded by snowflakes during propagation and returned in advance, so the weather state corresponding to the snowflakes can be simulated according to the returned data.
It can be understood that the method provided by the present application may be a program written as a processing logic in a hardware system, or may be an automatic driving scene simulation device, and the processing logic is implemented in an integrated or external manner. As one implementation manner, the simulation device of the automatic driving scene obtains a target ray emitted by an automatic driving object in the automatic driving scene; then determining the reflection probability of the target ray relative to the barrier particles, wherein the reflection probability is related to the distance of the target ray in the automatic driving scene, and the barrier particles correspond to the weather state in the automatic driving scene; determining shielding information based on the numerical relation between the reflection probability and the first random number; when the shielding information indicates that the target ray is shielded by the barrier particle, determining a first distance between the automatic driving object and the barrier particle based on a second random number, wherein the second random number is obtained based on a numerical range determined by the first random number, and the numerical value of the second random number is smaller than that of the first random number; the position of the obstacle particle in the automatic driving scene is further determined according to the first distance so as to simulate the weather state in the automatic driving scene. Therefore, the dynamic simulation process of the automatic driving scene is realized, and the set position of the barrier particles is dynamically set by the target ray emitted by the automatic driving object, so that the barrier particles can be accurately blended into the automatic driving scene, and the accuracy of simulating the weather state in the automatic driving scene is improved.
The scheme provided by the embodiment of the application relates to an artificial intelligent automatic driving technology, and is specifically explained by the following embodiment:
With reference to the above process framework, the following describes the simulation method of an automatic driving scene in the present application. Please refer to fig. 3, which is a flow chart of a simulation method of an automatic driving scene provided in an embodiment of the present application. The simulation method may be executed by a terminal device, by a server, or by both the terminal device and the server; the following description takes execution by the terminal device as an example. The embodiment of the present application at least includes the following steps:
301. Acquire a target ray emitted by an automatic driving object in an automatic driving scene.
In this embodiment, the automatic driving object may be a vehicle, a motorcycle, or some movable intelligent devices, and the vehicle is taken as an example for description here.
It can be understood that the automatic driving scene is a scene in the automatic driving simulation software, and the target ray can be emitted by the simulated automatic driving object in the software; that is, laser beams are emitted in each direction in the scene, so that the relevant elements in the scene are simulated according to the reflection of the laser beams. Because the virtual elements in the automatic driving scene change continuously, such as during vehicle movement, the simulation of the corresponding elements is also performed dynamically.
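The per-direction ray emission described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the angular-resolution parameters are assumptions.

```python
import math

def lidar_ray_directions(n_azimuth=8, n_elevation=3):
    """Generate unit direction vectors for a simulated laser radar scan.

    n_azimuth / n_elevation are illustrative parameters: the scan sweeps
    the full horizontal circle and a small vertical fan around the horizon.
    """
    directions = []
    for i in range(n_azimuth):
        az = 2 * math.pi * i / n_azimuth          # azimuth angle
        for j in range(n_elevation):
            el = math.radians(-10 + 10 * j)        # elevation: -10, 0, +10 deg
            directions.append((
                math.cos(el) * math.cos(az),
                math.cos(el) * math.sin(az),
                math.sin(el),
            ))
    return directions
```

Each returned tuple is a unit vector; a simulated ray is then cast from the automatic driving object along each direction.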
In a possible scenario, as shown in fig. 4, fig. 4 is a scene diagram of a simulation method of an automatic driving scene provided in an embodiment of the present application. The automatic driving object a1 is shown in the figure, and laser beams are emitted to the surroundings by the automatic driving object a1, so that a laser beam intersects with an obstacle (road surface a2, tree a3, signboard a4, etc.) or intersects with obstacle particle a5 and returns in advance. Based on the above dynamic simulation process, the simulation of obstacle particle a5 is also generated dynamically and can be well integrated with the automatic driving scene.
302. The probability of reflection of the target ray relative to the barrier particle is determined.
In this embodiment, the reflection probability is related to the distance traveled by the target ray in the automatic driving scene, and the obstacle particles correspond to a weather state in the automatic driving scene. The obstacle particles and their corresponding weather states may be, for example, snowflakes (snowing weather), dust (sandstorm weather) or fallen leaves (strong wind weather). Because the reflectivity of snowflakes is high, the laser is easily shielded by snowflakes during transmission and returned in advance; accordingly, obstacle particles with high reflectivity and their corresponding weather states are all within the scope of the present application.
It can be understood that the distance traveled by the laser in the automatic driving scene is positively correlated with the reflection probability, as shown in fig. 5, which is a scene diagram of another simulation method of the automatic driving scene provided in the embodiment of the present application. Taking the obstacle particles as snowflakes as an example, as shown in (1) in fig. 5, since the laser emits from the center outward, the longer the distance, the more dispersed the laser and the greater the probability of encountering snowflakes. The snowflake distribution is uniform regardless of distance, while the laser disperses; therefore the probability over the laser emission process is cumulative, and the probability distribution of a single laser beam is derived according to the space volume, as shown in (2) of fig. 5; the probability distribution conforms to a power-law distribution.
Specifically, the determination process of the reflection probability may be performed based on a reflection probability model. Firstly, determining the particle quantity of barrier particles in an automatic driving scene; the particle quantities are then input into a reflection probability model to obtain a reflection probability, wherein the reflection probability model is set based on a second distance of the target ray to an obstacle in the autonomous driving scene. In one possible scenario, the reflection probability model may be expressed as:
I = a × D^b × S
wherein I is the probability of being returned by an obstacle particle; a and b are calibration coefficients; D is the obstacle distance, i.e. the second distance; S is the particle amount.
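A minimal sketch of the reflection probability model, in Python. The function name and the default values of the calibration coefficients a and b are illustrative assumptions; the patent only states that a and b must be calibrated per scene.

```python
def reflection_probability(distance, particle_amount, a=0.01, b=1.5):
    """Reflection probability model  I = a * D**b * S.

    distance        -- obstacle distance D (the "second distance")
    particle_amount -- S, e.g. the snowfall amount
    a, b            -- calibration coefficients (illustrative defaults)
    """
    return a * distance ** b * particle_amount
```

Note that, as stated later in the text, the value of I can exceed 1 for large distances; it is compared against a random number rather than used as a normalized probability.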
For the formula, because the laser radars are set differently in different automatic driving scenes, a and b need to be calibrated, namely, calibration reference objects in the automatic driving scenes are obtained; then determining a calibration coefficient of the reflection probability model based on a calibration reference object; determining a reflection probability model according to the calibration coefficient; and then inputting the particle quantity into a reflection probability model to obtain the reflection probability. Therefore, the matching degree of the reflection probability model and the automatic driving scene is ensured, and the accuracy of the reflection probability calculation is improved.
Optionally, because the laser radar has a maximum detection distance or a feasible detection range, the detected second distance may be screened; that is, the target distance corresponding to the reflection probability taking a target value is determined based on the reflection probability model, and the second distance is then screened according to the target distance. For example, the distance at which the reflection probability I is 1 is taken as the maximum detection distance of the laser radar, and data with a distance greater than the maximum detection distance is truncated, so that the accuracy of the reflection probability calculation is ensured.
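The screening step can be sketched by solving I = 1 in the reflection probability model for D. Function names and the default calibration values are illustrative assumptions.

```python
def max_detection_distance(particle_amount, a=0.01, b=1.5):
    """Distance D at which the modelled reflection probability reaches 1,
    obtained by solving 1 = a * D**b * S for D."""
    return (1.0 / (a * particle_amount)) ** (1.0 / b)

def filter_distances(distances, particle_amount, a=0.01, b=1.5):
    """Truncate detections whose distance exceeds the maximum
    detection distance of the simulated laser radar."""
    d_max = max_detection_distance(particle_amount, a, b)
    return [d for d in distances if d <= d_max]
```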
303. Determine occlusion information based on the numerical relationship between the reflection probability and the first random number.
In this embodiment, the first random number I1 is a random number used in scene simulation to automatically generate obstacle particles at different positions. That is, the simulation software calculates the probability of being shielded by a snowflake according to the snowfall amount, and then judges, according to the relation (shielding information) between the first random number I1 and the reflection probability, whether the ray is returned in advance by a snowflake or an obstacle is successfully detected. If the shielding information is that the reflection probability is less than or equal to the first random number, the target ray is not shielded by the obstacle particles.
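The occlusion decision reduces to a single comparison. The function name is an assumption; I1 is drawn uniformly from [0, 1) by the caller.

```python
def is_occluded(reflection_prob, i1):
    """Shielding information: the ray is treated as shielded by an obstacle
    particle when the first random number I1 is smaller than the reflection
    probability; if the reflection probability is less than or equal to I1,
    the obstacle itself is detected."""
    return i1 < reflection_prob
```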
304. If the occlusion information indicates that the target ray is occluded by the obstacle particle, a first distance between the autonomous driving object and the obstacle particle is determined based on the second random number.
In the present embodiment, the second random number I2 is obtained based on the numerical range determined by the first random number I1, and the value of the second random number I2 is smaller than that of the first random number I1; that is, I2 is a random number in the numerical range 0 to I1, so that the calculated distance is smaller than the obstacle distance. In this way, the effect of complex scene simulation is achieved with a small number of random numbers.
Specifically, after the second random number is randomly obtained, a distance model corresponding to the automatic driving scene can be called; a second random number is then input to the distance model to determine a first distance between the autonomous driving object and the obstacle particle.
In one possible scenario, the distance model may be represented as:
D = (I/(a×S))^(1/b)
wherein I is the probability of being returned by an obstacle particle; a and b are calibration coefficients; D is the obstacle particle distance, i.e. the first distance; S is the particle amount. That is, the distance model is obtained by rearranging the reflection probability model, and the calibrated parameters of the reflection probability model are adopted. Through this formula, the distance can be calculated in reverse from the probability; the simulation software calculates the obstacle particle distance with a smaller probability value, thereby realizing the simulation of obstacle particles at different layers.
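Since the distance model is the rearrangement of the reflection probability model solved for D, it can be sketched directly. The function name and default calibration values are illustrative assumptions.

```python
def particle_distance(i2, particle_amount, a=0.01, b=1.5):
    """Distance model obtained by inverting I = a * D**b * S:

        D = (I2 / (a * S)) ** (1 / b)

    i2 is the second random number, drawn from the range 0 to I1, so the
    computed particle distance stays below the obstacle distance."""
    return (i2 / (a * particle_amount)) ** (1.0 / b)
```

Substituting the computed distance back into the reflection probability model recovers the input probability, which confirms the two models are consistent rearrangements of one formula.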
Optionally, when the occlusion information indicates that the target ray is not occluded by the obstacle particle, the return time of the target ray is detected; and then determining the distance between the automatic driving object and the obstacle based on the return time so as to facilitate the judgment of risk avoidance of the automatic driving object.
305. The position of the obstacle particle in the automatic driving scene is determined according to the first distance to simulate a weather state in the automatic driving scene.
In this embodiment, a large number of obstacle particles may be obtained by continuously repeating the determination process of the first distance and the calculation of different random numbers, and the distribution of the obstacle particles is dynamically distributed based on the relative relationship between the autonomous driving object and the obstacle, for example, the distribution of snowflakes, so that a snowing scene may be better simulated.
Specifically, taking the simulation of snowing weather as an example, as shown in fig. 6, fig. 6 is a scene flow diagram of the simulation method of the automatic driving scene provided in the embodiment of the present application. That is, the target ray is emitted by the laser radar, the obstacle particles are snowflakes, and the weather state is snowing weather. First a ray is simulated and emitted outward from the laser radar, and the probability of the ray being shielded by snowflakes is calculated; if the ray is shielded, the distance and reflectivity of the snowflake are calculated, and if it is not shielded, the distance of the obstacle is calculated. The reflection probability model determines the probability of being shielded during laser radar transmission; this probability is positively correlated with distance, so the farther the transmission distance, the greater the probability of being shielded. If a laser ray is judged to be shielded by a snowflake, the distance model is used to calculate where it is shielded. For the generation of the random numbers, the simulation algorithm calculates the reflection probability according to the obstacle distance (the reflection probability may be larger than 1), then randomly selects a number I1 (0-1) and judges the relation between the reflection probability and I1: if I1 is larger than the reflection probability, the laser is not shielded; otherwise the laser is judged to be shielded. When the ray is judged to be shielded, a number I2 is randomly generated in the range 0-I1 and substituted into the distance model to calculate the shielded distance. In this way, a plurality of snowflakes distributed at different layers are generated, and the simulation of snowing weather is performed in the automatic driving scene.
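The per-ray procedure above can be combined into one sketch. The function name and default calibration values are illustrative assumptions; repeating the call over many rays yields snowflakes at different layers.

```python
import random

def simulate_ray(obstacle_distance, particle_amount, a=0.01, b=1.5, rng=None):
    """One pass of the per-ray snow simulation:
    1) reflection probability from obstacle distance and snowfall amount,
    2) first random number I1 decides shielding,
    3) if shielded, second random number I2 in the range 0 to I1 is
       substituted into the distance model to place the snowflake."""
    if rng is None:
        rng = random
    prob = a * obstacle_distance ** b * particle_amount
    i1 = rng.random()
    if i1 >= prob:
        # not shielded: the obstacle itself is detected
        return ("obstacle", obstacle_distance)
    i2 = rng.uniform(0.0, i1)
    # distance model: D = (I2 / (a * S)) ** (1 / b)
    return ("snowflake", (i2 / (a * particle_amount)) ** (1.0 / b))
```

Because I2 < I1 < I, the snowflake distance computed from the distance model is always smaller than the obstacle distance, matching the layered distribution described above.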
By combining the above embodiments, the target ray emitted by the automatic driving object in the automatic driving scene is acquired; then determining the reflection probability of the target ray relative to the barrier particles, wherein the reflection probability is related to the distance of the target ray in the automatic driving scene, and the barrier particles correspond to the weather state in the automatic driving scene; determining shielding information based on the numerical relation between the reflection probability and the first random number; when the shielding information indicates that the target ray is shielded by the barrier particle, determining a first distance between the automatic driving object and the barrier particle based on a second random number, wherein the second random number is obtained based on a numerical range determined by the first random number, and the numerical value of the second random number is smaller than that of the first random number; the position of the obstacle particle in the automatic driving scene is further determined according to the first distance so as to simulate the weather state in the automatic driving scene. Therefore, the dynamic simulation process of the automatic driving scene is realized, and the set position of the barrier particles is dynamically set by the target ray emitted by the automatic driving object, so that the barrier particles can be accurately blended into the automatic driving scene, and the accuracy of simulating the weather state in the automatic driving scene is improved.
In addition, the road surface and the environment also affect the simulation process in the automatic driving scene, and the scene will be described below. Referring to fig. 7, fig. 7 is a flowchart of another method for simulating an automatic driving scenario according to an embodiment of the present application, where the embodiment of the present application at least includes the following steps:
701. acquiring a target ray emitted by an automatic driving object in an automatic driving scene;
in this embodiment, relevant features of step 701 are similar to those of step 301 in the embodiment shown in fig. 3, and are not described herein again.
702. Determine the road surface environment condition in the automatic driving scene to determine the particle variation.
In this embodiment, the road surface environment conditions include a wind speed condition, a road congestion condition and the like; for example, snowflakes may be raised by wind, and may also be raised by other vehicles traveling on the road.
Specifically, the influence of the road environment on the particle variation can be represented by a road environment model, that is, the position information of the automatic driving object is obtained first; then determining a road environment model of an automatic driving object in an automatic driving scene; inputting the position information into a road environment model to obtain the particle variation, wherein the road environment model is set based on the accumulation amount of the obstacle particles in the automatic driving scene; and further updating the particle amount according to the particle variation.
In one possible scenario, snow raised by the wind (natural wind + vehicle wind) in the presence of snow increases the snow concentration. For automobiles, the concentration gradually dissipates with distance from the vehicle. Therefore, the road environment model can be illustrated by using the following formula and taking snowflake simulation as an example:
S′ = t×J/D^p

wherein S′ is the snowfall-amount increase value, t and p are calibration parameters, J is the snow accumulation amount, and D is the distance from the position to the vehicle. The road surface environment model thus supplements the snow shielding, simulates the phenomenon that the concentration of the surrounding snow increases while the vehicle is running, and describes the relationship between the snow accumulation amount and the snowfall-amount increase value.
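The road surface environment model can be sketched as follows. The published formula appears only as an image in the original source, so the specific form S′ = t·J/D^p used here is an assumption consistent with the description that the raised-snow increase grows with the snow accumulation amount and gradually dissipates with distance from the vehicle; the function name and default t, p values are likewise illustrative.

```python
def snowfall_increase(accumulation, distance_to_vehicle, t=0.1, p=1.0):
    """Road-environment model sketch: snowfall-amount increase S' grows with
    the snow accumulation J and dissipates with distance D from the vehicle.

    Assumed form:  S' = t * J / D**p   (t, p are calibration parameters)."""
    return t * accumulation / distance_to_vehicle ** p
```

The returned S′ is then added to the particle amount S before the reflection probability is computed, as described in step 703.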
Optionally, the snow concentrations at the vehicle head and the vehicle tail are inconsistent, so the head and the tail should be calibrated separately. Therefore, the target component (head, tail or another part) of the automatic driving object is determined first; corresponding road surface environment models are then called based on the target component, and the calibration parameters of the road surface environment model differ for each target component, so that the accuracy of the simulation process is ensured.
In addition, because the concentration simulation of the vehicle head and tail is related to the driving direction (that is, the influence on snowflake concentration differs between reversing and normal driving), the driving direction of the automatic driving object in the automatic driving scene can be determined; the calibration parameters of the road surface environment model are then adjusted based on the driving direction to match the target component, which further improves the accuracy of the simulation process.
It can be understood that, for the calibration parameters in the above formula, it may be corresponding to an automatic driving scenario, so that the environmental parameters (e.g. wind speed information) in the automatic driving scenario may be obtained first; and calibrating the road environment model based on the environment parameters so as to update the calibration parameters.
Optionally, because the congestion condition of the vehicle may also affect the snowflake concentration, congestion information of an automatic driving object in an automatic driving scene may also be obtained; the environmental parameter is adjusted based on the congestion information. For example, when the congestion information indicates congestion, the vehicle runs slowly, and the snow concentration needs to be increased, so that the accuracy of simulation is ensured.
Compared with the prior art, in which a random noise model is directly adopted and the simulation result contains a large number of random values, the access of the reflection probability model, the distance model and the road surface environment model, together with the hierarchical distribution of I1 and I2, guarantees the degree of integration between the obstacle particle distribution and the scene and improves simulation accuracy.
703. The reflection probability of the target ray with respect to the barrier particle is determined based on the updated particle amount.
In this embodiment, S' in step 702 is finally used as an added value of S in the reflection probability model, and is substituted and calculated, so as to obtain the reflection probability.
704. And determining occlusion information based on the numerical relationship of the reflection probability and the first random number.
705. If the occlusion information indicates that the target ray is occluded by the obstacle particle, a first distance between the autonomous driving object and the obstacle particle is determined based on the second random number.
706. The position of the obstacle particle in the automatic driving scene is determined according to the first distance to simulate a weather state in the automatic driving scene.
In this embodiment, the relevant features of step 704-706 are similar to those of step 303-305 in the embodiment shown in fig. 3, and are not described herein again.
In a possible scenario, the calling process of the reflection probability model, the distance model and the road surface environment model (road snow model) is shown in fig. 8; fig. 8 is a scene flow diagram of another simulation method of an automatic driving scene provided in the embodiment of the present application. For the calling process of the reflection probability model and the distance model, refer to the description of fig. 6, which is not repeated here. The road surface environment model is called to update the snowfall amount (particle amount) used in the reflection probability model calculation, thereby ensuring the accuracy of the snowfall simulation and realizing differentiated snowflake distribution.
In addition, the simulation scene of the above embodiment may also be displayed in a vehicle terminal as an interface display of an automatic driving safety guard or a passenger for an automatic driving process, as shown in fig. 9, fig. 9 is a scene schematic diagram of another simulation method of an automatic driving scene provided in the embodiment of the present application; compared with a simple noise model, the simulation model of the snowing weather of the laser radar can simulate a more real snowing result, is favorable for the comprehensive test of an automatic driving perception algorithm, and promotes the landing of automatic driving application.
In order to better implement the above-mentioned aspects of the embodiments of the present application, the following also provides related apparatuses for implementing the above-mentioned aspects. Referring to fig. 10, fig. 10 is a schematic structural diagram of a simulation apparatus for an automatic driving scene according to an embodiment of the present application, where the simulation apparatus 1000 includes:
an obtaining unit 1001 configured to obtain a target ray emitted by an autonomous driving object in an autonomous driving scene;
a determining unit 1002, configured to determine a reflection probability of the target ray relative to an obstacle particle, the reflection probability being related to a distance traveled by the target ray in the automatic driving scene, the obstacle particle corresponding to a weather condition in the automatic driving scene;
the determining unit 1002 is further configured to determine occlusion information based on a numerical relationship between the reflection probability and a first random number;
a calculating unit 1003, configured to determine a first distance between the autonomous driving object and the obstacle particle based on a second random number if the blocking information indicates that the target ray is blocked by the obstacle particle, where the second random number is selected from a range of values determined based on the first random number, and a value of the second random number is smaller than a value of the first random number;
a simulation unit 1004 for determining a position of the obstacle particle in the auto-driving scene according to the first distance to simulate the weather condition in the auto-driving scene.
Optionally, in some possible implementations of the present application, the determining unit 1002 is specifically configured to determine a particle amount of the obstacle particle in the automatic driving scene;
the determining unit 1002 is specifically configured to input the particle amount into a reflection probability model to obtain the reflection probability, where the reflection probability model is set based on a second distance from the target ray to an obstacle in the automatic driving scene.
Optionally, in some possible implementations of the present application, the determining unit 1002 is specifically configured to obtain a calibration reference object in the automatic driving scene;
the determining unit 1002 is specifically configured to determine a calibration coefficient of the reflection probability model based on the calibration reference;
the determining unit 1002 is specifically configured to determine the reflection probability model according to the calibration coefficient;
the determining unit 1002 is specifically configured to input the particle amount into the reflection probability model to obtain the reflection probability.
Optionally, in some possible implementations of the present application, the determining unit 1002 is specifically configured to determine, based on the reflection probability model, a corresponding target distance when the reflection probability is a target value;
the determining unit 1002 is specifically configured to filter the second distance according to the target distance.
Optionally, in some possible implementations of the present application, the determining unit 1002 is specifically configured to obtain position information of the automatic driving object;
the determining unit 1002 is specifically configured to determine a road environment model of the automatic driving object in the automatic driving scene;
the determining unit 1002 is specifically configured to input the position information into the road surface environment model to obtain a particle variation amount, where the road surface environment model is set based on an accumulated amount of the obstacle particles in the automatic driving scene;
the determining unit 1002 is specifically configured to update the particle amount according to the particle variation.
Optionally, in some possible implementations of the present application, the determining unit 1002 is specifically configured to determine a target component of the automatic driving object;
the determining unit 1002 is specifically configured to call corresponding road environment models respectively based on the target component, where calibration parameters of the road environment models corresponding to the target component are different.
Optionally, in some possible implementations of the present application, the determining unit 1002 is specifically configured to determine a driving direction of the automatic driving object in the automatic driving scene;
the determining unit 1002 is specifically configured to adjust calibration parameters of the road environment model based on the driving direction to match the target component.
Optionally, in some possible implementations of the present application, the determining unit 1002 is specifically configured to obtain an environmental parameter in the automatic driving scene;
the determining unit 1002 is specifically configured to calibrate the road surface environment model based on the environment parameter, so as to update the calibration parameter.
Optionally, in some possible implementation manners of the present application, the determining unit 1002 is specifically configured to obtain congestion information of the automatic driving object in the automatic driving scene;
the determining unit 1002 is specifically configured to adjust the environment parameter based on the congestion information.
Optionally, in some possible implementation manners of the present application, the calculating unit 1003 is specifically configured to invoke a distance model corresponding to the automatic driving scene;
the calculation unit 1003 is specifically configured to input the second random number into the distance model to determine a first distance between the autonomous driving object and the obstacle particle.
Optionally, in some possible implementations of the present application, the simulation unit 1004 is specifically configured to detect a return time of the target ray if the occlusion information indicates that the target ray is not occluded by the obstacle particle;
the simulation unit 1004 is specifically configured to determine a distance between the autonomous driving object and an obstacle based on the return time.
Acquiring a target ray emitted by an automatic driving object in an automatic driving scene; then determining the reflection probability of the target ray relative to the barrier particles, wherein the reflection probability is related to the distance of the target ray in the automatic driving scene, and the barrier particles correspond to the weather state in the automatic driving scene; determining shielding information based on the numerical relation between the reflection probability and the first random number; when the shielding information indicates that the target ray is shielded by the barrier particle, determining a first distance between the automatic driving object and the barrier particle based on a second random number, wherein the second random number is obtained based on a numerical range determined by the first random number, and the numerical value of the second random number is smaller than that of the first random number; the position of the obstacle particle in the automatic driving scene is further determined according to the first distance so as to simulate the weather state in the automatic driving scene. Therefore, the dynamic simulation process of the automatic driving scene is realized, and the set position of the barrier particles is dynamically set by the target ray emitted by the automatic driving object, so that the barrier particles can be accurately blended into the automatic driving scene, and the accuracy of simulating the weather state in the automatic driving scene is improved.
An embodiment of the present application further provides a terminal device. Fig. 11 is a schematic structural diagram of another terminal device provided in an embodiment of the present application. For convenience of description, only the portions related to the embodiment of the present application are shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiments of the present application. The terminal may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like. The following description takes a mobile phone as an example:
fig. 11 is a block diagram of a partial structure of the mobile phone related to the terminal provided in an embodiment of the present application. Referring to fig. 11, the mobile phone includes: a radio frequency (RF) circuit 1110, a memory 1120, an input unit 1130, a display unit 1140, a sensor 1150, an audio circuit 1160, a wireless fidelity (WiFi) module 1170, a processor 1180, and a power supply 1190. Those skilled in the art will appreciate that the mobile phone structure shown in fig. 11 is not limiting; the phone may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes each component of the mobile phone in detail with reference to fig. 11:
The RF circuit 1110 may be used for receiving and transmitting signals during a message transmission or a call. In particular, it receives downlink information from a base station and passes it to the processor 1180 for processing, and transmits uplink data to the base station. In general, the RF circuit 1110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1110 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, short message service (SMS), and the like.
The memory 1120 may be used to store software programs and modules, and the processor 1180 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phonebook). Further, the memory 1120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 1130 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. The touch panel 1131, also referred to as a touch screen, can collect touch operations performed by a user on or near it (for example, operations performed on or near the touch panel 1131 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 1180, and can also receive and execute commands sent by the processor 1180. The touch panel 1131 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1131, the input unit 1130 may include other input devices 1132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and the like.
The display unit 1140 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 1140 may include a display panel 1141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1131 may cover the display panel 1141; when the touch panel 1131 detects a touch operation on or near it, the operation is transmitted to the processor 1180 to determine the type of the touch event, and the processor 1180 then provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in fig. 11 the touch panel 1131 and the display panel 1141 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1131 and the display panel 1141 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 1150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1141 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1141 and/or the backlight when the mobile phone is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary; it can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tap detection). Other sensors that may be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described further here.
The audio circuit 1160, the speaker 1161, and the microphone 1162 may provide an audio interface between the user and the mobile phone. The audio circuit 1160 may transmit the electrical signal converted from received audio data to the speaker 1161, which converts it into a sound signal for output; conversely, the microphone 1162 converts a collected sound signal into an electrical signal, which the audio circuit 1160 receives and converts into audio data. After being processed by the processor 1180, the audio data is transmitted, for example, to another mobile phone via the RF circuit 1110, or output to the memory 1120 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1170, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband internet access. Although fig. 11 shows the WiFi module 1170, it is understood that it is not an essential component of the mobile phone and may be omitted as needed within the scope not changing the essence of the invention.
The processor 1180 is the control center of the mobile phone. It connects the various parts of the whole phone through various interfaces and lines, and performs the phone's functions and processes data by running or executing the software programs and/or modules stored in the memory 1120 and calling the data stored in the memory 1120, thereby monitoring the phone as a whole. Optionally, the processor 1180 may include one or more processing units; optionally, the processor 1180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, and application programs, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 1180.
The mobile phone further includes a power supply 1190 (e.g., a battery) for supplying power to each component, and optionally, the power supply may be logically connected to the processor 1180 through a power management system, so that functions of managing charging, discharging, power consumption management, and the like are implemented through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiment of the present application, the processor 1180 included in the terminal further has the function of executing the steps of the above method for simulating an automatic driving scene.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1200 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 1222 (e.g., one or more processors), a memory 1232, and one or more storage media 1230 (e.g., one or more mass storage devices) storing an application program 1242 or data 1244. The memory 1232 and the storage medium 1230 may be transient storage or persistent storage. The program stored in the storage medium 1230 may include one or more modules (not shown), and each module may include a series of instruction operations for the server. Still further, the central processing unit 1222 may be configured to communicate with the storage medium 1230 and execute, on the server 1200, the series of instruction operations in the storage medium 1230.
The server 1200 may also include one or more power supplies 1226, one or more wired or wireless network interfaces 1250, one or more input/output interfaces 1258, and/or one or more operating systems 1241, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the management apparatus in the above-described embodiment may be based on the server configuration shown in fig. 12.
An embodiment of the present application further provides a computer-readable storage medium, in which simulation instructions of an automatic driving scenario are stored, and when the simulation instructions are executed on a computer, the computer is enabled to execute the steps executed by the simulation apparatus of the automatic driving scenario in the method described in the foregoing embodiments shown in fig. 3 to 9.
Also provided in an embodiment of the present application is a computer program product including instructions for simulating an automatic driving scenario, which when run on a computer causes the computer to perform the steps performed by the apparatus for simulating an automatic driving scenario in the method described in the foregoing embodiments shown in fig. 3 to 9.
The embodiment of the present application further provides a simulation system of an automatic driving scenario, where the simulation system of an automatic driving scenario may include the simulation apparatus of an automatic driving scenario in the embodiment described in fig. 10, the terminal device in the embodiment described in fig. 11, or the server described in fig. 12.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a simulation apparatus of an automatic driving scene, or a network device) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A method for simulating an automatic driving scene, comprising:
acquiring a target ray emitted by an automatic driving object in an automatic driving scene;
determining a reflection probability of the target ray relative to an obstacle particle, the reflection probability being related to a distance the target ray traveled in the autonomous driving scene, the obstacle particle corresponding to a weather condition in the autonomous driving scene;
determining occlusion information based on a numerical relationship of the reflection probability and a first random number;
determining a first distance between the autonomous driving object and the obstacle particle based on a second random number if the occlusion information indicates that the target ray is occluded by the obstacle particle, the second random number being selected from a range of values determined based on the first random number, and the value of the second random number being smaller than the value of the first random number;
determining a position of the obstacle particle in the autonomous driving scenario according to the first distance to simulate the weather condition in the autonomous driving scenario.
2. The method of claim 1, wherein the determining a probability of reflection of the target ray relative to an obstacle particle comprises:
determining a particle amount of the obstacle particle in the auto-driving scene;
inputting the particle quantity into a reflection probability model to obtain the reflection probability, wherein the reflection probability model is set based on a second distance from the target ray to an obstacle in the automatic driving scene.
3. The method of claim 2, wherein inputting the particle quantity into a reflection probability model to obtain the reflection probability comprises:
acquiring a calibration reference object in the automatic driving scene;
determining a calibration coefficient of the reflection probability model based on the calibration reference;
determining the reflection probability model according to the calibration coefficient;
and inputting the particle quantity into the reflection probability model to obtain the reflection probability.
4. The method of claim 2, further comprising:
determining a corresponding target distance when the reflection probability is a target value based on the reflection probability model;
and screening the second distance according to the target distance.
5. The method of claim 2, further comprising:
acquiring position information of the automatic driving object;
determining a road environment model of the autonomous driving object in the autonomous driving scene;
inputting the position information into the road surface environment model to obtain a particle variation amount, the road surface environment model being set based on an accumulated amount of the obstacle particles in the automatic driving scene;
and updating the particle quantity according to the particle variation.
6. The method of claim 5, wherein the determining the model of the road environment of the autonomous driving object in the autonomous driving scenario comprises:
determining a target component of the autonomous driving object;
and respectively calling corresponding road surface environment models based on the target component, wherein the road surface environment models corresponding to the target component have different calibration parameters.
7. The method of claim 6, further comprising:
determining a direction of travel of the autonomous driving object in the autonomous driving scenario;
and adjusting the calibration parameters of the road environment model based on the driving direction so as to match the target component.
8. The method of claim 6, further comprising:
acquiring environmental parameters in the automatic driving scene;
and calibrating the road surface environment model based on the environment parameters so as to update the calibration parameters.
9. The method of claim 8, further comprising:
acquiring congestion information of the automatic driving object in the automatic driving scene;
adjusting the environmental parameter based on the congestion information.
10. The method of claim 1, wherein the determining a first distance between the autonomous driving object and the obstacle particle based on a second random number comprises:
calling a distance model corresponding to the automatic driving scene;
inputting the second random number into the distance model to determine a first distance between the autonomous driving object and the obstacle particle.
11. The method according to any one of claims 1-10, further comprising:
if the occlusion information indicates that the target ray is not occluded by the obstacle particle, detecting a return time of the target ray;
determining a distance between the autonomous driving object and an obstacle based on the return time.
12. The method of claim 1, wherein the target rays are emitted by a lidar, the obstacle particles are snowflakes, and the weather condition is snowy weather.
13. An apparatus for simulating an automatic driving scene, comprising:
the system comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring a target ray emitted by an automatic driving object in an automatic driving scene;
a determining unit configured to determine a reflection probability of the target ray with respect to an obstacle particle, the reflection probability being related to a distance traveled by the target ray in the autonomous driving scene, the obstacle particle corresponding to a weather condition in the autonomous driving scene;
the determining unit is further configured to determine occlusion information based on a numerical relationship between the reflection probability and a first random number;
a calculation unit configured to determine a first distance between the autonomous driving object and the obstacle particle based on a second random number selected from a range of values determined based on the first random number if the occlusion information indicates that the target ray is occluded by the obstacle particle, the second random number having a value smaller than a value of the first random number;
a simulation unit, configured to determine a position of the obstacle particle in the auto-driving scene according to the first distance, so as to simulate the weather condition in the auto-driving scene.
14. A computer device, the computer device comprising a processor and a memory:
the memory is used for storing program codes; the processor is configured to execute the method of simulating an autonomous driving scenario of any of claims 1 to 12 according to instructions in the program code.
15. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of simulating an autonomous driving scenario of any of claims 1 to 12 above.
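To illustrate the particle-amount update that claims 5 to 9 describe, here is a hedged sketch: the linear form of the road surface environment model and every name in it are assumptions for illustration only, not the claimed models:

```python
from dataclasses import dataclass

@dataclass
class RoadEnvironmentModel:
    """Hypothetical road surface environment model: maps the driving
    object's position to a particle variation, set based on the accumulated
    amount of obstacle particles in the scene (cf. claim 5)."""
    accumulation_rate: float   # calibration parameter, per target component
    accumulated_amount: float  # accumulated obstacle particles in the scene

    def particle_variation(self, position_m: float) -> float:
        # Assumed linear form for illustration.
        return self.accumulation_rate * (self.accumulated_amount + position_m)

def update_particle_amount(amount: float, model: RoadEnvironmentModel,
                           position_m: float) -> float:
    """Update the particle amount with the variation returned by the model."""
    return amount + model.particle_variation(position_m)
```

Adjusting the calibration parameter per target component or per driving direction (claims 6-7) would amount to constructing a different `RoadEnvironmentModel` instance for each component.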
CN202011171275.3A 2020-10-28 2020-10-28 Method, device and equipment for simulating automatic driving scene and storage medium Active CN112163280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011171275.3A CN112163280B (en) 2020-10-28 2020-10-28 Method, device and equipment for simulating automatic driving scene and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011171275.3A CN112163280B (en) 2020-10-28 2020-10-28 Method, device and equipment for simulating automatic driving scene and storage medium

Publications (2)

Publication Number Publication Date
CN112163280A true CN112163280A (en) 2021-01-01
CN112163280B CN112163280B (en) 2022-02-01

Family

ID=73866260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011171275.3A Active CN112163280B (en) 2020-10-28 2020-10-28 Method, device and equipment for simulating automatic driving scene and storage medium

Country Status (1)

Country Link
CN (1) CN112163280B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6882303B2 (en) * 2002-12-19 2005-04-19 Denso Corporation Obstacle detection system for automotive vehicle
CN108008726A (en) * 2017-12-11 2018-05-08 朱明君 A kind of Intelligent unattended driving
CN110832474A (en) * 2016-12-30 2020-02-21 迪普迈普有限公司 High definition map update
CN111291697A (en) * 2020-02-19 2020-06-16 北京百度网讯科技有限公司 Method and device for recognizing obstacle
CN111679267A (en) * 2020-08-17 2020-09-18 陕西耕辰科技有限公司 Automatic driving system and obstacle detection system thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO, Yibing et al.: "Obstacle Target Identification Based on D-S Evidence Theory", Journal of Jilin University (Engineering and Technology Edition) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926224A (en) * 2021-03-30 2021-06-08 深圳裹动智驾科技有限公司 Event-based simulation method and computer equipment
CN112926224B (en) * 2021-03-30 2024-02-02 深圳安途智行科技有限公司 Event-based simulation method and computer equipment
CN113109775A (en) * 2021-04-15 2021-07-13 吉林大学 Millimeter wave radar target visibility judgment method considering target surface coverage characteristics
EP4235214A1 (en) * 2022-02-25 2023-08-30 Toyota Jidosha Kabushiki Kaisha Lidar snowfall simulation method and system for robust 3d object detection

Also Published As

Publication number Publication date
CN112163280B (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN112163280B (en) Method, device and equipment for simulating automatic driving scene and storage medium
CN112364439B (en) Simulation test method and device for automatic driving system and storage medium
CN112256589B (en) Simulation model training method and point cloud data generation method and device
CN109241465B (en) Interface display method, device, terminal and storage medium
CN109919251A (en) A kind of method and device of object detection method based on image, model training
CN110044371A (en) A kind of method and vehicle locating device of vehicle location
CN109325967A (en) Method for tracking target, device, medium and equipment
CN110146100A (en) Trajectory predictions method, apparatus and storage medium
CN110058694A (en) Method, the method and device of Eye-controlling focus of Eye-controlling focus model training
CN107818288A (en) Sign board information acquisition method and device
CN110443190B (en) Object recognition method and device
CN112802111B (en) Object model construction method and device
CN115588131B (en) Model robustness detection method, related device and storage medium
CN113923775B (en) Method, device, equipment and storage medium for evaluating quality of positioning information
CN113820694A (en) Simulation ranging method, related device, equipment and storage medium
CN112562372B (en) Track data processing method and related device
CN115526055B (en) Model robustness detection method, related device and storage medium
CN113110487A (en) Vehicle simulation control method and device, electronic equipment and storage medium
CN115471495B (en) Model robustness detection method, related device and storage medium
CN113706446A (en) Lens detection method and related device
CN114814767A (en) Information processing method and device, electronic equipment and storage medium
CN109685904A (en) Virtual driving modeling method and system based on virtual reality
CN112200130B (en) Three-dimensional target detection method and device and terminal equipment
CN111986487B (en) Road condition information management method and related device
CN115909020B (en) Model robustness detection method, related device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant