CN112256589A - Simulation model training method and point cloud data generation method and device

Info

Publication number: CN112256589A (application CN202011254212.4A); granted publication CN112256589B
Authority: CN (China)
Legal status: Granted; Active
Prior art keywords: data, trained, point cloud, cloud data, training
Other languages: Chinese (zh)
Inventor: 胡太群
Original and current assignee: Tencent Technology (Shenzhen) Co., Ltd.
Application filed by Tencent Technology (Shenzhen) Co., Ltd.; priority to CN202011254212.4A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; error correction; monitoring
    • G06F 11/36: Preventing errors by testing or debugging software
    • G06F 11/3668: Software testing
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125: Traffic data processing
    • G08G 1/04: Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors


Abstract

The application discloses a simulation model training method based on artificial intelligence technology, applicable to the field of automatic driving, which comprises the following steps: acquiring real point cloud data and associated training data of an object to be trained; acquiring simulation point cloud data of the object to be trained through a laser radar simulation model to be trained, based on the associated training data of the object to be trained; determining a discrimination result through a discriminator, based on the real point cloud data and the simulation point cloud data of the object to be trained; and training the laser radar simulation model to be trained according to the discrimination result until a model training condition is met, so as to obtain the laser radar simulation model. The embodiment of the application also provides a method and an apparatus for generating point cloud data. With the method and the apparatus, the influence of a complex real environment on generated point cloud data can be learned at more levels, so that the precision of the laser radar simulation model is improved and the deviation between simulation point cloud data and real point cloud data is reduced.

Description

Simulation model training method and point cloud data generation method and device
Technical Field
The application relates to the technical field of machine learning, in particular to a training method of a simulation model, and a method and a device for generating point cloud data.
Background
Accurate environmental awareness and precise positioning are key to reliable navigation, informed decision-making, and safe driving of an autonomous vehicle in a complex environment. Both tasks require acquiring and processing accurate and rich data from a real environment; to obtain such data, point cloud data for simulating a traffic scene can be generated through a laser radar simulation model.
In order to ensure and improve the quality of point cloud data in an automatic driving simulation system, a data screening scheme can currently be adopted. Specifically, abnormal values are eliminated based on the statistical properties of the distribution of objects in the point cloud and the object detection accuracy, yielding point cloud data for training, which are then used to build a laser radar simulation model.
The accuracy of the simulation data mainly depends on the accuracy of the point cloud data produced by the laser radar simulation model. However, point cloud data obtained with the data screening scheme still have great limitations and can hardly adapt to a complex real environment. This results in low accuracy of the laser radar simulation model, so the point cloud data output by the laser radar simulation model may deviate greatly from point cloud data in the real environment.
Disclosure of Invention
The embodiments of the application provide a simulation model training method and a point cloud data generation method and apparatus, with which a laser radar simulation model can learn, at more levels, the influence of a complex real environment on point cloud data generation, thereby improving the precision of the laser radar simulation model and reducing the deviation between simulation point cloud data and real point cloud data.
In view of the above, an aspect of the present application provides a method for training a simulation model, including:
acquiring real point cloud data and associated training data of an object to be trained, wherein the associated training data and the real point cloud data have a corresponding relation, and the associated training data comprises at least one of scene data, environment data and attribute data;
acquiring simulation point cloud data of the object to be trained through a laser radar simulation model to be trained based on the associated training data of the object to be trained;
determining a discrimination result through a discriminator based on the real point cloud data of the object to be trained and the simulation point cloud data of the object to be trained;
and training the laser radar simulation model to be trained according to the discrimination result until a model training condition is met, so as to obtain the laser radar simulation model.
Another aspect of the present application provides a method for generating point cloud data, including:
acquiring associated test data corresponding to a target object, wherein the associated test data comprises at least one of scene data, environment data and attribute data;
and generating simulation point cloud data corresponding to the target object through a laser radar simulation model based on the associated test data corresponding to the target object, wherein the laser radar simulation model is obtained by adopting the training method.
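For illustration only, the generation step described in this aspect might be invoked as in the following minimal Python sketch; the data container, the model object, and its predict method are assumptions, not part of this disclosure.

    from dataclasses import dataclass, field

    @dataclass
    class AssociatedTestData:
        scene: dict = field(default_factory=dict)        # e.g. distance, speed, included angle
        environment: dict = field(default_factory=dict)  # e.g. weather, temperature, humidity
        attributes: dict = field(default_factory=dict)   # e.g. object type, object size

    def generate_point_cloud(model, target: AssociatedTestData):
        """Feed associated test data of the target object to a trained laser
        radar simulation model and return the simulation point cloud data."""
        features = {**target.scene, **target.environment, **target.attributes}
        return model.predict(features)  # hypothetical call; e.g. an (N, 3) array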
Another aspect of the present application provides a simulation model training apparatus, including:
the device comprises an acquisition module, a determining module, and a training module, wherein the acquisition module is used for acquiring real point cloud data and associated training data of an object to be trained, the associated training data and the real point cloud data have a corresponding relation, and the associated training data comprises at least one of scene data, environment data and attribute data;
the acquisition module is also used for acquiring simulation point cloud data of the object to be trained through the laser radar simulation model to be trained based on the associated training data of the object to be trained;
the determining module is used for determining a discrimination result through the discriminator based on the real point cloud data of the object to be trained and the simulation point cloud data of the object to be trained;
and the training module is used for training the laser radar simulation model to be trained according to the discrimination result until the model training condition is met, so as to obtain the laser radar simulation model.
In one possible design, in another implementation manner of another aspect of the embodiment of the present application, the associated training data of the object to be trained includes scene data;
the acquisition module is specifically used for receiving motion data aiming at an object to be trained through a human-computer interface;
controlling the object to be trained to move in the test scene based on the motion data;
acquiring real point cloud data through laser radar equipment based on the motion condition of an object to be trained in a test scene;
scene data are acquired through data acquisition equipment based on the motion condition of an object to be trained in a test scene.
In one possible design, in another implementation of another aspect of the embodiments of the present application, the scene data includes at least one of a distance, a speed, and an included angle;
the acquisition module is specifically used for acquiring the distance between the object to be trained and the laser radar equipment through the distance measuring device based on the motion condition of the object to be trained in the test scene if the scene data comprises the distance;
if the scene data comprises the speed, acquiring the speed of the object to be trained relative to the laser radar equipment through a speed measuring device based on the motion condition of the object to be trained in the test scene;
and if the scene data comprises the included angle, acquiring the included angle of the object to be trained relative to the laser radar equipment through the angle measuring device based on the motion condition of the object to be trained in the test scene.
In one possible design, in another implementation of another aspect of the embodiments of the present application, the associated training data of the object to be trained includes environmental data;
the acquisition module is specifically used for acquiring real point cloud data through laser radar equipment;
the environmental data is received through a human-machine interface.
In one possible design, in another implementation of another aspect of an embodiment of the present application, the environmental data includes at least one of weather information, temperature, humidity, wind direction, wind power, and ultraviolet index;
the acquisition module is specifically used for receiving a weather selection instruction through a human-computer interface if the environment data comprises weather information, wherein the weather selection instruction carries an identifier of the weather information;
if the environmental data comprises a temperature, receiving a first parameter aiming at the temperature through a human-computer interface;
if the environmental data comprises humidity, receiving a second parameter aiming at the humidity through a human-computer interface;
if the environment data comprises a wind direction, receiving a wind direction selection instruction through a human-computer interface, wherein the wind direction selection instruction carries an identification of the wind direction;
if the environmental data comprises wind power, receiving a third parameter aiming at the wind power through a human-computer interface;
and if the environment data comprises the ultraviolet index, receiving an intensity selection instruction through a human-computer interface, wherein the intensity selection instruction carries an identifier of the ultraviolet index.
In one possible design, in another implementation of another aspect of an embodiment of the present application, the environmental data includes a road surface type;
the acquisition module is specifically used for receiving a type selection instruction through a human-computer interface, wherein the type selection instruction carries an identifier of a road surface type.
In one possible design, in another implementation of another aspect of the embodiment of the present application, the associated training data of the object to be trained includes attribute data, and the attribute data includes at least one of an object type, an object size, and a reaction level;
the acquisition module is specifically used for acquiring real point cloud data through laser radar equipment;
and acquiring attribute data aiming at the object to be trained through a data input interface.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the acquisition module is specifically used for acquiring real point cloud data to be matched corresponding to the object to be trained, wherein the real point cloud data to be matched comprises M pieces of first data to be matched, each piece of first data to be matched corresponds to a timestamp, and M is an integer greater than or equal to 1;
acquiring associated training data to be matched, wherein the associated training data to be matched comprises M second data to be matched, and each second data to be matched corresponds to a timestamp;
and matching the real point cloud data to be matched and the associated training data to be matched according to the time stamp corresponding to each first data to be matched and the time stamp corresponding to each second data to be matched to obtain matched real point cloud data and associated training data.
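As a minimal Python sketch (an assumption, since this aspect fixes neither a data layout nor a tolerance), the timestamp-based matching might look as follows.

    def match_by_timestamp(real_frames, assoc_frames, tolerance=0.05):
        """real_frames and assoc_frames are lists of (timestamp, data) pairs;
        returns (real, associated) pairs whose timestamps differ by at most
        `tolerance` seconds."""
        matched = []
        for ts, cloud in real_frames:
            # nearest associated-data frame by timestamp
            nearest = min(assoc_frames, key=lambda f: abs(f[0] - ts))
            if abs(nearest[0] - ts) <= tolerance:
                matched.append((cloud, nearest[1]))
        return matched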
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the determining module is specifically used for acquiring the similarity between the real point cloud data and the simulation point cloud data through the discriminator based on the real point cloud data of the object to be trained and the simulation point cloud data of the object to be trained;
and the training module is specifically used for determining that the model training condition is met if the similarity is greater than or equal to the similarity threshold, and taking the laser radar simulation model to be trained as the laser radar simulation model.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the determining module is specifically used for acquiring position information of K first key points according to real point cloud data of an object to be trained, wherein K is an integer greater than or equal to 1;
acquiring position information of K second key points according to simulation point cloud data of an object to be trained, wherein the second key points and the first key points have a mapping relation;
determining the similarity between the real point cloud data and the simulation point cloud data through a discriminator aiming at the position information of K pairs of key points, wherein each pair of key points comprises a first key point and a second key point which have a mapping relation;
and the training module is specifically used for determining that the model training condition is met if the similarity is greater than or equal to the similarity threshold, and taking the laser radar simulation model to be trained as the laser radar simulation model.
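For illustration, the key-point comparison above might be scored as in the following Python sketch; the distance-based similarity is an assumption, since no specific metric is fixed in this design.

    import numpy as np

    def keypoint_similarity(real_kp: np.ndarray, sim_kp: np.ndarray) -> float:
        """real_kp and sim_kp are (K, 3) position arrays, row i of sim_kp
        mapped to row i of real_kp; returns a similarity in (0, 1]."""
        dists = np.linalg.norm(real_kp - sim_kp, axis=1)  # per-pair distance
        return float(1.0 / (1.0 + dists.mean()))

    def training_condition_met(similarity: float, threshold: float = 0.95) -> bool:
        # the model training condition: similarity at or above the threshold
        return similarity >= threshold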
Another aspect of the present application provides a point cloud data generating apparatus, including:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring associated test data corresponding to a target object, and the associated test data comprises at least one of scene data, environment data and attribute data;
and the generation module is used for generating simulation point cloud data corresponding to the target object through a laser radar simulation model based on the associated test data corresponding to the target object, wherein the laser radar simulation model is obtained by adopting the training method.
Another aspect of the present application provides a computer device, comprising: a memory, a processor, and a bus system;
wherein, the memory is used for storing programs;
the processor is used for executing the program in the memory, so as to perform the methods provided by the above aspects according to the instructions in the program code;
the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.
Another aspect of the present application provides a computer-readable storage medium having stored therein instructions, which when executed on a computer, cause the computer to perform the method of the above-described aspects.
In another aspect of the application, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided by the above aspects.
According to the technical scheme, the embodiment of the application has the following advantages:
The embodiment of the application provides a simulation model training method: first, real point cloud data and associated training data of an object to be trained are obtained; then, based on the associated training data of the object to be trained, simulation point cloud data of the object to be trained are obtained through the laser radar simulation model to be trained; next, a discrimination result is determined through a discriminator based on the real point cloud data and the simulation point cloud data of the object to be trained; finally, the laser radar simulation model to be trained is trained according to the discrimination result until the model training condition is met, so as to obtain the laser radar simulation model. In this way, at least one of scene data, environment data, and attribute data is introduced as a training parameter in the process of training the laser radar simulation model, so that the laser radar simulation model can learn, at more levels, the influence of a complex real environment on point cloud data generation, thereby improving the precision of the laser radar simulation model and reducing the deviation between simulation point cloud data and real point cloud data.
Drawings
FIG. 1 is a schematic diagram of an environment of a point cloud data simulation system according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an architecture of an automatic driving simulation system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of a simulation model training method in an embodiment of the present application;
FIG. 4 is a schematic diagram of a training framework of a lidar simulation model in an embodiment of the present application;
FIG. 5 is a schematic diagram of inputting motion data through a human-machine interface according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a test scenario in an embodiment of the present application;
FIG. 7 is a schematic diagram of inputting environmental data via a human-machine interface according to an embodiment of the present application;
FIG. 8 is another schematic diagram of the input of environmental data via a human-machine interface in an embodiment of the present application;
FIG. 9 is a schematic diagram of attribute data input via a human-machine interface according to an embodiment of the present application;
FIG. 10 is a schematic diagram of aligning real point cloud data and associated training data according to an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating an implementation of laser radar simulation model training based on a similarity algorithm in an embodiment of the present application;
FIG. 12 is a schematic diagram of an embodiment of a method for generating point cloud data according to an embodiment of the present disclosure;
FIG. 13 is a schematic flow chart illustrating a method for generating point cloud data according to an embodiment of the present disclosure;
FIG. 14 is a schematic diagram of an embodiment of a simulation model training apparatus according to an embodiment of the present application;
FIG. 15 is a schematic diagram of an embodiment of a point cloud data generating apparatus in an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a server in an embodiment of the present application;
fig. 17 is a schematic structural diagram of a terminal device in the embodiment of the present application.
Detailed Description
The embodiments of the application provide a simulation model training method and a point cloud data generation method and apparatus, with which a laser radar simulation model can learn, at more levels, the influence of a complex real environment on point cloud data generation, thereby improving the precision of the laser radar simulation model and reducing the deviation between simulation point cloud data and real point cloud data.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Point cloud data is a widely used representation of real-scene three-dimensional spatial information, with a large number of mature application examples, and the fineness of the data keeps improving as point cloud acquisition equipment advances. Point cloud data consist of a large number of points; a three-dimensional model (e.g., a vehicle, a pedestrian, a building, or an obstacle) can be constructed using three-dimensional reconstruction techniques (e.g., Poisson reconstruction, surface reconstruction, human reconstruction, building reconstruction, and real-time reconstruction), and a three-dimensional model constructed from point cloud data can be used in many fields, for example, mapping, automatic driving, agriculture, planning and design, archaeology, and medicine. The present application is described as applied to the field of automatic driving; however, this should not be construed as limiting the application. The automatic driving field involves automatic driving technologies, which generally include high-precision maps, environment perception, behavior decision-making, path planning, motion control, and the like; automatic driving technology has broad application prospects.
it should be understood that the laser radar simulation model is obtained through training by the simulation model training method provided by the application, and simulation point cloud data is generated through the laser radar simulation model, so that a large amount of data for training or testing can be provided for the automatic driving platform by directly using the simulation point cloud data.
In order to generate more accurate simulation point cloud data, the present application provides a point cloud data simulation system. Please refer to fig. 1, which is an environment schematic diagram of the point cloud data simulation system in the embodiment of the present application. As shown in the figure, the point cloud data simulation system includes a server and a terminal device, and a Human Machine Interface (HMI) is deployed on the terminal device. The server in this application may be an independent physical server, a server cluster or a distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN) services, big data, and artificial intelligence platforms. The terminal device may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a palmtop computer, a personal computer, a smart television, a smart watch, and the like. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this application. The numbers of servers and terminal devices are likewise not limited.
Specifically, a user can input associated training data through a human-computer interface, then the movement of an object to be trained is controlled based on the associated training data, the environment is controlled to make corresponding feedback, real point cloud data of the object to be trained is captured by laser radar equipment deployed in a test scene and uploaded to a server, and the server trains a laser radar simulation model according to the associated training data, the real point cloud data and simulation point cloud data. The process of model training involves Machine Learning (ML) techniques based on the field of Artificial Intelligence (AI).
AI is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, AI is an integrated technique of computer science that attempts to understand the essence of intelligence and produces a new intelligent machine that can react in a manner similar to human intelligence. AI is to study the design principles and implementation methods of various intelligent machines, so that the machine has the functions of perception, reasoning and decision making. The AI technology is a comprehensive subject, and relates to the field of extensive technology, both hardware level technology and software level technology. The AI base technologies generally include technologies such as sensors, dedicated AI chips, cloud computing, distributed storage, big data processing technologies, operating/interactive systems, mechatronics, and the like. The AI software technology mainly includes several major directions such as computer vision technology, speech processing technology, natural language processing technology, and ML/deep learning.
With the research and progress of the AI technology, the AI technology is researched and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical services, smart customer service, etc., and it is believed that with the development of the technology, the AI technology will be applied in more fields and exert more and more important values.
ML is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer can simulate or implement human learning behavior so as to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. ML is the core of AI, the fundamental way to make computers intelligent, and is applied throughout all areas of AI. ML and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and formal learning.
Based on the above description, an important function of the point cloud data simulation system is to find out possible problems in the automatic driving process by simulating a real environment and constructing a vehicle model. In an automatic driving vehicle, a vehicle automatic driving system plays an important safety role, and a central controller of the vehicle automatic driving system can judge potential danger in the driving process and cope with various emergencies through calculation and analysis of various information, so as to automatically control the vehicle to drive, remind personnel on the vehicle of possible potential danger before the danger occurs and help a driver to apply steering or braking when the danger occurs. Based on this, for the convenience of understanding, please refer to fig. 2, fig. 2 is a schematic structural diagram of an automatic driving simulation system in an embodiment of the present application, and as shown in the drawing, each part will be described in detail below.
Firstly, a scene simulator;
For the scene simulator, the main function is to construct different test scenes, which mainly comprise simulated object behaviors, weather behaviors, and traffic light behaviors. The simulated object behaviors include vehicle behaviors, pedestrian behaviors, and obstacle behaviors. Weather behaviors include weather, time, and lighting changes. The traffic light behaviors comprise protected traffic lights, unprotected traffic lights, and no traffic light. Protected traffic lights are the common case, namely arrow lights: a vehicle goes straight or turns according to the light of its own lane. Unprotected traffic lights are round lights: oncoming vehicles may go straight while a vehicle turns, so a turning vehicle must yield and watch for oncoming through traffic before turning. No traffic light is the usual case at suburban intersections, where a vehicle must judge whether crossing traffic requires it to yield or stop before passing through the intersection.
Secondly, a scene generator;
in order to create different scenes according to different requirements, maps, vehicles and behaviors can be dynamically added. The scene generator is a framework and supports dynamically creating different scenes through different configurations to meet requirements.
Thirdly, an Application Programming Interface (API);
The behavior of the simulator can be controlled through a Python API, so that manual operation of a graphical interface is not needed and automated deployment can be achieved. The API exposes a uniform interface to the functions of the simulator and implements the interaction.
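As an illustration of such API-based control, a hedged Python sketch follows; the client class, its method names, and the endpoint are assumptions, not an actual interface of this system.

    class SimulatorClient:
        def __init__(self, host: str = "localhost", port: int = 9000):
            self.endpoint = (host, port)  # hypothetical management endpoint

        def load_scene(self, scene_config: dict) -> None:
            """Create a scene from a configuration (map, vehicles, behaviors)."""

        def run(self, seconds: float) -> dict:
            """Advance the simulation and return collected statistics."""
            return {}

    client = SimulatorClient()
    client.load_scene({"map": "urban_crossing", "weather": "snow"})
    stats = client.run(seconds=60.0)  # no manual graphical-interface operation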
Fourthly, container deployment;
In order to improve testing efficiency, the system can be deployed in a containerized manner; that is, a management platform is created to manage the simulators. The container deployment platform can monitor the state of each simulator, provide a visual configuration interface, and generate and deploy different scenes. Specifically, it can monitor the simulators' states, display normal and problematic clusters, save logs, and maintain the stability of the clusters. Different configurations (different maps, vehicles, and behaviors) can be conveniently selected through the visual interface to generate different scenes; moreover, visual feedback of the simulation results shields the details of the simulation cluster, making the system more intuitive and convenient to use.
Fifthly, a simulator;
The simulator typically has reset, snapshot, playback, and statistics functions. Reset means that after a fault occurs, the environment and the vehicle can be restored to the initial state, with the corresponding automatic driving system reset as well; thus, after each fault, the test can resume automatically without manual operation. A snapshot generates and saves the information of the corresponding frame, so that an accident scene can be restored; snapshots can also be used to build an automatic driving dataset, and the saved real point cloud data serve as ML input for training the model. The playback function is mainly used for fault location: after a collision, the playback information is used to locate the problem. Statistics are mainly used to measure the stability of the system.
With reference to the above description, the solutions provided in the embodiments of the present application relate to ML technology of AI, automatic driving technology, and the like, and a method for training a simulation model in the present application will be described below, with reference to fig. 3, where an embodiment of the method for training a simulation model in the embodiments of the present application includes:
101. acquiring real point cloud data and associated training data of an object to be trained, wherein the associated training data and the real point cloud data have a corresponding relation, and the associated training data comprises at least one of scene data, environment data and attribute data;
in this embodiment, the simulation model training apparatus first needs to determine an object to be trained, and then obtains real point cloud data and associated training data corresponding to the object to be trained.
The object to be trained includes, but is not limited to, a vehicle, a simulated pedestrian, and an obstacle.
The associated training data includes, but is not limited to, scene data representing a relative relationship between an object to be trained and a LiDAR (Light Detection and Ranging) device, environmental data representing environmental conditions within a test scene, and attribute data representing attributes related to the object to be trained.
Real point cloud data refers to a collection of vectors generated by a LiDAR device in a three-dimensional coordinate system, typically represented in the form of X, Y, and Z three-dimensional coordinates, and may include color, classification values, reflection intensity values, time, and the like, which are not exhaustive.
Specifically, the real point cloud data file may take the form of a 3D coordinate file, so that it can be read by the simulation model training apparatus. For example, a grayscale real point cloud record may appear in the 3D coordinate file in the form "X1, Y1, Z1, gray value 1". Taking color real point cloud data as an example, a record in the 3D coordinate file may take the form "X1, Y1, Z1, r1, g1, b1".
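A minimal Python sketch of reading these two record layouts follows; the delimiter handling is an assumption, since only the field order is specified above.

    def parse_point(line: str):
        values = [float(v) for v in line.replace(",", " ").split()]
        if len(values) == 4:      # grayscale record: X, Y, Z, gray value
            x, y, z, gray = values
            return (x, y, z), {"gray": gray}
        if len(values) == 6:      # color record: X, Y, Z, r, g, b
            x, y, z, r, g, b = values
            return (x, y, z), {"rgb": (r, g, b)}
        raise ValueError(f"unrecognized point record: {line!r}")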
It should be noted that the simulation model training apparatus may be deployed in a server, a terminal device, or a system composed of a server and a terminal device, and the present application is not limited thereto.
102. Acquiring simulation point cloud data of the object to be trained through a laser radar simulation model to be trained based on the associated training data of the object to be trained;
in this embodiment, the simulation model training device inputs the associated training data of the object to be trained into the lidar simulation model to be trained, and the lidar simulation model to be trained outputs the simulation point cloud data corresponding to the object to be trained. During the training process, the associated training data may be converted into the form of feature vectors.
103. Determining a discrimination result through a discriminator based on real point cloud data of an object to be trained and simulated point cloud data of the object to be trained;
in this embodiment, the simulation model training apparatus takes the real point cloud data and the simulation point cloud data of the object to be trained as the input of the discriminator, and the discriminator outputs the discrimination result.
Specifically, a Generative Adversarial Network (GAN) can be used to train the laser radar simulation model to be trained. GAN is a deep learning model that produces better output through the mutual game between a generator and a discriminator. The laser radar simulation model to be trained acts as the generator under training, and the trained laser radar simulation model is the generator. The generator learns the characteristics of the real point cloud data so that the simulation point cloud data it generates looks real enough to fool the discriminator, while the discriminator tries to judge whether the point cloud data it receives are real or simulated.
104. Training the laser radar simulation model to be trained according to the discrimination result until the model training condition is met, so as to obtain the laser radar simulation model.
In this embodiment, the simulation model training device trains the generator, that is, the laser radar simulation model to be trained, according to the discrimination result. Throughout the process, the laser radar simulation model to be trained tries to make the generated simulation point cloud data as realistic as possible, while the discriminator strives to identify whether the received point cloud data are real or simulated. This amounts to a game: as time goes on, the laser radar simulation model to be trained and the discriminator keep competing until the two networks reach a dynamic balance, at which point the simulation point cloud data generated by the laser radar simulation model to be trained are close to the real point cloud data and the discriminator can no longer tell which data are real and which are simulated.
For easy understanding, please refer to fig. 4, fig. 4 is a schematic diagram of a training framework of a lidar simulation model in an embodiment of the present application, and as shown in the figure, associated training data is input to a generator (i.e., the lidar simulation model to be trained), and the generator outputs simulation point cloud data of an object to be trained. The LiDAR equipment can detect real point cloud data of an object to be trained, then the discriminator compares the simulation point cloud data with the real point cloud data to obtain a discrimination result, and model parameters of a generator (namely a laser radar simulation model to be trained) are updated based on the discrimination result.
It is understood that the model training condition may cover three cases. In the first case, if the discrimination result is a prediction probability value of 0.5, the discriminator cannot judge whether the simulation point cloud data are real or fake, and the model training condition is satisfied; conversely, if the discriminator can still judge the authenticity of the simulation point cloud data, the condition is not satisfied. In the second case, the discrimination result is the similarity between the simulation point cloud data and the real point cloud data: if the similarity is greater than or equal to a similarity threshold, the model training condition is satisfied; otherwise, it is not. In the third case, an iteration threshold is preset: if the number of iterations reaches the threshold, the model training condition is satisfied; otherwise, it is not.
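A condensed Python sketch of the adversarial training loop and the three stopping cases follows; the generator and discriminator interfaces, the thresholds, and the update calls are assumptions for illustration.

    def train_lidar_gan(generator, discriminator, samples,
                        iteration_threshold=100000, similarity_threshold=0.95):
        for iteration in range(iteration_threshold):   # case three: iteration budget
            for real_cloud, assoc_data in samples:
                sim_cloud = generator(assoc_data)           # simulation point cloud
                result = discriminator(real_cloud, sim_cloud)
                generator.update(result)        # push output toward real data
                discriminator.update(result)    # sharpen real-vs-fake judgment

                # case one: prediction probability of 0.5, discriminator fooled
                if abs(result.probability - 0.5) < 1e-3:
                    return generator
                # case two: similarity reaches the threshold
                if result.similarity >= similarity_threshold:
                    return generator
        return generator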
The embodiment of the application provides a simulation model training method: first, real point cloud data and associated training data of an object to be trained are obtained; then, based on the associated training data of the object to be trained, simulation point cloud data of the object to be trained are obtained through the laser radar simulation model to be trained; next, a discrimination result is determined through a discriminator based on the real point cloud data and the simulation point cloud data of the object to be trained; finally, the laser radar simulation model to be trained is trained according to the discrimination result until the model training condition is met, so as to obtain the laser radar simulation model. In this way, at least one of scene data, environment data, and attribute data is introduced as a training parameter in the process of training the laser radar simulation model, so that the laser radar simulation model can learn, at more levels, the influence of a complex real environment on point cloud data generation, thereby improving the precision of the laser radar simulation model and reducing the deviation between simulation point cloud data and real point cloud data.
Optionally, on the basis of the embodiment corresponding to fig. 3, in another optional embodiment provided in the embodiment of the present application, the associated training data of the object to be trained includes scene data;
the method comprises the following steps of obtaining real point cloud data and associated training data of an object to be trained, and specifically comprises the following steps:
receiving motion data aiming at an object to be trained through a human-computer interface;
controlling the object to be trained to move in the test scene based on the motion data;
acquiring real point cloud data through laser radar equipment based on the motion condition of an object to be trained in a test scene;
scene data are acquired through data acquisition equipment based on the motion condition of an object to be trained in a test scene.
In this embodiment, a way to customize the motion state of the object to be trained is introduced. If the simulation model training device is deployed in the server, it receives the motion data that the terminal device collected through the HMI; if it is deployed in the terminal device, it receives the motion data through the HMI directly. This is not limited here.
For ease of understanding, please refer to fig. 5, which is a schematic diagram of inputting motion data through the human-machine interface in the embodiment of the present application. As shown in the figure, the HMI provides an input area for the test date, an input area for the tester, and an input area for the object identifier; for example, "11/6/2020" is entered in the test date input area. The tester's number is entered in the tester input area; optionally, this field may be left blank. The identifier of the object to be trained is entered in the object identifier input area, and each identifier corresponds to a unique object; for example, the identifier of car A is C0006. The motion type can be selected in the motion data input area, and the speed, acceleration, motion time, and the like of the object to be trained can be entered. After the input is completed, clicking the "confirm" button completes the configuration of the motion data.
It should be noted that the motion data shown in fig. 5 are mainly set for vehicles. If motion data for a pedestrian (i.e., a simulated dummy) need to be configured, pedestrian-related motion types such as walking, running, and jumping may also be set, which is not limited here.
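For illustration, the motion data gathered through the form of fig. 5 might be organized as follows; the field names and structure are assumptions, with example values taken from the description above.

    motion_config = {
        "test_date": "2020-11-06",
        "tester_id": None,            # optional field, may be left blank
        "object_id": "C0006",         # e.g. the identifier of car A
        "motion": {
            "type": "straight",       # for a simulated pedestrian: walk, run, jump
            "speed_mps": 10.0,
            "acceleration_mps2": 1.5,
            "duration_s": 30.0,
        },
    }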
Specifically, after the motion data of the object to be trained is set, the object to be trained starts to move in the test scene according to the set motion data, and based on the motion data, the LiDAR device can detect the position, the speed and other characteristic quantities of the object to be trained through emitting laser beams, so that real point cloud data are obtained. LiDAR devices operate on the principle of transmitting a probe signal (or laser beam) toward a target and then comparing the received signal reflected from the target (the target echo) to the transmitted signal to obtain information about the target, such as distance, orientation, altitude, speed, attitude, and shape parameters, to detect, track, and identify objects such as vehicles, pedestrians, and obstacles. Meanwhile, scene data are acquired through data acquisition equipment.
The ways in which LiDAR equipment acquires real point cloud data fall largely into three categories: satellite-borne, airborne, and ground-based. Satellite-borne LiDAR equipment uses a satellite platform, with a high operating orbit and a wide observation field. Airborne LiDAR equipment mainly acquires large-scale point cloud data by means of unmanned aerial vehicles. Ground-based acquisition is further divided into terrestrial three-dimensional laser scanning, vehicle-mounted mobile measurement systems, and handheld laser scanning.
It will be appreciated that, in practice, besides LiDAR devices, stereo cameras and time-of-flight cameras can also be used to acquire real point cloud data. These devices measure information about a large number of points on the surface of an object in an automated manner and then output the point cloud data as a data file; such point cloud data are the real point cloud data collected by the scanning device.
Further, this embodiment of the application provides a way to customize the motion state of the object to be trained. In this way, motion data related to the object to be trained can be entered according to actual requirements, achieving automatic control of the motion of the object to be trained and hence a high degree of automation. In addition, setting different motion situations adapts better to different scenes, thereby improving the authenticity and adaptability of training.
Optionally, on the basis of the embodiment corresponding to fig. 3, in another optional embodiment provided in the embodiment of the present application, the scene data includes at least one of a distance, a speed, and an included angle;
based on the motion condition of the object to be trained in the test scene, acquiring scene data through data acquisition equipment, and specifically comprising the following steps:
if the scene data comprises the distance, acquiring the distance between the object to be trained and the laser radar equipment through a distance measuring device based on the motion condition of the object to be trained in the test scene;
if the scene data comprises the speed, acquiring the speed of the object to be trained relative to the laser radar equipment through a speed measuring device based on the motion condition of the object to be trained in the test scene;
and if the scene data comprises the included angle, acquiring the included angle of the object to be trained relative to the laser radar equipment through the angle measuring device based on the motion condition of the object to be trained in the test scene.
In this embodiment, a method for collecting scene data based on a data collection device is described. After the motion data of the object to be trained is set, the object to be trained is controlled to start to move in the test scene according to the set motion data. For convenience of introduction, please refer to fig. 6, where fig. 6 is a schematic diagram of a test scenario in an embodiment of the present application, and as shown in the figure, the test scenario needs to be constructed first in an actual test, where the test scenario includes, but is not limited to, a traffic road test yard, various types of vehicles, pedestrians (i.e., simulated dummy), obstacles, and the like, and where K1 indicates an object to be trained, i.e., a car. It should be understood that the test scenario shown in fig. 6 is only an illustration and should not be construed as a limitation of the present application.
Specifically, scene data are acquired through data acquisition equipment based on the motion of the object to be trained in the test scene. The scene data comprise at least one of distance, speed, and included angle, and the data acquisition equipment includes, but is not limited to, a distance measuring device, a speed measuring device, and an angle measuring device. The ways of obtaining the distance, the speed, and the included angle are described separately below:
Firstly, distance;
the ranging device may capture the distance between the object to be trained and the LiDAR equipment as the object to be trained moves or is stationary within the test scene. The range finder includes, but is not limited to, a photoelectric range finder, which is further classified into a phase range finder and a pulse range finder, and an acoustic range finder. The phase distance meter modulates the phase of laser light and measures the phase difference of the reflected laser light to obtain the distance. The pulse distance measuring instrument emits a beam of light to a target object, and measures the time of the target object reflecting the light back, thereby calculating the distance between the instrument and the target object.
An acoustic range finder is a device that performs measurement using reflection characteristics of an acoustic wave, and generally employs an ultrasonic wave as a modulation target, i.e., an ultrasonic range finder. The ultrasonic transmitter transmits ultrasonic waves to a certain direction, timing is started at the same time of transmitting, the ultrasonic waves are transmitted in the air and return immediately when encountering an obstacle on the way, and the ultrasonic receiver immediately interrupts and stops timing when receiving the reflected waves. By continuously detecting the echo reflected by the barrier when the generated wave is transmitted, the time difference between the transmitted ultrasonic wave and the received echo is measured, and then the distance is calculated.
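Both the pulse rangefinder and the ultrasonic rangefinder compute distance from the round-trip echo time as d = v * t / 2. A minimal Python sketch, with example values chosen for illustration:

    def echo_distance(round_trip_s: float, wave_speed_mps: float) -> float:
        """round_trip_s: time from emitting the pulse to receiving the echo;
        wave_speed_mps: about 3.0e8 for light, about 343 for sound in air."""
        return wave_speed_mps * round_trip_s / 2.0

    laser_d = echo_distance(6.67e-7, 3.0e8)  # roughly 100 m for a laser pulse
    ultra_d = echo_distance(0.58, 343.0)     # roughly 99.5 m for an ultrasonic pulse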
Secondly, speed;
the speed measurement device may capture the speed of the object to be trained relative to the LiDAR device as the object moves within the test scene. The speed measuring device is an instrument for measuring the running speed of an object to be trained. Speed measuring devices include, but are not limited to, radar velocimeters and laser velocimeters. The speed measurement principle of the radar velocimeter is Doppler effect; the speed measuring principle of the laser velocimeter is laser distance measurement.
Thirdly, included angle;
the goniometer device may capture the angle of an object to be trained relative to the LiDAR equipment as the object moves or is stationary within the test scene. Goniometers include, but are not limited to, theodolites, crystal goniometers, single-turn reflex goniometers, and double-turn reflex goniometers, which are instruments that measure crystal face angles to study crystal geometry. Contact goniometers and reflection goniometers are commonly used. The single-ring reflection goniometer mainly comprises a horizontal ring, a light pipe, a telescope and a crystal pulling platform. The measurement is performed by an optical system according to the property of the crystal face to the light reflection. The double-turn reflection goniometer measures a set of spherical coordinate values of each crystal plane, namely azimuth angle and polar distance angle values. The theodolite includes vernier theodolite, optical theodolite and electronic theodolite.
In the embodiment of the application, scene data are collected based on the data collection equipment, and in the mode, the relevant devices can be used for capturing the corresponding scene data, model training is carried out by using the scene data, and the influence of the motion condition of an object on the generation of point cloud data can be learned, so that the precision of a laser radar simulation model is improved, and the deviation between the simulation point cloud data and the real point cloud data is reduced.
Optionally, on the basis of the embodiment corresponding to fig. 3, in another optional embodiment provided in the embodiment of the present application, the associated training data of the object to be trained includes environment data;
the method comprises the following steps of obtaining real point cloud data and associated training data of an object to be trained, and specifically comprises the following steps:
acquiring real point cloud data through laser radar equipment;
the environmental data is received through a human-machine interface.
In this embodiment, a way to customize the environment data is introduced. If the simulation model training device is deployed in the server, it receives the environment data that the terminal device collected through the human-computer interface; if it is deployed in the terminal device, it receives the environment data through the human-computer interface directly. This is not limited here.
For example, assuming that not only motion data of an object to be trained but also environment data are set, the object to be trained starts to move within a test scene according to the set motion data, and the LiDAR device may detect characteristic quantities such as a position, a speed, and the like of the object to be trained by emitting a laser beam, thereby obtaining real point cloud data. Meanwhile, scene data are acquired through data acquisition equipment. The collected scene data and the input environment data are both associated training data.
For example, assuming only environmental data is set, the LiDAR device may obtain real point cloud data with the object to be trained in a stationary or preset state. Meanwhile, scene data are acquired through data acquisition equipment. The collected scene data and the input environment data are both associated training data.
Further, the embodiment of the application provides a way to customize the environment data. In this way, the relevant environment data can be entered according to actual requirements, achieving automatic control of the environment and hence a high degree of automation. In addition, setting different environment data adapts better to different scenes, thereby improving the authenticity and adaptability of training.
Optionally, on the basis of the embodiment corresponding to fig. 3, in another optional embodiment provided by the embodiment of the present application, the environmental data includes at least one of weather information, temperature, humidity, wind direction, wind power, and ultraviolet index;
receiving environment data through a human-computer interface, specifically comprising the following steps:
if the environmental data comprises meteorological information, receiving a meteorological selection instruction through a human-computer interface, wherein the meteorological selection instruction carries an identifier of the meteorological information;
if the environmental data comprises a temperature, receiving a first parameter aiming at the temperature through a human-computer interface;
if the environmental data comprises humidity, receiving a second parameter aiming at the humidity through a human-computer interface;
if the environment data comprises a wind direction, receiving a wind direction selection instruction through a human-computer interface, wherein the wind direction selection instruction carries an identification of the wind direction;
if the environmental data comprises wind power, receiving a third parameter aiming at the wind power through a human-computer interface;
and if the environment data comprises the ultraviolet index, receiving an intensity selection instruction through a human-computer interface, wherein the intensity selection instruction carries an identifier of the ultraviolet index.
In this embodiment, a method for customizing weather information, temperature, humidity, wind direction, wind power, and ultraviolet index is introduced. Environment data can be input via the HMI, and the corresponding environments are then simulated within the test scene, so that the object to be trained can move or remain stationary in the simulated test scene.
For ease of understanding, referring to fig. 7, fig. 7 is a schematic diagram illustrating the input of environment data through the human-machine interface according to an embodiment of the present application. As shown, the HMI is provided with an input area for the test date, an input area for the tester, and an input area for the object identifier; for example, "11/6/2020" is entered in the test date input area. The tester's number is entered in the tester input area; optionally, this field may be left blank.
Illustratively, selectable items of weather information are also displayed on the HMI. For example, the weather information includes five selectable items, namely "sunny", "rainy", "snow", "cloudy", and "frost". Assuming that the tester triggers a weather selection instruction for the option "snow", an identifier corresponding to "snow" is carried in the weather selection instruction, thereby simulating a snowy day in the test scene.
It is understood that snow affects cameras, LiDAR equipment, and millimeter wave radar. Haze has a large influence on LiDAR equipment: because the laser wavelength is comparable to the haze particle size, the laser cannot penetrate the haze, and the camera's field of view is likewise reduced in haze; millimeter waves, with their longer wavelength, can diffract around haze particles and are less affected. Rain has a large influence on the camera, one visible reason being that the lens becomes blurred; in heavy rain, only the camera inside the vehicle can be relied upon. Furthermore, different ratios can be adjusted to simulate the influence of different weather conditions on the sensors; cloud cover mainly affects illumination changes, and cloud-cast shadows affect lane line recognition and the like.
Illustratively, a temperature input area is also displayed on the HMI. For example, "-10" is entered in the temperature input area, that is, the first parameter is "-10", thereby simulating an environment of minus 10 degrees Celsius in the test scene.
Illustratively, a humidity input area is also displayed on the HMI, for example, "70" is input in the humidity input area, that is, "70" is the second parameter, thereby simulating an environment with a humidity of 70% in the test scene.
Illustratively, selectable items of wind direction are also displayed on the HMI, for example, the wind direction includes six selectable items, namely "north", "northeast", "northwest", "south", "southeast" and "southwest", and assuming that the tester triggers a wind direction selection instruction for the option "northeast", an identification corresponding to "northeast" is carried in the wind direction selection instruction, thereby simulating an environment of blowing a northeast wind in the test scene.
For example, an input area for wind power is also displayed on the HMI. For example, "8" is entered in the wind power input area, that is, the third parameter is "8", thereby simulating an environment with a level-8 gale in the test scene.
It is understood that the wind force may be divided into 13 levels: level "0" is calm, level "1" is light air, level "2" is a light breeze, level "3" is a gentle breeze, level "4" is a moderate breeze, level "5" is a fresh breeze, level "6" is a strong breeze, level "7" is a near gale, level "8" is a gale, level "9" is a strong gale, level "10" is a storm, level "11" is a violent storm, and level "12" is a hurricane (typhoon).
Illustratively, selectable items of the ultraviolet index are also displayed on the HMI, for example, the ultraviolet index includes three selectable items, namely "weak", "medium", and "strong", and assuming that the tester triggers the intensity selection instruction for the item "weak", the identifier corresponding to "weak" is carried in the intensity selection instruction, thereby simulating the environment of weak ultraviolet in the test scene.
After the input is completed, a 'confirmation' button is clicked, and the configuration of the environment data is completed.
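By way of illustration only, the following Python sketch shows how such HMI selections might be gathered into a single environment-data record when the "confirmation" button is clicked; the field names, option identifiers, and the EnvironmentData container are assumptions of this sketch and are not prescribed by this application.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical option identifiers; the actual identifiers carried by the
# selection instructions are implementation-defined.
WEATHER_OPTIONS = {0: "sunny", 1: "rainy", 2: "snow", 3: "cloudy", 4: "frost"}
WIND_DIRECTIONS = {0: "north", 1: "northeast", 2: "northwest",
                   3: "south", 4: "southeast", 5: "southwest"}
UV_LEVELS = {0: "weak", 1: "medium", 2: "strong"}

@dataclass
class EnvironmentData:
    weather_id: Optional[int] = None        # from the weather selection instruction
    temperature: Optional[float] = None     # first parameter, degrees Celsius
    humidity: Optional[float] = None        # second parameter, percent
    wind_direction_id: Optional[int] = None # from the wind direction selection instruction
    wind_force: Optional[int] = None        # third parameter, level 0-12
    uv_index_id: Optional[int] = None       # from the intensity selection instruction

def on_confirm(form: dict) -> EnvironmentData:
    """Assemble the environment data when the 'confirmation' button is clicked."""
    env = EnvironmentData(
        weather_id=form.get("weather"),
        temperature=form.get("temperature"),
        humidity=form.get("humidity"),
        wind_direction_id=form.get("wind_direction"),
        wind_force=form.get("wind_force"),
        uv_index_id=form.get("uv_index"),
    )
    if env.wind_force is not None and not 0 <= env.wind_force <= 12:
        raise ValueError("wind force must be a level between 0 and 12")
    return env

# Example matching the scenario above: snow, -10 C, 70% humidity,
# northeast wind at level 8, weak ultraviolet.
env = on_confirm({"weather": 2, "temperature": -10, "humidity": 70,
                  "wind_direction": 1, "wind_force": 8, "uv_index": 0})
```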
In addition, the specific environment data can be set to better adapt to different scenes, and therefore the training authenticity and the training adaptability are improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in another optional embodiment provided in the embodiment of the present application, the environmental data includes a road surface type;
receiving environment data through a human-computer interface, specifically comprising the following steps:
and receiving a type selection instruction through a human-computer interface, wherein the type selection instruction carries an identifier of the road surface type.
In this embodiment, a way of customizing a road surface type is introduced. Environmental data can be entered via the HMI, which simulates these environments within the test scenario, so that the object to be trained can move in the simulated test scenario.
For ease of understanding, referring to fig. 8, fig. 8 is another schematic diagram of inputting environment data through the human-machine interface in the embodiment of the present application. As shown, the HMI is provided with an input area for the test date, an input area for the tester, and an input area for the object identifier; for example, "11/6/2020" is entered in the test date input area. The tester's number is entered in the tester input area; optionally, this field may be left blank. In addition, selectable items of road surface types are displayed on the HMI. For example, the road surface types include five selectable items, namely "highway", "first-level highway", "second-level highway", "third-level highway", and "fourth-level highway". Assuming that the tester triggers a type selection instruction for the option "first-level highway", the type selection instruction carries an identifier corresponding to "first-level highway", thereby simulating a first-level highway in the test scene.
After the input is completed, a 'confirmation' button is clicked, and the configuration of the environment data is completed.
In addition, the specific environment data can be set to better adapt to different scenes, and therefore the training reality and adaptability are improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in another optional embodiment provided in the embodiment of the present application, the associated training data of the object to be trained includes attribute data, and the attribute data includes at least one of an object type, an object size, and a reaction level;
the step of obtaining real point cloud data and associated training data of the object to be trained specifically comprises the following steps:
acquiring real point cloud data through laser radar equipment;
and acquiring attribute data aiming at the object to be trained through a data input interface.
In this embodiment, a method for customizing attribute data is introduced. If the simulation model training device is deployed in a server, the server receives the attribute data that the terminal device received through the HMI; if the simulation model training device is deployed in the terminal device, the attribute data received through the HMI is used directly. This is not limited here.
For ease of understanding, referring to fig. 9, fig. 9 is a schematic diagram of inputting attribute data through the human-machine interface in the embodiment of the present application. As shown, the HMI is provided with an input area for the test date, an input area for the tester, an input area for the object identifier, an input area for the object type, an input area for the object size, and selectable items for the reaction level. For example, "11/6/2020" is entered in the test date input area. The tester's number is entered in the tester input area; optionally, this field may be left blank. For example, "car" may be entered in the object type input area, as may other vehicle types such as "truck", "bus", or "motorcycle", or "pedestrian" (i.e., a dummy). For example, "5 × 2.5 × 1.2" is entered in the object size input area. For example, the reaction level includes five selectable items, namely "extremely slow", "slow", "medium", "fast", and "extremely fast". Assuming that the tester triggers a selection instruction for the option "fast", the selection instruction carries an identifier corresponding to "fast", so that the object to be trained reacts quickly in the test scenario.
After the input is completed, a 'confirmation' button is clicked, and the configuration of the attribute data is completed.
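As a small illustration, the object size entered above (e.g., "5 × 2.5 × 1.2") could be parsed into numeric dimensions as sketched below; the separator convention and the attribute dictionary layout are assumptions of this sketch, not part of the application.

```python
def parse_object_size(size_text: str):
    """Parse an HMI size entry such as '5*2.5*1.2' or '5 × 2.5 × 1.2'
    into (length, width, height) in meters."""
    parts = size_text.replace("×", "*").replace("x", "*").split("*")
    if len(parts) != 3:
        raise ValueError("expected three dimensions, e.g. '5*2.5*1.2'")
    length, width, height = (float(p.strip()) for p in parts)
    return length, width, height

attribute_data = {
    "object_type": "car",                               # free-text entry
    "object_size": parse_object_size("5 × 2.5 × 1.2"),  # (5.0, 2.5, 1.2)
    "reaction_level": "fast",                           # one of the five selectable items
}
```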
In the embodiment of the application, a method for customizing attribute data is provided, and relevant attribute data can be input according to actual requirements through the method, so that parameters can be flexibly configured for an object to be trained, and relevant attributes of the object to be trained can be adapted to different scenes, so that the training reality and adaptability are improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in another optional embodiment provided in the embodiment of the present application, the obtaining of the real point cloud data and the associated training data of the object to be trained specifically includes the following steps:
acquiring real point cloud data to be matched corresponding to an object to be trained, wherein the real point cloud data to be matched comprises M pieces of first data to be matched, each piece of first data to be matched corresponds to a timestamp, and M is an integer greater than or equal to 1;
acquiring associated training data to be matched, wherein the associated training data to be matched comprises M second data to be matched, and each second data to be matched corresponds to a timestamp;
and matching the real point cloud data to be matched and the associated training data to be matched according to the time stamp corresponding to each first data to be matched and the time stamp corresponding to each second data to be matched to obtain matched real point cloud data and associated training data.
In this embodiment, a method for aligning real point cloud data and associated training data is introduced. The real point cloud data to be matched corresponding to the object to be trained can be collected by the LiDAR equipment. The LiDAR equipment has internal parameters and external parameters: the internal parameters include the LiDAR line count, scanning frequency, measurement precision, ranging sampling rate, and the like, while the external parameters include the coordinate information of the LiDAR equipment in the X, Y, and Z directions. The LiDAR line count indicates how many transmitters and receivers are arranged in the vertical direction; multiple beams are obtained through the rotation of a motor. The more lines there are, the more complete the captured surface contour of an object, but also the larger the amount of data to be processed and the higher the hardware requirements. The scanning frequency is the number of scans the LiDAR device performs in one second. The measurement precision is the minimum change in distance that can be perceived. The ranging sampling rate is the number of ranging outputs in one second.
Similarly, the associated training data to be matched can be acquired through the data acquisition equipment, which likewise has internal parameters and external parameters: the internal parameters include the data detection frequency and the like, and the external parameters include the coordinate information of the data acquisition equipment in the X, Y, and Z directions.
Based on this, the LiDAR device may identify a frame of real point cloud data to be matched acquired under each timestamp, for example, the real point cloud data to be matched acquired under the first timestamp is "first data to be matched 1", the real point cloud data to be matched acquired under the second timestamp is "first data to be matched 2", and so on until the real point cloud data to be matched acquired under the mth timestamp is obtained, that is, "first data to be matched M" is obtained.
Similarly, the data acquisition device may identify a frame of associated training data to be matched acquired under each timestamp, for example, the associated training data to be matched acquired under the first timestamp is "second data to be matched 1", the associated training data to be matched acquired under the second timestamp is "second data to be matched 2", and so on until the associated training data to be matched acquired under the mth timestamp is obtained, that is, "second data to be matched M" is obtained.
Specifically, for convenience of understanding, please refer to fig. 10. Fig. 10 is a schematic diagram of aligning real point cloud data and associated training data in the embodiment of the present application. As shown in the figure, assume that the real point cloud data to be matched includes M first data to be matched and the associated training data to be matched includes M second data to be matched. The first data to be matched and the second data to be matched having the same timestamp are then associated; for example, first data to be matched 1 and second data to be matched 1 are both associated with timestamp 1, first data to be matched 2 and second data to be matched 2 are both associated with timestamp 2, and so on, until the M first data to be matched and the M second data to be matched are all associated. The aligned M first data to be matched constitute the real point cloud data, and the aligned M second data to be matched constitute the associated training data.
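A minimal sketch of this timestamp-based alignment is given below. It assumes each frame is a (timestamp, payload) pair and that matching timestamps are exactly equal; a real system would additionally tolerate small clock offsets, which is omitted here.

```python
def align_by_timestamp(point_cloud_frames, training_frames):
    """Pair each first data to be matched with the second data sharing its timestamp.

    point_cloud_frames: list of (timestamp, point_cloud) from the LiDAR device
    training_frames:    list of (timestamp, scene_data) from the data acquisition device
    Returns a list of (timestamp, point_cloud, scene_data) triples.
    """
    by_ts = {ts: data for ts, data in training_frames}
    aligned = []
    for ts, cloud in point_cloud_frames:
        if ts in by_ts:  # keep only frames present in both streams
            aligned.append((ts, cloud, by_ts[ts]))
    return aligned

# Example: M = 3 frames from each source, matched pairwise on timestamps 1..3.
clouds = [(1, "cloud1"), (2, "cloud2"), (3, "cloud3")]
scenes = [(1, "scene1"), (2, "scene2"), (3, "scene3")]
assert len(align_by_timestamp(clouds, scenes)) == 3
```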
In the embodiment of the application, a mode for aligning real point cloud data and associated training data is provided, through the mode, real point cloud data to be matched, which are acquired by LiDAR equipment, can be matched, and associated training data to be matched, which are acquired by data acquisition equipment, so that time synchronization is realized, and therefore, the consistency of training data can be achieved, the precision of a laser radar simulation model is improved, and the deviation between the simulated point cloud data and the real point cloud data is reduced.
Optionally, on the basis of the embodiment corresponding to fig. 3, in another optional embodiment provided in the embodiment of the present application, the determining, by the discriminator, a discrimination result based on the real point cloud data of the object to be trained and the simulated point cloud data of the object to be trained specifically includes the following steps:
acquiring similarity between real point cloud data and simulated point cloud data of an object to be trained through a discriminator based on the real point cloud data of the object to be trained and the simulated point cloud data of the object to be trained;
the step of training the laser radar simulation model to be trained according to the discrimination result until the model training condition is met to obtain the laser radar simulation model specifically comprises the following steps:
and if the similarity is greater than or equal to the similarity threshold, determining that the model training condition is met, and taking the laser radar simulation model to be trained as the laser radar simulation model.
In this embodiment, a method for implementing laser radar simulation model training based on a sensing module is introduced. As can be seen from the foregoing embodiments, the GAN includes a generator (i.e., a lidar simulation model) for generating simulation point cloud data and a discriminator for determining the authenticity of the simulation point cloud data, wherein the discriminator may be a perception module.
Specifically, the perception module uses a deep learning algorithm to achieve accurate detection and identification. Taking as an example a discriminator implemented as a deep learning network for point cloud data, data processing including denoising, compression, segmentation, and feature extraction is first carried out on the real point cloud data and the simulated point cloud data respectively. The denoising can make planes smoother while keeping as much detail as possible, removing noise introduced by the equipment. The compressed point cloud data retains the important parts, for example edge data, data of high-frequency parts, and the like. Segmentation means applying a semantic segmentation technique to the real point cloud data and the simulated point cloud data respectively: a first region of interest of the object to be trained is obtained from the segmentation result of the real point cloud data, and a second region of interest of the object to be trained is obtained from the segmentation result of the simulated point cloud data.
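As an illustration of the denoising and compression steps (segmentation and feature extraction are omitted), the following sketch uses the open-source Open3D library; the parameter values are arbitrary, and the actual processing chain of the perception module is not limited to this.

```python
import numpy as np
import open3d as o3d

def preprocess(points: np.ndarray) -> o3d.geometry.PointCloud:
    """Denoise and compress a raw N x 3 point array before discrimination."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    # Denoising: statistical outlier removal smooths planes while keeping
    # detail, removing noise introduced by the equipment.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Compression: voxel downsampling reduces the point count while
    # preserving the overall structure of the cloud.
    return pcd.voxel_down_sample(voxel_size=0.05)

real = preprocess(np.random.rand(10000, 3))       # real point cloud data
simulated = preprocess(np.random.rand(10000, 3))  # simulated point cloud data
```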
Based on the above, the discriminator (i.e. the deep learning network based on the point cloud data) judges the similarity between the first region of interest and the second region of interest, that is, the similarity between the real point cloud data and the simulated point cloud data is obtained, and the similarity is used as a discrimination result. And if the similarity is greater than or equal to the similarity threshold, determining that the model training condition is met, and taking the laser radar simulation model to be trained as the laser radar simulation model. On the contrary, if the similarity is smaller than the similarity threshold, the model training condition is not met, that is, the laser radar simulation model to be trained needs to be trained continuously.
Furthermore, in the embodiment of the application, a mode for realizing laser radar simulation model training based on the perception module is provided, and through the mode, the perception module is used as an important component of the discriminator and used as a trained neural network model, so that the difference between the real point cloud data and the simulated point cloud data can be better identified, and the reliability of the discrimination result is improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in another optional embodiment provided in the embodiment of the present application, the determining, by the discriminator, a discrimination result based on the real point cloud data of the object to be trained and the simulated point cloud data of the object to be trained specifically includes the following steps:
acquiring position information of K first key points according to real point cloud data of an object to be trained, wherein K is an integer greater than or equal to 1;
acquiring position information of K second key points according to simulation point cloud data of an object to be trained, wherein the second key points and the first key points have a mapping relation;
determining the similarity between the real point cloud data and the simulation point cloud data through a discriminator aiming at the position information of K pairs of key points, wherein each pair of key points comprises a first key point and a second key point which have a mapping relation;
the step of training the laser radar simulation model to be trained according to the discrimination result until the model training condition is met to obtain the laser radar simulation model specifically comprises the following steps:
and if the similarity is greater than or equal to the similarity threshold, determining that the model training condition is met, and taking the laser radar simulation model to be trained as the laser radar simulation model.
In this embodiment, a method for implementing laser radar simulation model training based on a similarity algorithm is introduced. As can be seen from the foregoing embodiments, the GAN includes a generator (i.e., a lidar simulation model) for generating simulated point cloud data and a discriminator for determining the authenticity of the simulated point cloud data, wherein the discriminator employs a similarity algorithm.
Specifically, for convenience of understanding, please refer to fig. 11. Fig. 11 is a schematic diagram of implementing laser radar simulation model training based on a similarity algorithm in the embodiment of the present application. As shown in the figure, assuming that the object to be trained is a vehicle, keypoints at corresponding positions are extracted from the real point cloud data and the simulated point cloud data of the vehicle, taking 6 keypoints as an example (i.e., K equals 6). First keypoints Q1, Q2, Q3, Q4, Q5, and Q6 are extracted from the real point cloud data, and second keypoints P1, P2, P3, P4, P5, and P6 are extracted from the simulated point cloud data. Then, similarity W1 between the first keypoint Q1 and the second keypoint P1 is obtained from their position information, similarity W2 from Q2 and P2, similarity W3 from Q3 and P3, similarity W4 from Q4 and P4, similarity W5 from Q5 and P5, and similarity W6 from Q6 and P6.
Based on the above, the similarity of each pair of keypoints is calculated and the average is taken; this average value is the similarity between the real point cloud data and the simulated point cloud data and serves as the discrimination result. If the similarity is greater than or equal to the similarity threshold, it is determined that the model training condition is met, and the laser radar simulation model to be trained is taken as the laser radar simulation model. Conversely, if the similarity is smaller than the similarity threshold, the model training condition is not met, that is, the laser radar simulation model to be trained needs to be trained further.
The position information may specifically be the coordinate positions on the X, Y, and Z axes. The similarity may be cosine similarity, the Pearson correlation coefficient, Jaccard similarity, or log-likelihood similarity, which is not limited here.
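Taking cosine similarity as an example, the pairwise keypoint comparison and averaging described above might look like the following sketch; the keypoint coordinates and the similarity threshold are placeholder values.

```python
import numpy as np

def keypoint_similarity(real_kps: np.ndarray, sim_kps: np.ndarray) -> float:
    """Average cosine similarity over K mapped keypoint pairs (two K x 3 arrays)."""
    sims = []
    for q, p in zip(real_kps, sim_kps):  # pairs (Q1, P1), (Q2, P2), ...
        sims.append(np.dot(q, p) / (np.linalg.norm(q) * np.linalg.norm(p)))
    return float(np.mean(sims))

K = 6
Q = np.random.rand(K, 3)              # keypoints from the real point cloud data
P = Q + 0.01 * np.random.randn(K, 3)  # keypoints from the simulated point cloud data
similarity = keypoint_similarity(Q, P)

SIMILARITY_THRESHOLD = 0.95  # assumed value
model_training_condition_met = similarity >= SIMILARITY_THRESHOLD
```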
Further, in the embodiment of the application, a method for realizing laser radar simulation model training based on a similarity algorithm is provided, and through the method, the comparison between the real point cloud data and the simulation point cloud data can be efficiently realized by using the similarity algorithm, so that the similarity between the real point cloud data and the simulation point cloud data is obtained, and the reliability of the judgment result is improved.
With reference to the above description, a method for generating point cloud data in the present application will be described below, and referring to fig. 12, an embodiment of the method for generating point cloud data in the present application includes:
201. acquiring associated test data corresponding to a target object, wherein the associated test data comprises at least one of scene data, environment data and attribute data;
in this embodiment, the point cloud data generating device may obtain the related data input by the tester through the HMI. As can be seen from the foregoing embodiments, the related data include motion data and attribute data for the target object and may further include environment data, so that in the actual test process the point cloud data generating device can acquire the scene data of the target object through the data acquisition device according to the set motion data.
Specifically, the point cloud data generating device may convert the related data input through the HMI into the form of a feature vector. For convenience of explanation, please refer to fig. 5 again. As can be seen from fig. 5, the motion type input by the tester is "overtaking", so the feature vector corresponding to the motion type can be represented as [0,0,1,0]; the element that is "1" marks the position of the type selected by the tester. The speed, acceleration, motion time, and the like may be used directly, so that the feature vector corresponding to the motion data can be represented as [0,0,1,0,36,5,30]. Similarly, referring to fig. 7 again, the weather input by the tester is "snow", so the feature vector corresponding to the weather information can be represented as [0,0,1,0,0]; the wind direction input by the tester is "northeast", so the feature vector corresponding to the wind direction can be represented as [0,1,0,0,0]; and the ultraviolet index input by the tester is "weak", so the feature vector corresponding to the ultraviolet index can be represented as [1,0,0].
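A sketch of this one-hot encoding follows; the option names for the motion type are hypothetical (the text only fixes "overtaking" as the third of four options), while the weather, wind direction, and ultraviolet options follow fig. 7.

```python
def one_hot(options, selected):
    """Encode the selected option as a one-hot list, e.g. 'overtaking' -> [0, 0, 1, 0]."""
    return [1 if opt == selected else 0 for opt in options]

# The motion type names other than "overtaking" are assumptions for this sketch.
motion_types = ["cruising", "following", "overtaking", "braking"]
weather = ["sunny", "rainy", "snow", "cloudy", "frost"]
wind_dirs = ["north", "northeast", "northwest", "south", "southeast", "southwest"]
uv_levels = ["weak", "medium", "strong"]

# One-hot motion type followed by speed (km/h), acceleration, and motion
# time, giving [0, 0, 1, 0, 36, 5, 30] as in the text.
motion_vec = one_hot(motion_types, "overtaking") + [36, 5, 30]

env_vec = (one_hot(weather, "snow")           # [0, 0, 1, 0, 0]
           + one_hot(wind_dirs, "northeast")  # [0, 1, 0, 0, 0]
           + one_hot(uv_levels, "weak"))      # [1, 0, 0]
```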
It should be noted that the point cloud data generating device may be deployed in a server, a terminal device, or a system composed of a server and a terminal device, and the present application is not limited thereto.
202. And generating simulation point cloud data corresponding to the target object through a laser radar simulation model based on the associated test data corresponding to the target object, wherein the laser radar simulation model is obtained by training through the method.
In this embodiment, after the point cloud data generation device obtains the association test data, the association test data is input into the trained lidar simulation model, and the lidar simulation model outputs the simulation point cloud data of the target object.
Optionally, the associated test data may include not only scene data, environment data, and attribute data, but also motion data, at this time, at least one of the motion data, the environment data, and the attribute data input by the tester through the HMI may be input into the trained lidar simulation model, and the lidar simulation model outputs simulation point cloud data of the target object.
Specifically, assuming that a tester inputs, through the HMI, associated test data for a target object (e.g., a vehicle) with a speed of 36 km/h and an acceleration of 0, the target object may be controlled to travel at a constant speed of 36 km/h along a straight line, and the simulated point cloud data of the target object output by the lidar simulation model may subsequently be applied to an automatic driving algorithm. The automatic driving algorithm generally refers to algorithms such as perception, positioning, decision, planning, and control. Typically, the point cloud data are processed directly by the perception module, which sends its processing results (object identification, object detection, and the like) to the decision module; the decision module plans a path according to a strategy, and finally the control module drives the vehicle (acceleration, deceleration, steering, and the like).
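Purely for illustration, the chain from point cloud to vehicle command described above can be pictured with the following skeleton; the module interfaces and return values are invented for this sketch and do not correspond to any concrete autonomous driving stack.

```python
class DrivingPipeline:
    """Illustrative perception -> decision -> control chain."""

    def perceive(self, point_cloud):
        # Object identification / detection on the (simulated) point cloud.
        return {"objects": ["vehicle"], "positions": [(10.0, 0.0, 0.0)]}

    def decide(self, perception):
        # Plan a path according to a strategy, based on the perception result.
        return "keep_lane" if perception["objects"] else "accelerate"

    def control(self, decision):
        # Map the decision to (throttle, steering) commands.
        return {"keep_lane": (0.0, 0.0), "accelerate": (1.0, 0.0)}[decision]

pipeline = DrivingPipeline()
simulated_cloud = [(1.0, 2.0, 0.5)]  # placeholder simulated point cloud data
throttle, steering = pipeline.control(
    pipeline.decide(pipeline.perceive(simulated_cloud)))
```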
For easy understanding, please refer to fig. 13, fig. 13 is a schematic flow chart of a point cloud data generation method in an embodiment of the present application, and as shown in the figure, the entire framework is composed of three parts, i.e., a data acquisition part, a GAN training part and a data simulation part, the data acquisition part relates to a plurality of modules, including an HMI dispatching center, traffic scene facilities (e.g., traffic road test sites, various types of vehicles, dummy and obstacles, etc.), LiDAR equipment, data acquisition equipment (e.g., distance measuring devices, speed measuring devices, angle measuring devices, etc.), and a data processing module (e.g., a data fusion module and a data transmission module), etc. The following will be described with reference to specific steps:
in step S1, a specific test scenario is constructed through the HMI: the HMI dispatch center builds a typical traffic test scenario in the test field, for example by having drivers control the movement of traffic vehicles and dummies, placing vehicles, dummies, and obstacles on tracks, moving them along specific trajectories, and the like.
In step S2, the required data acquisition equipment is deployed in the test scene. For example, a distance measuring device is deployed, through which the distances between the LiDAR device and traffic participants such as vehicles, pedestrians, and static obstacles in the test scene can be obtained. As another example, a speed measurement device is deployed, through which the movement speed of dynamic objects (e.g., vehicles and pedestrians) relative to the LiDAR device may be obtained.
In step S3, the location of the LiDAR device is fixed within the test arena, and internal parameters of the LiDAR device, including laser radar line counts and scanning frequency, and external parameters, including coordinate information for the X-axis, Y-axis, and Z-axis, are obtained.
In step S4, the data processing module obtains the real point cloud data and the associated training data, where the associated training data include the scene data collected by the data acquisition equipment, the environment data and attribute data input through the HMI, and the like. On this basis, the associated training information of each traffic element (for example, vehicles, pedestrians, static obstacles, etc.) is correlated and matched with the real point cloud data through pre-calibrated timestamps and timestamp synchronization. The information of each traffic element is the truth value for the laser point cloud data. Each frame of data is aligned, and the data transmission module uploads the aligned data to the GAN training platform.
It should be noted that the environmental data are parameters for simulating a real environment, for example, when a traffic scene is constructed, artificial rainfall or snowfall and other scenes can be simulated as required, and these data are also used as environmental data and added to the marked truth value vector, so that a generator (i.e., a laser radar simulation model) for subsequent training can also simulate corresponding simulation point cloud data according to different weather settings.
In step S5, the real point cloud data detected by the LiDAR device and the associated training data are treated as truth data.
In step S6, the aligned data is uploaded to the GAN training platform through the data transmission module.
In step S7, the GAN training platform receives real point cloud data detected by the LiDAR device.
In step S8, the GAN training platform receives the associated training data.
In step S9, the discriminator determines the similarity between the real point cloud data and the simulated point cloud data, and if the two are very similar, it indicates that the training is completed. The discriminator can be a sensing module trained based on a deep learning network, or a point cloud data similarity judging module used for judging the difference between simulated point cloud data and real point cloud data, and when the similarity between the simulated point cloud data and the real point cloud data is high, the training is finished.
In step S10, the associated training data is input to the generator (i.e., the lidar simulation model).
In step S11, the simulated point cloud data is output by the generator (i.e., the lidar simulation model), and the output simulated point cloud data is input to the discriminator.
In step S12, the GAN includes a simulated lidar data generator (i.e., lidar simulation model) and a discriminator, and the training is ended and the generator (i.e., lidar simulation model) is acquired when the discriminator cannot discriminate whether the simulated point cloud data is true or false.
In step S13, the trained generators (i.e., LiDAR simulation models) are ultimately deployed into an automated driving simulation system, so that real LiDAR devices may be simulated in a virtual environment.
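Steps S7 to S12 amount to a standard adversarial training loop. The following PyTorch sketch shows one such loop under simplifying assumptions: point clouds are flattened to fixed-size vectors, the network architectures are placeholders, and the conditioning dimension is invented for the example.

```python
import torch
import torch.nn as nn

N_POINTS, COND_DIM = 1024, 16  # assumed sizes: points per frame, conditioning vector

generator = nn.Sequential(  # lidar simulation model: associated data -> point cloud
    nn.Linear(COND_DIM, 256), nn.ReLU(), nn.Linear(256, N_POINTS * 3))
discriminator = nn.Sequential(  # judges real vs simulated point clouds
    nn.Linear(N_POINTS * 3, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(assoc_data, real_cloud):
    """assoc_data: (B, COND_DIM) associated training data; real_cloud: (B, N_POINTS*3)."""
    ones = torch.ones(real_cloud.size(0), 1)
    zeros = torch.zeros(real_cloud.size(0), 1)
    # Discriminator step: real clouds labelled 1, simulated clouds labelled 0.
    fake = generator(assoc_data).detach()
    d_loss = bce(discriminator(real_cloud), ones) + bce(discriminator(fake), zeros)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator step: try to make the discriminator judge simulated clouds as real.
    g_loss = bce(discriminator(generator(assoc_data)), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

d_loss, g_loss = train_step(torch.randn(8, COND_DIM), torch.randn(8, N_POINTS * 3))
```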
The embodiment of the application provides a point cloud data generation method, which includes the steps of firstly obtaining correlation test data corresponding to a target object, and then generating simulation point cloud data corresponding to the target object through a laser radar simulation model based on the correlation test data corresponding to the target object. Through the mode, at least one of scene data, environment data and attribute data is introduced to serve as a training parameter in the process of training the laser radar simulation model, so that the laser radar simulation model can learn the influence of a complex real environment on the generation of point cloud data from more layers, the precision of the laser radar simulation model is improved, and the deviation between the simulation point cloud data and the real point cloud data is reduced.
Referring to fig. 14, fig. 14 is a schematic diagram of an embodiment of a simulation model training apparatus in an embodiment of the present application, and the simulation model training apparatus 30 includes:
an obtaining module 301, configured to obtain real point cloud data and associated training data of an object to be trained, where the associated training data has a corresponding relationship with the real point cloud data, and the associated training data includes at least one of scene data, environment data, and attribute data;
the obtaining module 301 is further configured to obtain simulation point cloud data of the object to be trained through the laser radar simulation model to be trained based on the associated training data of the object to be trained;
a determining module 302, configured to determine a determination result through a discriminator based on real point cloud data of an object to be trained and simulated point cloud data of the object to be trained;
and the training module 303 is used for training the laser radar simulation model to be trained according to the judgment result until the model training condition is met, so as to obtain the laser radar simulation model.
In the embodiment of the application, a simulation model training device is provided, and by adopting the device, in the process of training the laser radar simulation model, at least one of scene data, environment data and attribute data is introduced as a training parameter, so that the laser radar simulation model can learn the influence of a complex real environment on point cloud data generation from more layers, thereby being beneficial to improving the precision of the laser radar simulation model and reducing the deviation between the simulation point cloud data and the real point cloud data.
Optionally, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the simulation model training apparatus 30 provided in the embodiment of the present application, the associated training data of the object to be trained includes scene data;
an obtaining module 301, specifically configured to receive motion data for an object to be trained through a human-computer interface;
controlling the object to be trained to move in the test scene based on the motion data;
acquiring real point cloud data through laser radar equipment based on the motion condition of an object to be trained in a test scene;
scene data are acquired through data acquisition equipment based on the motion condition of an object to be trained in a test scene.
In the embodiment of the application, a simulation model training device is provided. With this device, motion data related to the object to be trained can be input according to actual requirements, thereby achieving automated control of the motion of the object to be trained; the device therefore offers a high degree of automation. In addition, setting different motion situations allows the training to adapt better to different scenes, improving the authenticity and adaptability of training.
Optionally, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the simulation model training device 30 provided in the embodiment of the present application, the scene data includes at least one of a distance, a speed, and an included angle;
the obtaining module 301 is specifically configured to, if the scene data includes a distance, obtain, by using a distance measuring device, a distance between an object to be trained and the laser radar apparatus based on a motion condition of the object to be trained in a test scene;
if the scene data comprises the speed, acquiring the speed of the object to be trained relative to the laser radar equipment through a speed measuring device based on the motion condition of the object to be trained in the test scene;
and if the scene data comprises the included angle, acquiring the included angle of the object to be trained relative to the laser radar equipment through the angle measuring device based on the motion condition of the object to be trained in the test scene.
In the embodiment of the application, the simulation model training device is provided, and by adopting the device, corresponding scene data can be captured by using a relevant device, model training is carried out by using the scene data, and the influence of the motion condition of an object on the generation of point cloud data can be learned, so that the precision of a laser radar simulation model is improved, and the deviation between the simulation point cloud data and the real point cloud data is reduced.
Optionally, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the simulation model training apparatus 30 provided in the embodiment of the present application, the associated training data of the object to be trained includes environment data;
the acquisition module 301 is specifically configured to acquire real point cloud data through a laser radar device;
the environmental data is received through a human-machine interface.
In the embodiment of the application, a simulation model training device is provided. With this device, relevant environment data can be input according to actual requirements, thereby achieving automated control of the environment; the device therefore offers a high degree of automation. In addition, setting different environment data allows the training to adapt better to different scenes, improving the authenticity and adaptability of training.
Optionally, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the simulation model training apparatus 30 provided in the embodiment of the present application, the environmental data includes at least one of weather information, temperature, humidity, wind direction, wind power, and ultraviolet index;
the obtaining module 301 is specifically configured to receive a weather selection instruction through a human-computer interface if the environment data includes weather information, where the weather selection instruction carries an identifier of the weather information;
if the environmental data comprises a temperature, receiving a first parameter aiming at the temperature through a human-computer interface;
if the environmental data comprises humidity, receiving a second parameter aiming at the humidity through a human-computer interface;
if the environment data comprises a wind direction, receiving a wind direction selection instruction through a human-computer interface, wherein the wind direction selection instruction carries an identification of the wind direction;
if the environmental data comprises wind power, receiving a third parameter aiming at the wind power through a human-computer interface;
and if the environment data comprises the ultraviolet index, receiving an intensity selection instruction through a human-computer interface, wherein the intensity selection instruction carries an identifier of the ultraviolet index.
In the embodiment of the application, a simulation model training device is provided. With this device, specific environment data can be input according to actual requirements, thereby achieving automated control of the environment; the device therefore offers a high degree of automation. In addition, setting specific environment data allows the training to adapt better to different scenes, improving the authenticity and adaptability of training.
Optionally, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the simulation model training apparatus 30 provided in the embodiment of the present application, the environmental data includes a road surface type;
the obtaining module 301 is specifically configured to receive a type selection instruction through a human-computer interface, where the type selection instruction carries an identifier of a road surface type.
In the embodiment of the application, a simulation model training device is provided. With this device, specific environment data can be input according to actual requirements, thereby achieving automated control of the environment; the device therefore offers a high degree of automation. In addition, setting specific environment data allows the training to adapt better to different scenes, improving the authenticity and adaptability of training.
Optionally, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the simulation model training apparatus 30 provided in the embodiment of the present application, the associated training data of the object to be trained includes attribute data, and the attribute data includes at least one of an object type, an object size, and a reaction level;
the acquisition module 301 is specifically configured to acquire real point cloud data through a laser radar device;
and acquiring attribute data aiming at the object to be trained through a data input interface.
In the embodiment of the application, the simulation model training device is provided, and by adopting the device, relevant attribute data can be input according to actual requirements, so that parameters can be flexibly configured for an object to be trained, and the relevant attributes of the object to be trained can be adapted to different scenes, so that the training reality and adaptability are improved.
Alternatively, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the simulation model training device 30 provided in the embodiment of the present application,
the acquiring module 301 is specifically configured to acquire real point cloud data to be matched corresponding to an object to be trained, where the real point cloud data to be matched includes M first data to be matched, each first data to be matched corresponds to a timestamp, and M is an integer greater than or equal to 1;
acquiring associated training data to be matched, wherein the associated training data to be matched comprises M second data to be matched, and each second data to be matched corresponds to a timestamp;
and matching the real point cloud data to be matched and the associated training data to be matched according to the time stamp corresponding to each first data to be matched and the time stamp corresponding to each second data to be matched to obtain matched real point cloud data and associated training data.
In the embodiment of the application, a simulation model training device is provided. With this device, the real point cloud data to be matched collected by the LiDAR equipment can be matched with the associated training data to be matched collected by the data acquisition equipment, realizing time synchronization; the consistency of the training data can thus be achieved, improving the precision of the laser radar simulation model and reducing the deviation between the simulated point cloud data and the real point cloud data.
Alternatively, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the simulation model training device 30 provided in the embodiment of the present application,
the determining module 302 is specifically configured to obtain, by a discriminator, a similarity between real point cloud data and simulated point cloud data of an object to be trained based on the real point cloud data of the object to be trained and the simulated point cloud data of the object to be trained;
the training module 303 is specifically configured to determine that a model training condition is satisfied if the similarity is greater than or equal to the similarity threshold, and use the lidar simulation model to be trained as the lidar simulation model.
In the embodiment of the application, a simulation model training device is provided. With this device, the perception module serves as an important component of the discriminator and, as a trained neural network model, can better identify the difference between the real point cloud data and the simulated point cloud data, improving the reliability of the discrimination result.
Alternatively, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the simulation model training device 30 provided in the embodiment of the present application,
a determining module 302, configured to obtain location information of K first key points according to real point cloud data of an object to be trained, where K is an integer greater than or equal to 1;
acquiring position information of K second key points according to simulation point cloud data of an object to be trained, wherein the second key points and the first key points have a mapping relation;
determining the similarity between the real point cloud data and the simulation point cloud data through a discriminator aiming at the position information of K pairs of key points, wherein each pair of key points comprises a first key point and a second key point which have a mapping relation;
the training module 303 is specifically configured to determine that a model training condition is satisfied if the similarity is greater than or equal to the similarity threshold, and use the lidar simulation model to be trained as the lidar simulation model.
In the embodiment of the application, a simulation model training device is provided. With this device, the comparison between the real point cloud data and the simulated point cloud data can be carried out efficiently using a similarity algorithm, so as to obtain the similarity between the real point cloud data and the simulated point cloud data, improving the reliability of the discrimination result.
Referring to fig. 15, fig. 15 is a schematic view of an embodiment of a point cloud data generating apparatus in an embodiment of the present application, and a point cloud data generating apparatus 40 includes:
an obtaining module 401, configured to obtain associated test data corresponding to a target object, where the associated test data includes at least one of scene data, environment data, and attribute data;
a generating module 402, configured to generate, based on the associated test data corresponding to the target object, simulation point cloud data corresponding to the target object through a laser radar simulation model, where the laser radar simulation model is obtained by using the above-mentioned training method.
In the embodiment of the application, a point cloud data generation device is provided, and by adopting the device, at least one of scene data, environment data and attribute data is introduced as a training parameter in the process of training a laser radar simulation model, so that the laser radar simulation model can learn the influence of a complex real environment on point cloud data generation from more layers, thereby being beneficial to improving the precision of the laser radar simulation model and reducing the deviation between simulation point cloud data and real point cloud data.
Referring to fig. 16, fig. 16 is a schematic structural diagram of a server provided in an embodiment of the present disclosure. The server 500 may vary considerably in configuration and performance, and may include one or more central processing units (CPUs) 522 (e.g., one or more processors), a memory 532, and one or more storage media 530 (e.g., one or more mass storage devices) storing an application program 542 or data 544. The memory 532 and the storage medium 530 may provide transient or persistent storage. The program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Furthermore, the central processing unit 522 may be configured to communicate with the storage medium 530 and execute, on the server 500, the series of instruction operations stored in the storage medium 530.
The server 500 may also include one or more power supplies 526, one or more wired or wireless network interfaces 550, one or more input-output interfaces 558, and/or one or more operating systems 541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
In the embodiment of the present application, the CPU 522 included in the server also has the following functions:
acquiring real point cloud data and associated training data of an object to be trained, wherein the associated training data and the real point cloud data have a corresponding relation, and the associated training data comprises at least one of scene data, environment data and attribute data;
acquiring simulation point cloud data of the object to be trained through a laser radar simulation model to be trained based on the associated training data of the object to be trained;
determining a discrimination result through a discriminator based on real point cloud data of an object to be trained and simulated point cloud data of the object to be trained;
and training the laser radar simulation model to be trained according to the judgment result until the model training condition is met, so as to obtain the laser radar simulation model.
In the embodiment of the present application, the CPU 522 included in the server also has the following functions:
acquiring associated test data corresponding to a target object, wherein the associated test data comprises at least one of scene data, environment data and attribute data;
and generating simulation point cloud data corresponding to the target object through a laser radar simulation model based on the associated test data corresponding to the target object.
As shown in fig. 17, for convenience of description, only the portions related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiments of the present application. In the embodiment of the present application, the terminal device being a smartphone is taken as an example for explanation:
Fig. 17 is a block diagram illustrating a partial structure of a smartphone related to the terminal device provided in an embodiment of the present application. Referring to fig. 17, the smartphone includes: a radio frequency (RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a sensor 650, an audio circuit 660, a wireless fidelity (WiFi) module 670, a processor 680, and a power supply 690. Those skilled in the art will appreciate that the smartphone structure shown in fig. 17 is not limiting; the smartphone may include more or fewer components than shown, combine some components, or arrange the components differently.
The following describes each component of the smartphone in detail with reference to fig. 17:
The RF circuit 610 may be used for receiving and transmitting signals during information transmission and reception or during a call. In particular, it receives downlink information from a base station and forwards it to the processor 680 for processing; in addition, data related to the uplink is transmitted to the base station. In general, the RF circuit 610 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 610 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, short message service (SMS), and the like.
The memory 620 may be used to store software programs and modules, and the processor 680 executes the various functional applications and data processing of the smartphone by running the software programs and modules stored in the memory 620. The memory 620 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required by at least one function (such as a sound playing function and an image playing function), and the like, and the data storage area may store data created according to the use of the smartphone (such as audio data and a phonebook) and the like. Further, the memory 620 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 630 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the smartphone. Specifically, the input unit 630 may include a touch panel 631 and other input devices 632. The touch panel 631, also referred to as a touch screen, may collect touch operations of a user on or near it (for example, operations performed by the user on or near the touch panel 631 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 680, and it can also receive and execute commands sent by the processor 680. In addition, the touch panel 631 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch panel 631, the input unit 630 may include other input devices 632. In particular, the other input devices 632 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 640 may be used to display information input by or provided to the user and various menus of the smartphone. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 631 may cover the display panel 641; when the touch panel 631 detects a touch operation on or near it, the touch operation is transmitted to the processor 680 to determine the type of the touch event, and the processor 680 then provides a corresponding visual output on the display panel 641 according to the type of the touch event. Although in fig. 17 the touch panel 631 and the display panel 641 are two separate components implementing the input and output functions of the smartphone, in some embodiments the touch panel 631 and the display panel 641 may be integrated to implement the input and output functions of the smartphone.
The smartphone may also include at least one sensor 650, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 641 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 641 and/or the backlight when the smartphone is moved close to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally along three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications for recognizing the attitude of the smartphone (such as switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and in functions related to vibration recognition (such as a pedometer and tap detection). Other sensors that can be configured on the smartphone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not further described here.
The audio circuit 660, a speaker 661, and a microphone 662 can provide an audio interface between the user and the smartphone. The audio circuit 660 may transmit an electrical signal converted from received audio data to the speaker 661, which converts it into a sound signal for output; conversely, the microphone 662 converts a collected sound signal into an electrical signal, which is received by the audio circuit 660 and converted into audio data; the audio data is processed by the processor 680 and then sent via the RF circuit 610 to, for example, another smartphone, or output to the memory 620 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 670, the smartphone can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 17 shows the WiFi module 670, it is understood that the module is not an essential component of the smartphone and may be omitted as needed without changing the essence of the invention.
The processor 680 is the control center of the smartphone. It connects the various parts of the entire smartphone through various interfaces and lines, and performs the various functions of the smartphone and processes data by running or executing the software programs and/or modules stored in the memory 620 and calling the data stored in the memory 620, thereby monitoring the smartphone as a whole. Optionally, the processor 680 may include one or more processing units; optionally, the processor 680 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It is to be understood that the modem processor may also not be integrated into the processor 680.
The smartphone also includes a power supply 690 (e.g., a battery) that provides power to the various components. Optionally, the power supply may be logically connected to the processor 680 via a power management system, so that functions such as charging, discharging, and power consumption management are implemented via the power management system.
Although not shown, the smartphone may further include a camera, a Bluetooth module, and the like, which are not described here.
In this embodiment, the processor 680 included in the terminal device further has the following functions:
acquiring real point cloud data and associated training data of an object to be trained, wherein the associated training data and the real point cloud data have a corresponding relation, and the associated training data comprises at least one of scene data, environment data and attribute data;
acquiring simulation point cloud data of the object to be trained through a laser radar simulation model to be trained based on the associated training data of the object to be trained;
determining a discrimination result through a discriminator based on the real point cloud data of the object to be trained and the simulation point cloud data of the object to be trained;
and training the laser radar simulation model to be trained according to the discrimination result until the model training condition is met, thereby obtaining the laser radar simulation model.
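By way of illustration only (this sketch is not part of the patent text), the adversarial training scheme described above can be pictured as a conditional GAN-style loop. The module definitions, tensor shapes, synthetic data, and stopping threshold below are all assumptions made for the example:

    # Illustrative sketch only: modules, shapes and threshold are assumptions,
    # not taken from the patent text.
    import torch
    import torch.nn as nn

    N_POINTS, COND_DIM, BATCH = 1024, 8, 16    # points per cloud, condition dims

    class LidarSimulator(nn.Module):
        """Stand-in for the 'laser radar simulation model to be trained'."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(COND_DIM, 256), nn.ReLU(),
                nn.Linear(256, N_POINTS * 3))
        def forward(self, cond):               # cond: scene/environment/attribute data
            return self.net(cond).view(-1, N_POINTS, 3)

    class Discriminator(nn.Module):
        """Scores how 'real' a point cloud looks (1 = real, 0 = simulated)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_POINTS * 3, 256), nn.ReLU(),
                nn.Linear(256, 1), nn.Sigmoid())
        def forward(self, pts):
            return self.net(pts.flatten(1))

    sim, disc = LidarSimulator(), Discriminator()
    opt_g = torch.optim.Adam(sim.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
    bce = nn.BCELoss()

    for step in range(1000):
        cond = torch.randn(BATCH, COND_DIM)    # stand-in for associated training data
        real = torch.randn(BATCH, N_POINTS, 3) # stand-in for real lidar point clouds
        fake = sim(cond)

        # Discriminator step: separate real point clouds from simulated ones.
        d_loss = (bce(disc(real), torch.ones(BATCH, 1)) +
                  bce(disc(fake.detach()), torch.zeros(BATCH, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Simulator step: make simulated clouds indistinguishable from real ones.
        g_loss = bce(disc(fake), torch.ones(BATCH, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

        # One reading of the 'model training condition': the discriminator can
        # no longer tell the simulated clouds apart (assumed threshold of 0.5).
        with torch.no_grad():
            if disc(sim(cond)).mean().item() >= 0.5:
                break

Nothing in the patent fixes these architectures; the sketch only makes the generator and discriminator roles concrete.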
In this embodiment, the processor 680 included in the terminal device further has the following functions:
acquiring associated test data corresponding to a target object, wherein the associated test data comprises at least one of scene data, environment data and attribute data;
and generating simulation point cloud data corresponding to the target object through a laser radar simulation model based on the associated test data corresponding to the target object.
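A possible deployment-time counterpart of the above, again purely illustrative (the checkpoint path, the feature layout of the associated test data, and the LidarSimulator class from the previous sketch are assumptions):

    # Illustrative only: reuses the assumed LidarSimulator class sketched above;
    # the checkpoint path and feature layout are invented for the example.
    import torch

    model = LidarSimulator()
    model.load_state_dict(torch.load("lidar_sim.pt"))  # hypothetical checkpoint
    model.eval()

    # Associated test data, e.g. [distance, speed, angle, weather id, temperature,
    # humidity, wind force, road type] packed into the assumed COND_DIM features.
    assoc_test = torch.tensor([[25.0, 3.2, 0.5, 1.0, 20.0, 0.4, 2.0, 0.0]])
    with torch.no_grad():
        sim_cloud = model(assoc_test)                  # (1, N_POINTS, 3) points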
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to execute the methods described in the foregoing embodiments.
Embodiments of the present application also provide a computer program product including a program, which, when run on a computer, causes the computer to perform the methods described in the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into units is only one kind of logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that is essential or that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A training method of a simulation model is characterized by comprising the following steps:
acquiring real point cloud data and associated training data of an object to be trained, wherein the associated training data has a corresponding relation with the real point cloud data, and the associated training data comprises at least one of scene data, environment data and attribute data;
based on the associated training data of the object to be trained, acquiring simulation point cloud data of the object to be trained through a laser radar simulation model to be trained;
determining a discrimination result through a discriminator based on the real point cloud data of the object to be trained and the simulation point cloud data of the object to be trained;
and training the laser radar simulation model to be trained according to the discrimination result until a model training condition is met, so as to obtain the laser radar simulation model.
2. The training method according to claim 1, wherein the associated training data of the object to be trained includes the scene data;
the acquiring of the real point cloud data and the associated training data of the object to be trained comprises the following steps:
receiving motion data aiming at the object to be trained through a human-computer interface;
controlling the object to be trained to move in a test scene based on the motion data;
acquiring the real point cloud data through laser radar equipment based on the motion condition of the object to be trained in the test scene;
and acquiring the scene data through data acquisition equipment based on the motion condition of the object to be trained in the test scene.
3. The training method according to claim 2, wherein the scene data comprises at least one of a distance, a speed, and an included angle;
the acquiring the scene data through data acquisition equipment based on the motion condition of the object to be trained in the test scene comprises:
if the scene data comprises the distance, acquiring the distance between the object to be trained and the laser radar equipment through a distance measuring device based on the motion condition of the object to be trained in the test scene;
if the scene data comprises the speed, acquiring the speed of the object to be trained relative to the laser radar equipment through a speed measuring device based on the motion condition of the object to be trained in the test scene;
and if the scene data comprises the included angle, acquiring the included angle of the object to be trained relative to the laser radar equipment through an angle measuring device based on the motion condition of the object to be trained in the test scene.
4. The training method according to any one of claims 1 to 3, wherein the associated training data of the object to be trained comprises the environment data;
the acquiring of the real point cloud data and the associated training data of the object to be trained comprises the following steps:
acquiring the real point cloud data through laser radar equipment;
and receiving the environment data through a human-computer interface.
5. The training method according to claim 4, wherein the environment data comprises at least one of weather information, temperature, humidity, wind direction, wind force, and ultraviolet index;
the receiving the environment data through the human-computer interface comprises:
if the environment data comprises the weather information, receiving a weather selection instruction through the human-computer interface, wherein the weather selection instruction carries an identifier of the weather information;
if the environment data comprises the temperature, receiving a first parameter for the temperature through the human-computer interface;
if the environment data comprises the humidity, receiving a second parameter for the humidity through the human-computer interface;
if the environment data comprises the wind direction, receiving a wind direction selection instruction through the human-computer interface, wherein the wind direction selection instruction carries an identifier of the wind direction;
if the environment data comprises the wind force, receiving a third parameter for the wind force through the human-computer interface;
and if the environment data comprises the ultraviolet index, receiving an intensity selection instruction through the human-computer interface, wherein the intensity selection instruction carries an identifier of the ultraviolet index.
6. The training method according to claim 4, wherein the environment data comprises a road surface type;
the receiving the environment data through the human-computer interface comprises:
and receiving a type selection instruction through the human-computer interface, wherein the type selection instruction carries an identifier of the road surface type.
7. The training method according to claim 1, wherein the associated training data of the object to be trained comprises the attribute data, and the attribute data comprises at least one of an object type, an object size, and a reaction level;
the acquiring of the real point cloud data and the associated training data of the object to be trained comprises the following steps:
acquiring the real point cloud data through laser radar equipment;
and acquiring the attribute data of the object to be trained through a data input interface.
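Read together, claims 1 to 7 describe the associated training data as a bundle of optional scene, environment, and attribute fields. Purely as an illustration (every field name below is invented for the example, not taken from the claims), such a bundle might be represented as:

    # Hypothetical container for the "associated training data" of claims 1-7;
    # field names and types are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class SceneData:
        distance_m: Optional[float] = None      # object-to-lidar distance
        speed_mps: Optional[float] = None       # speed relative to the lidar
        angle_deg: Optional[float] = None       # included angle relative to the lidar

    @dataclass
    class EnvironmentData:
        weather_id: Optional[int] = None        # identifier from the weather selection instruction
        temperature_c: Optional[float] = None   # first parameter
        humidity_pct: Optional[float] = None    # second parameter
        wind_direction_id: Optional[int] = None # identifier from the wind direction selection
        wind_force: Optional[float] = None      # third parameter
        uv_index_id: Optional[int] = None       # identifier from the intensity selection
        road_surface_id: Optional[int] = None   # identifier from the type selection (claim 6)

    @dataclass
    class AttributeData:
        object_type: Optional[str] = None
        object_size_m: Optional[Tuple[float, float, float]] = None  # e.g. (L, W, H)
        reaction_level: Optional[int] = None

    @dataclass
    class AssociatedTrainingData:
        scene: Optional[SceneData] = None
        environment: Optional[EnvironmentData] = None
        attributes: Optional[AttributeData] = None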
8. The training method according to claim 1, wherein the acquiring of the real point cloud data of the object to be trained and the associated training data comprises:
acquiring real point cloud data to be matched corresponding to the object to be trained, wherein the real point cloud data to be matched comprises M pieces of first data to be matched, each piece of first data to be matched corresponds to a timestamp, and M is an integer greater than or equal to 1;
acquiring associated training data to be matched, wherein the associated training data to be matched comprises M pieces of second data to be matched, and each piece of second data to be matched corresponds to a timestamp;
and matching the real point cloud data to be matched and the associated training data to be matched according to the timestamp corresponding to each piece of first data to be matched and the timestamp corresponding to each piece of second data to be matched, so as to obtain the matched real point cloud data and associated training data.
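Claim 8 pairs each point cloud frame with the associated-data record closest in time. A minimal sketch of such timestamp matching is given below; the nearest-neighbour rule and the tolerance value are assumptions, since the claim does not fix a specific matching rule:

    # Assumed matching rule for claim 8: pair each real point cloud frame with
    # the associated-data record whose timestamp is nearest, within a tolerance.
    def match_by_timestamp(first_items, second_items, tol=0.05):
        """first_items / second_items: lists of (timestamp, payload) tuples."""
        second_sorted = sorted(second_items, key=lambda x: x[0])
        pairs = []
        for ts, cloud in first_items:
            nearest = min(second_sorted, key=lambda x: abs(x[0] - ts))
            if abs(nearest[0] - ts) <= tol:      # within 50 ms, say
                pairs.append((cloud, nearest[1]))
        return pairs

    # e.g. real lidar frames at 10 Hz vs. associated records at slightly offset times
    real = [(0.00, "cloud0"), (0.10, "cloud1"), (0.20, "cloud2")]
    assoc = [(0.01, "rec0"), (0.11, "rec1"), (0.19, "rec2")]
    matched = match_by_timestamp(real, assoc)    # yields 3 matched pairs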
9. The training method according to any one of claims 1 to 8, wherein the determining a discrimination result by a discriminator based on real point cloud data of the object to be trained and simulated point cloud data of the object to be trained comprises:
based on the real point cloud data of the object to be trained and the simulation point cloud data of the object to be trained, acquiring the similarity between the real point cloud data and the simulation point cloud data through the discriminator;
the training of the laser radar simulation model to be trained according to the discrimination result until the model training condition is met to obtain the laser radar simulation model comprises the following steps:
and if the similarity is greater than or equal to a similarity threshold value, determining that the model training condition is met, and taking the laser radar simulation model to be trained as the laser radar simulation model.
10. The training method according to any one of claims 1 to 8, wherein the determining a discrimination result by a discriminator based on real point cloud data of the object to be trained and simulated point cloud data of the object to be trained comprises:
acquiring position information of K first key points according to the real point cloud data of the object to be trained, wherein K is an integer greater than or equal to 1;
acquiring position information of K second key points according to the simulation point cloud data of the object to be trained, wherein the second key points and the first key points have a mapping relation;
determining, by the discriminator, a similarity between the real point cloud data and the simulation point cloud data based on position information of K pairs of key points, wherein each pair of key points comprises a first key point and a second key point having the mapping relation;
the training of the laser radar simulation model to be trained according to the discrimination result until the model training condition is met to obtain the laser radar simulation model comprises the following steps:
and if the similarity is greater than or equal to a similarity threshold value, determining that the model training condition is met, and taking the laser radar simulation model to be trained as the laser radar simulation model.
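Claim 10 compares the positions of K mapped key-point pairs. The following scoring function is shown purely as an assumption for illustration, since the claim leaves the exact similarity computation to the discriminator:

    # Sketch of the key-point comparison in claim 10: K first key points from the
    # real cloud map one-to-one onto K second key points from the simulated cloud,
    # and similarity decays with their mean positional error. The scoring function
    # and threshold are assumptions.
    import numpy as np

    def keypoint_similarity(real_kps, sim_kps, scale=1.0):
        """real_kps, sim_kps: (K, 3) arrays of matched key-point positions."""
        errors = np.linalg.norm(real_kps - sim_kps, axis=1)  # per-pair distance
        return float(np.exp(-errors.mean() / scale))         # 1.0 means identical

    real_kps = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    sim_kps = real_kps + 0.02                                # small simulation error
    meets_condition = keypoint_similarity(real_kps, sim_kps) >= 0.9  # assumed threshold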
11. A method for generating point cloud data, comprising:
acquiring associated test data corresponding to a target object, wherein the associated test data comprises at least one of scene data, environment data and attribute data;
generating simulation point cloud data corresponding to the target object through a laser radar simulation model based on the associated test data corresponding to the target object, wherein the laser radar simulation model is obtained by adopting the training method of any one of claims 1 to 10.
12. A simulation model training apparatus, comprising:
an acquisition module, a determining module, and a training module, wherein the acquisition module is used for acquiring real point cloud data and associated training data of an object to be trained, the associated training data has a corresponding relation with the real point cloud data, and the associated training data comprises at least one of scene data, environment data and attribute data;
the acquisition module is further used for acquiring simulation point cloud data of the object to be trained through a laser radar simulation model to be trained based on the associated training data of the object to be trained;
the determining module is used for determining a discrimination result through a discriminator based on the real point cloud data of the object to be trained and the simulation point cloud data of the object to be trained;
and the training module is used for training the laser radar simulation model to be trained according to the discrimination result until a model training condition is met, so as to obtain the laser radar simulation model.
13. A point cloud data generation device, comprising:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring associated test data corresponding to a target object, and the associated test data comprises at least one of scene data, environment data and attribute data;
and the generating module is used for generating, based on the associated test data corresponding to the target object, simulation point cloud data corresponding to the target object through a laser radar simulation model, wherein the laser radar simulation model is obtained by using the training method according to any one of claims 1 to 10.
14. A computer device, comprising: a memory, a processor, and a bus system;
wherein the memory is used for storing programs;
the processor is configured to execute the program in the memory to perform, according to instructions in the program code, the training method of any one of claims 1 to 10 or the generating method of claim 11;
the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.
15. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the training method of any one of claims 1 to 10 or the generating method of claim 11.
CN202011254212.4A 2020-11-11 2020-11-11 Simulation model training method and point cloud data generation method and device Active CN112256589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011254212.4A CN112256589B (en) 2020-11-11 2020-11-11 Simulation model training method and point cloud data generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011254212.4A CN112256589B (en) 2020-11-11 2020-11-11 Simulation model training method and point cloud data generation method and device

Publications (2)

Publication Number Publication Date
CN112256589A true CN112256589A (en) 2021-01-22
CN112256589B CN112256589B (en) 2022-02-01

Family

ID=74265236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011254212.4A Active CN112256589B (en) 2020-11-11 2020-11-11 Simulation model training method and point cloud data generation method and device

Country Status (1)

Country Link
CN (1) CN112256589B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190065933A1 (en) * 2017-08-31 2019-02-28 Ford Global Technologies, Llc Augmenting Real Sensor Recordings With Simulated Sensor Data
CN107862293A (en) * 2017-09-14 2018-03-30 北京航空航天大学 Radar based on confrontation generation network generates colored semantic image system and method
US20190197778A1 (en) * 2017-12-21 2019-06-27 Luminar Technologies, Inc. Object identification and labeling tool for training autonomous vehicle controllers
CN110163048A (en) * 2018-07-10 2019-08-23 腾讯科技(深圳)有限公司 Identification model training method, recognition methods and the equipment of hand key point
US20200058156A1 (en) * 2018-08-17 2020-02-20 Nec Laboratories America, Inc. Dense three-dimensional correspondence estimation with multi-level metric learning and hierarchical matching
WO2020103700A1 (en) * 2018-11-21 2020-05-28 腾讯科技(深圳)有限公司 Image recognition method based on micro facial expressions, apparatus and related device
CN109598066A (en) * 2018-12-05 2019-04-09 百度在线网络技术(北京)有限公司 Effect evaluation method, device, equipment and the storage medium of prediction module
CN110322416A (en) * 2019-07-09 2019-10-11 腾讯科技(深圳)有限公司 Image processing method, device and computer readable storage medium
CN110490960A (en) * 2019-07-11 2019-11-22 阿里巴巴集团控股有限公司 A kind of composograph generation method and device
CN110705101A (en) * 2019-09-30 2020-01-17 深圳市商汤科技有限公司 Network training method, vehicle driving method and related product
CN111339642A (en) * 2020-02-13 2020-06-26 创新奇智(合肥)科技有限公司 Simulation model calibration method, system, readable medium and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
INFERENCE: ""Understanding Minibatch Discrimination in GANs"", 《HTTPS://WWW.INFERENCE.VC/UNDERSTANDING-MINIBATCH-DISCRIMINATION-IN-GANS/》 *
陈诚: ""通俗理解生成对抗网络GAN"", 《HTTPS://ZHUANLAN.ZHIHU.COM/P/33752313》 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11774978B2 (en) 2018-07-24 2023-10-03 Pony Ai Inc. Generative adversarial network enriched driving simulation
US11392132B2 (en) * 2018-07-24 2022-07-19 Pony Ai Inc. Generative adversarial network enriched driving simulation
CN113053223A (en) * 2021-02-25 2021-06-29 深圳市讯方技术股份有限公司 Automatic driving experiment teaching method, vehicle model and system thereof
CN113642681A (en) * 2021-10-13 2021-11-12 中国空气动力研究与发展中心低速空气动力研究所 Matching method of aircraft model surface mark points
CN113642681B (en) * 2021-10-13 2022-01-04 中国空气动力研究与发展中心低速空气动力研究所 Matching method of aircraft model surface mark points
CN113822892A (en) * 2021-11-24 2021-12-21 腾讯科技(深圳)有限公司 Evaluation method, device and equipment of simulated radar and computer program product
CN114353799B (en) * 2021-12-30 2023-09-05 武汉大学 Indoor rapid global positioning method for unmanned platform carrying multi-line laser radar
CN114353799A (en) * 2021-12-30 2022-04-15 武汉大学 Indoor rapid global positioning method for unmanned platform carrying multi-line laser radar
CN114386293B (en) * 2022-03-22 2022-07-08 之江实验室 Virtual-real synthesized laser radar point cloud generation method and device
CN114386293A (en) * 2022-03-22 2022-04-22 之江实验室 Virtual-real synthesized laser radar point cloud generation method and device
CN115225695A (en) * 2022-07-15 2022-10-21 阿波罗智能技术(北京)有限公司 Radar message sending method, device, equipment, medium and program product
CN115225695B (en) * 2022-07-15 2024-03-01 阿波罗智能技术(北京)有限公司 Radar message sending method, device, equipment, medium and program product
CN116152770A (en) * 2023-04-19 2023-05-23 深圳佑驾创新科技有限公司 3D target matching model building method and device
CN116152770B (en) * 2023-04-19 2023-09-22 深圳佑驾创新科技股份有限公司 3D target matching model building method and device

Also Published As

Publication number Publication date
CN112256589B (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN112256589B (en) Simulation model training method and point cloud data generation method and device
US11487288B2 (en) Data synthesis for autonomous control systems
CN109459734B (en) Laser radar positioning effect evaluation method, device, equipment and storage medium
CN108955702B (en) Lane-level map creation system based on three-dimensional laser and GPS inertial navigation system
CN108226924B (en) Automobile driving environment detection method and device based on millimeter wave radar and application of automobile driving environment detection method and device
CN112639882B (en) Positioning method, device and system
CN111291697B (en) Method and device for detecting obstacles
CN110044371A (en) A kind of method and vehicle locating device of vehicle location
JP2018534603A (en) High-precision map data processing method, apparatus, storage medium and equipment
CN108469817B (en) Unmanned ship obstacle avoidance control system based on FPGA and information fusion
CN112305559A (en) Power transmission line distance measuring method, device and system based on ground fixed-point laser radar scanning and electronic equipment
CN112163280B (en) Method, device and equipment for simulating automatic driving scene and storage medium
CN110210384B (en) Road global information real-time extraction and representation system
WO2020112122A1 (en) Interactive virtual interface
US11961272B2 (en) Long range localization with surfel maps
CN110083099A (en) One kind meeting automobile function safety standard automatic Pilot architecture system and working method
CN114295139A (en) Cooperative sensing positioning method and system
US20220234588A1 (en) Data Recording for Advanced Driving Assistance System Testing and Validation
CN113820694A (en) Simulation ranging method, related device, equipment and storage medium
CN113899405A (en) Integrated online slope intelligent monitoring and early warning system and early warning method
CN117029840A (en) Mobile vehicle positioning method and system
JP2023181990A (en) Neural network model training method and image generation method
CN206671562U (en) A kind of driver's driving behavior data collecting system based on laser sensor
CN115657494A (en) Virtual object simulation method, device, equipment and storage medium
CN113076830A (en) Environment passing area detection method and device, vehicle-mounted terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40038183

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant