CN115876210A - Map data generation method and device, electronic equipment and storage medium


Info

Publication number
CN115876210A
Authority
CN
China
Prior art keywords
point cloud
vehicle
target
determining
processed
Legal status
Pending
Application number
CN202211521252.XA
Other languages
Chinese (zh)
Inventor
程风
于振洋
蔡程颖
徐国梁
蔡仁澜
万国伟
张晔
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211521252.XA
Publication of CN115876210A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides a map data generation method and device, an electronic device and a storage medium, relating to the field of computer technology, and in particular to artificial intelligence fields such as intelligent transportation and automatic driving. The method includes: in response to a set event occurring in a driving scene of a first vehicle, determining an occurrence location area corresponding to the set event; acquiring first track information of the first vehicle in the occurrence location area; and generating target map data corresponding to the occurrence location area according to the first track information.

Description

Map data generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for generating map data, an electronic device, and a storage medium.
Background
Artificial intelligence is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking and planning), covering technologies at both the hardware level and the software level. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing and knowledge graph technologies.
In the related art, a vehicle is usually positioned on the basis of an offline map of its driving area. However, because environmental change events occur frequently in a vehicle driving scene, positioning against stale offline map data gives a poor result, and the offline map data needs to be updated. Yet over the entire driving area of the vehicle, an environmental change event may affect only a certain local area, so updating the entire offline map has a long cycle and wastes a large amount of resources.
Disclosure of Invention
The disclosure provides a map data generation method, a vehicle pose determination method, corresponding apparatuses, an electronic device, a storage medium and a computer program product.
According to a first aspect of the present disclosure, there is provided a map data generation method, performed by a first vehicle, the method comprising: in response to a set event occurring in a driving scene of a first vehicle, determining an occurrence location area corresponding to the set event; acquiring first track information of a first vehicle in an occurrence position area; and generating target map data corresponding to the occurrence position area according to the first track information.
According to a second aspect of the present disclosure, there is provided a method of determining a vehicle pose, performed by a second vehicle, the method comprising: determining first track information and/or target map data, wherein the first track information describes track conditions of a first vehicle in an occurrence position area of a set event, the set event has occurred in a driving scene of the first vehicle, and the target map data is generated by the first vehicle according to the first track information; and determining the target pose of the second vehicle according to the first track information and/or the target map data.
According to a third aspect of the present disclosure, there is provided a map data generation apparatus, including: a first determining module, configured to determine, in response to a set event occurring in a driving scene of a first vehicle, an occurrence location area corresponding to the set event; a first acquiring module, configured to acquire first track information of the first vehicle in the occurrence location area; and a generating module, configured to generate target map data corresponding to the occurrence location area according to the first track information.
According to a fourth aspect of the present disclosure, there is provided a vehicle pose determination apparatus, configured in a second vehicle, the apparatus including: a second determining module, configured to determine first track information and/or target map data, where the first track information describes a track condition of a first vehicle in an occurrence location area of a set event, the set event has occurred in a driving scene of the first vehicle, and the target map data is generated by the first vehicle according to the first track information; and a third determining module, configured to determine a target pose of the second vehicle according to the first track information and/or the target map data.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a map data generation method according to the first aspect of the present disclosure or a method of determining a vehicle pose according to the second aspect of the present disclosure.
According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to execute a map data generation method as the first aspect of the present disclosure or a determination method of a vehicle pose as the second aspect of the present disclosure.
According to a seventh aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of a map data generation method as in the first aspect of the present disclosure, or performs the steps of a vehicle pose determination method as in the second aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic illustration according to a fifth embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a seventh embodiment of the present disclosure;
FIG. 8 is a schematic diagram according to an eighth embodiment of the present disclosure;
FIG. 9 is a schematic diagram according to a ninth embodiment of the present disclosure;
FIG. 10 is a schematic diagram according to a tenth embodiment of the present disclosure;
FIG. 11 is a schematic diagram according to an eleventh embodiment of the present disclosure;
FIG. 12 shows a schematic block diagram of an example electronic device to implement a map data generation method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure.
It should be noted that the main execution body of the map data generation method of this embodiment is a map data generation apparatus, the apparatus may be implemented by software and/or hardware, the apparatus may be configured in an electronic device, and the electronic device may include, but is not limited to, a terminal, a server, and the like.
The embodiment of the disclosure relates to the technical field of artificial intelligence such as intelligent transportation and automatic driving.
Artificial Intelligence (AI) is a new technical science that studies and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence.
Intelligent transportation refers to the effective and comprehensive application of advanced information technology, data communication technology, sensor technology, electronic control technology, computer technology and the like to the entire transportation management system, so as to establish a comprehensive transportation and management system that is real-time, accurate and efficient and functions over a large area and in all directions.
An autonomous vehicle, also known as a driverless car, computer-driven car or wheeled mobile robot, is an intelligent connected vehicle that achieves driverless operation through a computer system. It relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices and a global positioning system, allowing the computer to operate the motor vehicle automatically and safely without any human intervention.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good customs of the public order.
As shown in fig. 1, the map data generation method, which is executed by a first vehicle, includes:
the map data generation method described in the embodiment of the present disclosure may be executed by a first vehicle, and the first vehicle may be, for example, any vehicle in a vehicle driving scene, which is not limited to this.
S101: in response to occurrence of a set event in a driving scene of a first vehicle, an occurrence location area corresponding to the set event is determined.
The setting event may be, for example, an environment change event in a vehicle driving scene, and the environment change event may be, for example, an obstacle change event, a lane line change event, a road change event, or the like in the vehicle driving scene, which is not limited thereto.
The location area corresponding to the set event in the vehicle driving scene may be referred to as the occurrence location area; specific examples include a road section where an obstacle has changed or a road section where a lane line has changed, which is not limited thereto.
That is to say, in the embodiment of the present disclosure, it may be determined whether a set event occurs in a driving scene of a first vehicle, and when the set event occurs in the driving scene of the vehicle, the occurrence position of the set event is located, and a position area corresponding to the set event is used as the occurrence position area, which is not limited to this.
S102: first trajectory information of a first vehicle in an occurrence location area is acquired.
The first track information may be used to describe a track condition of the first vehicle in the occurrence position area, and the first track information may specifically be vehicle pose information corresponding to the first vehicle in the occurrence position area at different times, which is not limited to this.
In this embodiment of the present disclosure, after the occurrence location area corresponding to the set event is determined in response to the set event occurring in the driving scene of the first vehicle, the first track information of the first vehicle in the occurrence location area may be acquired, and the subsequent map data generation steps may then be triggered based on the first track information.
In some embodiments, the first track information of the first vehicle in the occurrence location area may be acquired on the basis of a Global Positioning System (GPS) installed in the first vehicle: when the first vehicle passes through the occurrence location area, its track in the area is positioned to obtain the vehicle pose information of the first vehicle in the occurrence location area, and the acquired vehicle pose information is used as the first track information, which is not limited thereto.
In other embodiments, sensor data of the scene where the first vehicle is located may be collected in real time while the first vehicle is driving, and the first vehicle may be positioned based on the sensor data collected at different times, so as to obtain its position information in the occurrence location area at those times; the position information determined at the different times together serves as the first track information, which is not limited thereto.
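For concreteness, the following sketch shows one way the first track information might be represented in code: an ordered list of timestamped vehicle poses. The class and field names are illustrative assumptions, not structures defined by this disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VehiclePose:
    timestamp: float   # acquisition time, seconds
    x: float           # position in a world frame (e.g., from GPS), metres
    y: float
    z: float
    roll: float        # orientation, radians
    pitch: float
    yaw: float

# The "first track information" is then simply the ordered pose sequence
# recorded inside the occurrence location area.
FirstTrackInfo = List[VehiclePose]
```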
S103: and generating target map data corresponding to the occurrence position area according to the first track information.
The target map data corresponding to the occurrence location area may be, for example, a local map corresponding to the occurrence location area, which is not limited to this.
After the first track information of the first vehicle in the occurrence position area is acquired, the local map corresponding to the occurrence position area may be generated according to the first track information, and the generated local map corresponding to the occurrence position area may be used as the target map data corresponding to the occurrence position area, which is not limited in this regard.
It can be understood that, since a set event (for example, an environment change event) may exist only in a certain driving area, generating the local map corresponding to the occurrence location area from the first track information effectively shortens the map updating cycle and avoids the resource waste caused by updating the entire map of the vehicle driving scene.
Optionally, in this embodiment of the present disclosure, after the first track information of the first vehicle in the occurrence location area is acquired and the target map data corresponding to the occurrence location area is generated according to it, the first track information and/or the target map data may be sent to a second vehicle, so that the first vehicle synchronizes the updated target map data to the second vehicle in time. This keeps the map data in the second vehicle fresh and thus effectively helps improve the safety of the second vehicle's automatic driving.
The second vehicle may be the same as or different from the first vehicle, which is not limited herein.
In this embodiment of the present disclosure, an occurrence location area corresponding to a set event is determined in response to the set event occurring in the driving scene of a first vehicle, first track information of the first vehicle in the occurrence location area is acquired, and target map data corresponding to the occurrence location area is generated according to the first track information. Because the target map data is generated only for the occurrence location area, the map updating cycle can be effectively shortened, and the waste of resources caused by updating the entire map of the vehicle driving scene can be avoided while the freshness of the map is guaranteed.
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure.
As shown in fig. 2, the map data generation method includes:
s201: the method comprises the steps of collecting a point cloud to be processed of a driving scene of a first vehicle.
The point cloud collected for the driving scene of the first vehicle during driving may be referred to as the point cloud to be processed.
That is to say, in this embodiment of the present disclosure, a corresponding acquisition component (for example, a lidar) may be mounted on the first vehicle in advance, and then, while the first vehicle is driving, the point cloud of its driving scene is collected as the point cloud to be processed based on the pre-mounted lidar.
S202: a reference point cloud corresponding to a driving scene of a first vehicle is obtained.
The reference point cloud is a point cloud acquired in advance for the driving scene; specifically, it may be point cloud data acquired for the driving scene of the first vehicle before an environmental change event occurs there, that is, before the first vehicle passes through the occurrence location area. The point cloud acquired in advance is used as the reference point cloud, and the reference point cloud and the point cloud to be processed may then be combined to determine whether an environmental change event has occurred in the driving scene of the first vehicle, as described in the subsequent embodiments.
S203: and determining whether a set event occurs in the driving scene of the first vehicle according to the point cloud to be processed and the reference point cloud.
In this embodiment of the present disclosure, after the point cloud to be processed of the driving scene of the first vehicle is collected and the reference point cloud corresponding to that driving scene is obtained, whether a set event has occurred in the driving scene of the first vehicle may be determined according to the point cloud to be processed and the reference point cloud.
The point cloud collected for the driving scene of the first vehicle can represent the scene information of that driving scene, so when a set event occurs there, the point cloud changes correspondingly. Whether a set event has occurred can therefore be identified accurately by comparing the point cloud to be processed with the pre-acquired reference point cloud, effectively recognizing set events in the driving scene of the first vehicle.
In some embodiments, whether a set event has occurred in the driving scene of the first vehicle may be determined by comparing the point cloud to be processed with the reference point cloud: for example, a to-be-processed point cloud feature is extracted from the point cloud to be processed, a reference point cloud feature is extracted from the reference point cloud, and the two features are compared; when they differ, it is determined that a set event has occurred in the driving scene of the first vehicle.
In other embodiments, the point cloud to be processed and the reference point cloud may be converted into the same spatial coordinate system. When no environmental change event has occurred in the driving scene of the first vehicle, the coordinates of the point cloud to be processed and of the reference point cloud in this common coordinate system should be the same; if they differ, it can be determined that a set event has occurred.
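As an illustration of this embodiment, the following Python sketch flags a set event by comparing voxel occupancy of the two clouds, assuming both have already been transformed into the same coordinate system; the voxel size and change ratio are assumed parameters, not values from this disclosure.

```python
import numpy as np

def occupied_voxels(points: np.ndarray, voxel: float = 0.5) -> set:
    """Return the set of voxel indices occupied by an (N, 3) point cloud."""
    return set(map(tuple, np.floor(points / voxel).astype(np.int64)))

def set_event_occurred(to_process: np.ndarray,
                       reference: np.ndarray,
                       voxel: float = 0.5,
                       change_ratio: float = 0.05) -> bool:
    cur = occupied_voxels(to_process, voxel)
    ref = occupied_voxels(reference, voxel)
    # Voxels present in one cloud but not the other indicate a scene change.
    changed = len(cur ^ ref)
    return changed / max(len(ref), 1) > change_ratio
```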
S204: in response to occurrence of a set event in a driving scene of a first vehicle, an occurrence location area corresponding to the set event is determined.
S205: first trajectory information of a first vehicle in an occurrence location area is acquired.
S206: and generating target map data corresponding to the occurrence position area according to the first track information.
For the description of S204-S206, reference may be made to the above embodiments, and details are not repeated herein.
In this embodiment of the present disclosure, a point cloud to be processed of the driving scene of a first vehicle is collected, and a reference point cloud corresponding to that driving scene, acquired in advance, is obtained; whether a set event has occurred in the driving scene of the first vehicle is then determined according to the point cloud to be processed and the reference point cloud, so that set events can be identified accurately and effectively. In response to a set event occurring, the corresponding occurrence location area is determined, the first track information of the first vehicle in that area is acquired, and target map data corresponding to the occurrence location area is generated according to the first track information. This effectively shortens the map updating cycle and avoids the waste of resources caused by updating the entire map of the vehicle driving scene.
Fig. 3 is a schematic diagram according to a third embodiment of the present disclosure.
As shown in fig. 3, the map data generation method includes:
s301: the method comprises the steps of collecting a point cloud to be processed of a driving scene of a first vehicle.
S302: the method comprises the steps of obtaining a reference point cloud corresponding to a driving scene of a first vehicle, wherein the reference point cloud is a point cloud acquired in advance for the driving scene.
For the description of S301 to S302, reference may be made to the above embodiments, which are not described herein again.
S303: a mapped location of the first vehicle is determined.
Wherein the mapping position represents a relative position of the local point cloud of the first vehicle in the point cloud to be processed.
That is to say, in the embodiment of the present disclosure, the point cloud to be processed is acquired for the driving scene of the first vehicle, so that the point cloud to be processed includes the local point cloud corresponding to the first vehicle, and a relative position of the local point cloud corresponding to the first vehicle in the point cloud to be processed may be referred to as a mapping position of the first vehicle.
In this embodiment of the present disclosure, the mapping position of the first vehicle may be determined by constructing a three-dimensional coordinate system with the center of the first vehicle as its origin, and taking the position coordinates of the local point cloud corresponding to the vehicle in this coordinate system as the mapping position of the first vehicle, which is not limited thereto.
S304: and rasterizing the point cloud to be processed to obtain rasterized point cloud to be processed, and rasterizing the reference point cloud to obtain reference rasterized point cloud.
In this embodiment of the present disclosure, after the point cloud to be processed of the driving scene of the first vehicle is collected and the reference point cloud corresponding to that driving scene is obtained, the point cloud to be processed may be rasterized to obtain its corresponding rasterized point cloud, referred to as the rasterized point cloud to be processed, and the reference point cloud may likewise be rasterized to obtain the reference rasterized point cloud, which is not limited thereto.
S305: and determining a depth value to be processed corresponding to the rasterized point cloud to be processed, and determining a reference depth value corresponding to the reference rasterized point cloud.
The depth value of the rasterized point cloud to be processed may be referred to as a depth value to be processed, and the depth value of the reference rasterized point cloud may be referred to as a reference depth value.
In this embodiment of the present disclosure, the rasterized point cloud to be processed and the reference rasterized point cloud may be regarded as spatial coordinates in a three-dimensional coordinate system, so as to determine the depth value to be processed corresponding to the rasterized point cloud to be processed and the reference depth value corresponding to the reference rasterized point cloud.
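A minimal sketch of one possible rasterization, assuming a 2D grid over the x-y plane with each cell's depth value taken as the maximum z coordinate of its points; the grid resolution and the depth definition are illustrative assumptions, not choices stated by this disclosure.

```python
import numpy as np

def rasterize_depth(points: np.ndarray, cell: float = 0.2) -> dict:
    """Map each occupied (ix, iy) grid cell of an (N, 3) cloud to a depth value."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    depth = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        depth[(i, j)] = max(depth.get((i, j), -np.inf), z)  # per-cell max height
    return depth

# e.g. depth_to_process = rasterize_depth(cloud_to_process)
#      depth_reference  = rasterize_depth(reference_cloud)
```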
S306: and determining whether a set event occurs in the driving scene of the first vehicle according to the depth value to be processed, the reference depth value and the relative position.
In this embodiment of the present disclosure, after the depth value to be processed corresponding to the rasterized point cloud to be processed and the reference depth value corresponding to the reference rasterized point cloud are determined, whether a set event has occurred in the driving scene of the first vehicle may be determined according to the depth value to be processed, the reference depth value and the relative position. Specifically quantifiable depth values are thus extracted from the two rasterized point clouds, so that the determination can be made simply and conveniently on the basis of the depth value to be processed, the reference depth value and the relative position.
Optionally, in some embodiments, this determination may be made as follows: at least one target rasterized point cloud is determined from the rasterized point cloud to be processed according to the depth value to be processed and the reference depth value; the at least one target rasterized point cloud is clustered to obtain a point cloud cluster to be processed; and whether a set event has occurred in the driving scene of the first vehicle is determined according to the relative position and the point cloud cluster to be processed. A set event can thus be determined accurately from the relative position of the first vehicle in the point cloud to be processed and from the point cloud cluster to be processed, effectively improving the accuracy of the determination.
The target rasterized point cloud can be used for representing an object corresponding to a set event or an environment change event in a first vehicle driving scene.
In some embodiments, at least one target rasterized point cloud is determined from the rasterized point cloud to be processed according to the depth value to be processed and the reference depth value, where the depth value to be processed and the reference depth value are compared, and when the depth value to be processed is different from the reference depth value, the rasterized point cloud to be processed corresponding to the depth value to be processed is used as the target rasterized point cloud, which is not limited herein.
Optionally, in some embodiments, the at least one target rasterized point cloud may be determined from the rasterized point cloud to be processed by determining the depth difference between the depth value to be processed and the reference depth value and, if the depth difference is smaller than a difference threshold, taking the rasterized point cloud to be processed corresponding to that depth value as a target rasterized point cloud. A target rasterized point cloud that characterizes the object corresponding to the set event or environment change event can thus be determined accurately from the rasterized point clouds to be processed, which effectively reduces the amount of point cloud data handled during set event determination while guaranteeing the subsequent determination effect.
That is to say, in this embodiment of the present disclosure, the difference between the depth value to be processed and the reference depth value may be determined as the depth difference and compared with a predetermined difference threshold (the difference threshold may be configured adaptively according to the map data generation requirements of the actual service scene, without limitation); when the depth difference is smaller than the difference threshold, the rasterized point cloud to be processed corresponding to that depth value is determined as the target rasterized point cloud.
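Building on the per-cell depth maps of the earlier sketch, the following hypothetical helper applies the depth-difference rule described above; the threshold value is an assumption to be configured per service scene.

```python
def target_cells(depth_to_process: dict, depth_reference: dict,
                 diff_threshold: float = 0.3) -> list:
    """Return grid cells whose depth difference satisfies the stated rule."""
    targets = []
    for cell, d in depth_to_process.items():
        ref = depth_reference.get(cell)
        # Only cells observed in both clouds can be compared; per the rule
        # above, a depth difference below the threshold marks a target cell.
        if ref is not None and abs(d - ref) < diff_threshold:
            targets.append(cell)
    return targets
```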
It can be understood that a set event (for example, an environment change event, without limitation) in the driving scene of the first vehicle that affects the subsequent driving of later vehicles may manifest as a lane line change, an obstacle change and the like. However, the driving scene may also contain environmental changes that do not affect driving, for example a speed bump or a tiny obstacle; processing every rasterized point cloud to be processed in the vehicle driving scene would therefore waste a great deal of resources.
Therefore, in this embodiment of the present disclosure, after the at least one target rasterized point cloud is determined from the rasterized point clouds to be processed, it may be clustered to obtain a point cloud cluster whose points are distributed in a more aggregated state; this cluster may be referred to as the point cloud cluster to be processed, which is not limited thereto.
After the at least one target rasterized point cloud is clustered to obtain the point cloud cluster to be processed, whether a set event has occurred in the driving scene of the first vehicle may be determined according to the relative position and the point cloud cluster to be processed.
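One plausible clustering step is sketched below using DBSCAN over the centre coordinates of the target cells; this disclosure does not name a specific clustering algorithm, and the eps/min_samples values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_targets(cells: list, cell_size: float = 0.2) -> list:
    """Cluster target grid cells; returns one array of cell centres per cluster."""
    centres = (np.asarray(cells, dtype=float) + 0.5) * cell_size
    labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(centres)
    # Group centres by cluster label; label -1 is DBSCAN's noise bucket.
    return [centres[labels == k] for k in set(labels) if k != -1]
```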
Optionally, in some embodiments, this may be done by acquiring the point cloud density and cluster size of the point cloud cluster to be processed and determining its reference position in the point cloud to be processed (the reference position representing the relative position of the point cloud cluster to be processed within the point cloud to be processed), then determining the location distance between the mapping position and the reference position, and finally determining whether a set event has occurred in the driving scene of the first vehicle according to the point cloud density, the cluster size and the location distance.
The point cloud cluster is composed of a plurality of rasterized point clouds, so the value that quantitatively describes the rasterized point clouds within the cluster may be referred to as the point cloud density, and correspondingly the value that quantitatively describes the size of the cluster may be referred to as the cluster size.
In this embodiment of the present disclosure, because the point cloud to be processed is acquired for the driving scene of the first vehicle, it contains the local point cloud corresponding to the point cloud cluster to be processed, and the relative position of that local point cloud within the point cloud to be processed may be referred to as the reference position of the point cloud cluster to be processed.
After the mapping position of the first vehicle and the reference position of the point cloud cluster to be processed in the point cloud to be processed are determined, the location distance between the mapping position and the reference position can be determined, and whether a set event has occurred in the driving scene of the first vehicle can then be determined according to the point cloud density, the cluster size and the location distance.
Optionally, in some embodiments, this determination may be made as follows: when the point cloud density is less than or equal to a density threshold, the cluster size is greater than a size threshold, and the location distance is less than or equal to a distance threshold, it is determined that a set event has occurred in the driving scene of the first vehicle; when the point cloud density is greater than the density threshold, or the cluster size is less than or equal to the size threshold, or the location distance is greater than the distance threshold, it is determined that no set event has occurred.
That is to say, after the point cloud density, the cluster size and the location distance are determined, they may be compared with a predetermined density threshold, size threshold and distance threshold respectively. When the point cloud density is less than or equal to the density threshold, the cluster size is greater than the size threshold, and the location distance is less than or equal to the distance threshold, it is determined that a set event has occurred in the driving scene of the first vehicle; otherwise it is determined that no set event has occurred. Whether a set event has occurred can thus be determined accurately on the basis of specifically quantified values, which effectively improves the operability of set event determination.
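The three-way threshold test just described can be expressed directly in code; all threshold values below are illustrative assumptions, not values given by this disclosure.

```python
def set_event_from_cluster(point_cloud_density: float,
                           cluster_size: float,
                           location_distance: float,
                           density_th: float = 50.0,
                           size_th: float = 1.0,
                           distance_th: float = 30.0) -> bool:
    # A set event is flagged only when the density is low enough, the cluster
    # is large enough, and the cluster lies close enough to the vehicle.
    return (point_cloud_density <= density_th
            and cluster_size > size_th
            and location_distance <= distance_th)
```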
S307: in response to occurrence of a set event in a driving scene of a first vehicle, an occurrence location area corresponding to the set event is determined.
S308: first trajectory information of a first vehicle in an occurrence location area is acquired.
S309: and generating target map data corresponding to the occurrence position area according to the first track information.
For the description of S307 to S309, reference may be made to the above embodiments, which are not described herein again.
In this embodiment of the present disclosure, a point cloud to be processed of the driving scene of a first vehicle is collected and a reference point cloud corresponding to that driving scene, acquired in advance, is obtained; the depth value to be processed corresponding to the rasterized point cloud to be processed and the reference depth value corresponding to the reference rasterized point cloud are determined, and whether a set event has occurred in the driving scene of the first vehicle is determined according to the depth value to be processed, the reference depth value and the relative position. Specifically quantifiable depth values are thus extracted from the two rasterized point clouds, so the determination can be made conveniently. In response to a set event occurring, the corresponding occurrence location area is determined, the first track information of the first vehicle in that area is acquired, and target map data corresponding to the occurrence location area is generated according to the first track information, which effectively shortens the map updating cycle and avoids wasting resources on updating the entire map of the vehicle driving scene.
Fig. 4 is a schematic diagram according to a fourth embodiment of the present disclosure.
As shown in fig. 4, the map data generation method includes:
s401: in response to occurrence of a set event in a driving scene of a first vehicle, an occurrence location area corresponding to the set event is determined.
S402: first trajectory information of a first vehicle in an occurrence location area is acquired.
For the description of S401 to S402, reference may be made to the above embodiments, which are not described herein again.
S403: sensing parameter information of the pose detection sensor is determined.
In this embodiment of the present disclosure, the first vehicle includes a pose detection sensor; the pose detection sensor may be, for example, an Inertial Measurement Unit (IMU), and the first track information is obtained based on detection by the pose detection sensor, which is not limited thereto.
The pose detection sensor may have corresponding parameter information, which may be referred to as sensing parameter information; the sensing parameter information may be, for example, the extrinsic parameters of the pose detection sensor, which is not limited thereto.
That is to say, in this embodiment of the present disclosure, the extrinsic parameters of the pose detection sensor of the first vehicle may be determined and used as the sensing parameter information, and the subsequent map data generation steps may then be triggered based on these extrinsic parameters.
S404: at least one point data in the occurrence location area and/or an area image of the occurrence location area is acquired.
The point data may be, for example, point cloud data corresponding to the occurrence location area, and the area image may be, for example, an image corresponding to the occurrence location area, which is not limited thereto.
In this embodiment of the present disclosure, the first vehicle further includes a point cloud acquisition component, which may be, for example, a lidar, without limitation, and the at least one piece of point data is acquired based on this component.
In this embodiment of the present disclosure, the first vehicle further includes an image capturing component, which may be, for example, a camera, and the area image of the occurrence location area is acquired based on the image capturing component, which is not limited thereto.
That is to say, in this embodiment of the present disclosure, when the first vehicle passes through the occurrence location area, the point cloud of the area may be collected as point data based on the point cloud acquisition component of the first vehicle, and the image of the area may be collected as the area image based on the image capturing component of the first vehicle, which is not limited thereto.
S405: and generating a target point cloud corresponding to the occurrence position area according to the first track information, the sensing parameter information and the at least one point data.
The target point cloud can be used for representing the whole point cloud information of the occurrence position area, and the target point cloud can be understood as a point cloud map of the occurrence position area without limitation.
In the embodiment of the present disclosure, after obtaining at least one point data in the occurrence location area, a target point cloud corresponding to the occurrence location area may be generated according to the first trajectory information, the sensing parameter information, and the at least one point data.
Optionally, in some embodiments, the target point cloud corresponding to the occurrence location area may be generated by determining the device parameter information of the point cloud acquisition device and then generating the target point cloud according to the first track information, the sensing parameter information, the device parameter information and the at least one piece of point data. The generated target point cloud can then accurately represent the set event information of the occurrence location area and be accurately adapted to it, effectively guaranteeing the generation effect of the target point cloud.
The point cloud acquiring device may have corresponding parameter information, which may be referred to as device parameter information, and the device parameter information may be, for example, a device parameter (e.g., radar parameter) of the point cloud acquiring device, which is not limited thereto.
That is to say, in this embodiment of the present disclosure, the target point cloud corresponding to the occurrence location area may be generated according to the first track information, the sensing parameter information, the device parameter information and the at least one piece of point data. Specifically, range image projection may be performed on the vertical and horizontal angles of the point data according to the sensing parameter information and the device parameter information, to obtain an image whose height equals the number of beams of the point cloud acquisition device and whose width equals 360 degrees divided by its horizontal resolution. Ground point data whose vertical angle is below a certain threshold is then removed, point cloud segmentation is performed to obtain a segmented point cloud, and feature extraction is applied to the segmented point cloud: the curvature between adjacent points is calculated to extract line features and plane features, and the perception labels (Label) of the point cloud are used to filter out occluded areas and noise points. Finally, the extracted point cloud features are projected into the world point cloud coordinate system according to the first track information to obtain the target point cloud, which is not limited thereto.
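A minimal sketch of the range-image projection step described above; the beam count, horizontal resolution and vertical field of view are illustrative stand-ins for the actual parameters of the point cloud acquisition device.

```python
import numpy as np

def range_image(points: np.ndarray, n_beams: int = 64,
                h_res_deg: float = 0.2, v_fov=(-25.0, 15.0)) -> np.ndarray:
    """Project an (N, 3) cloud into an image of shape (beams, 360/h_res)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)
    v_ang = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))  # vertical angle
    h_ang = np.degrees(np.arctan2(y, x))                    # horizontal angle
    width = int(360.0 / h_res_deg)
    rows = ((v_ang - v_fov[0]) / (v_fov[1] - v_fov[0]) * (n_beams - 1)).astype(int)
    cols = (((h_ang + 180.0) / h_res_deg) % width).astype(int)
    img = np.zeros((n_beams, width), dtype=np.float32)
    valid = (rows >= 0) & (rows < n_beams)                  # drop out-of-FOV points
    img[rows[valid], cols[valid]] = r[valid]                # store range per pixel
    return img
```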
S406: and generating a target map corresponding to the occurrence position area according to the first track information, the sensing parameter information and the area image.
The target map may be used to represent the overall visual information of the occurrence location area, and the target map may be understood as a visual map of the occurrence location area, which is not limited to this.
In the embodiment of the present disclosure, after acquiring the area image in the occurrence position area, the target map corresponding to the occurrence position area may be generated from the first trajectory information, the sensing parameter information, and the area image.
Optionally, in some embodiments, the target map corresponding to the occurrence location area may be generated by determining the image capturing parameter information of the image capturing component and then generating the target map according to the first track information, the sensing parameter information, the image capturing parameter information and the area image. The generated target map can then accurately represent the visual information of the set event in the occurrence location area and be accurately adapted to it, effectively guaranteeing the generation effect of the target map.
The image capturing component may have corresponding parameter information, which may be referred to as image capturing parameter information; the image capturing parameter information may be, for example, the extrinsic parameters of the image capturing component (for example, the extrinsic parameters of the camera), which is not limited thereto.
That is, in the embodiment of the present disclosure, a target map corresponding to the occurrence position area may be generated based on the first trajectory information, the sensing parameter information, the imaging parameter information, and the area image.
Specifically, the area image and the first track information may be aligned to obtain paired track information and image data; the point cloud is then projected onto the paired area image according to the first track information, with occluded points filtered out, to obtain a target projection image; the target projection image may then be input into a pre-trained visual map generation model to obtain the target map output by that model, which is not limited thereto.
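The alignment step can be sketched as nearest-timestamp pairing of area images with trajectory poses, assuming both streams carry timestamps; the data layout and tolerance are assumptions for illustration only.

```python
import numpy as np

def align_images_to_poses(image_times: np.ndarray, pose_times: np.ndarray,
                          max_dt: float = 0.05) -> list:
    """Return (image_idx, pose_idx) pairs whose timestamps differ by < max_dt s."""
    pairs = []
    for i, t in enumerate(image_times):
        j = int(np.argmin(np.abs(pose_times - t)))  # nearest pose in time
        if abs(pose_times[j] - t) < max_dt:
            pairs.append((i, j))
    return pairs
```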
S407: the target point cloud and/or the target map are/is used together as target map data.
In the embodiment of the disclosure, after the target point cloud corresponding to the occurrence position area is generated according to the first track information, the sensing parameter information and the at least one point data, and the target map corresponding to the occurrence position area is generated according to the first track information, the sensing parameter information and the area image, the target point cloud and/or the target map can be jointly used as the target map data.
In this embodiment of the present disclosure, an occurrence location area corresponding to a set event is determined in response to the set event occurring in the driving scene of the first vehicle, and the first track information of the first vehicle in that area is acquired; the sensing parameter information of the pose detection sensor is determined, and at least one piece of point data in the occurrence location area and/or an area image of the area is acquired; a target point cloud corresponding to the occurrence location area is then generated according to the first track information, the sensing parameter information and the at least one piece of point data, and/or a target map is generated according to the first track information, the sensing parameter information and the area image; the target point cloud and/or the target map then jointly serve as the target map data. Because the target point cloud is generated from the point data of the occurrence location area and the target map from its area image, the two describe the occurrence location area in both the point cloud dimension and the visual dimension, so using them together as the target map data effectively improves the reference value of the target map data and the map data generation effect.
Fig. 5 is a schematic diagram according to a fifth embodiment of the present disclosure.
As shown in fig. 5, the vehicle pose determination method, performed by a second vehicle, includes:
the method for determining the vehicle pose described in the embodiments of the present disclosure may be performed by a second vehicle, and the first vehicle may be, for example, any vehicle in a vehicle driving scene, which is not limited to this.
S501: first trajectory information and/or target map data is determined.
Terms in this embodiment that have the same meaning as in the above embodiments are described there and are not repeated here.
The first track information describes track conditions of the first vehicle in an occurrence position area of a set event, the set event has occurred in a driving scene of the first vehicle, and the target map data is generated by the first vehicle according to the first track information.
Optionally, in some embodiments, the first track information and/or the target map data may be determined by receiving them from the first vehicle, so that the first track information and/or target map data generated by the first vehicle are synchronized into the second vehicle in time, the map data in the second vehicle is updated promptly, and its freshness is effectively guaranteed.
S502: and determining the target pose of the second vehicle according to the first track information and/or the target map data.
After the first track information and/or the target map data are determined, the second vehicle may be positioned according to them to obtain its position and attitude at the current time, and the determined position and attitude are used as the target pose of the second vehicle, which is not limited thereto.
In some embodiments, the target pose of the second vehicle may be determined by first judging, in combination with the first track information, whether the second vehicle passes through the occurrence location area, and, when it does, positioning the second vehicle in combination with the target map data of the occurrence location area so as to determine its target pose, which is not limited thereto.
In this embodiment of the present disclosure, first track information and/or target map data are determined, where the first track information describes the track condition of a first vehicle in the occurrence location area of a set event that has occurred in the driving scene of the first vehicle, and the target map data is generated by the first vehicle according to the first track information; the target pose of the second vehicle is then determined according to the first track information and/or the target map data. The second vehicle is thus positioned on the basis of target map data that the first vehicle generated for the current driving scene, so the determined target pose can effectively meet the positioning requirements of that scene, and the determination effect of the vehicle pose is effectively improved.
Fig. 6 is a schematic diagram according to a sixth embodiment of the present disclosure.
As shown in fig. 6, the vehicle pose determination method includes:
s601: first trajectory information and/or target map data is determined.
For the description of S601, reference may be made to the above embodiments, which are not described herein again.
S602: second trajectory information of a second vehicle is acquired.
The second track information may be used to describe a track condition of the second vehicle in the occurrence position area, and the second track information may be, for example, vehicle pose information corresponding to the second vehicle in the occurrence position area at different times, which is not limited to this.
In this embodiment of the present disclosure, after the first track information and/or the target map data are determined, the second track information of the second vehicle in the occurrence location area may be acquired, and the subsequent pose determination steps may then be triggered based on the second track information.
In some embodiments, the second track information of the second vehicle in the occurrence location area may be acquired on the basis of a Global Positioning System (GPS) installed in the second vehicle: when the second vehicle passes through the occurrence location area, its track in the area is positioned to obtain the vehicle pose information of the second vehicle there, and this vehicle pose information is used as the second track information, which is not limited thereto.
In other embodiments, sensor data of the scene where the second vehicle is located may be collected in real time while the second vehicle is driving, and the second vehicle may be positioned based on the sensor data collected at different times, so as to obtain its position information in the occurrence location area at those times; the position information determined at the different times together serves as the second track information, which is not limited thereto.
S603: and determining whether the second vehicle passes through the occurrence position area according to the first track information and the second track information.
After the first track information is acquired and the second track information of the second vehicle is acquired, whether the second vehicle passes through the occurrence position area or not can be determined according to the first track information and the second track information.
In some embodiments, determining whether the second vehicle passes through the occurrence location area based on the first track information and the second track information may be: comparing the first track information and the second track information to determine a deviation value between them, comparing the deviation value with a predetermined deviation threshold, and determining that the second vehicle passes through the occurrence location area when the deviation value does not exceed the deviation threshold.
In other embodiments, whether the second vehicle passes through the occurrence location area may also be determined according to the first trajectory information and the second trajectory information in combination with a pre-trained deep learning model: that is, the first trajectory information and the second trajectory information are input into the pre-trained deep learning model, and whether the second vehicle passes through the occurrence location area is determined according to the output of the model, which is not limited herein.
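As an illustration of the deviation-based check, the following sketch assumes a point-wise nearest-neighbour deviation metric and an illustrative threshold value; neither is specified by the patent:

```python
# A hedged sketch of the deviation-based pass-through check; the metric
# (mean nearest-neighbour distance) and threshold are assumptions.
import math

def track_deviation(first_track, second_track):
    """Mean distance from each second-track point to its nearest
    first-track point; tracks are lists of (x, y) tuples."""
    total = 0.0
    for (x2, y2) in second_track:
        total += min(math.hypot(x2 - x1, y2 - y1) for (x1, y1) in first_track)
    return total / len(second_track)

def passes_occurrence_area(first_track, second_track, threshold=2.0):
    # A small deviation means the second vehicle follows a track close
    # to the first vehicle's track in the occurrence location area.
    return track_deviation(first_track, second_track) <= threshold
```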
S604: and if the second vehicle passes through the occurrence position area, positioning the second vehicle according to the second track information and the target map data to obtain a target pose.
In the embodiment of the disclosure, if the second vehicle passes through the occurrence position area, the second vehicle can be positioned according to the second track information and the target map data to obtain the target pose.
That is to say, in the embodiment of the present disclosure, when it is determined that the second vehicle passes through the occurrence location area, the target map data of the occurrence location area and the second track information of the second vehicle in the occurrence location area may be combined to locate the second vehicle, so as to determine the target pose of the second vehicle, which is not limited thereto.
In the embodiment of the disclosure, the first track information and/or the target map data are determined, the second track information of the second vehicle is acquired, whether the second vehicle passes through the occurrence position area is determined according to the first track information and the second track information, and when the second vehicle passes through the occurrence position area, the second vehicle is positioned according to the second track information and the target map data to obtain the target pose. In this way, whether the second vehicle passes through the occurrence position area can be accurately determined based on the first track information and the second track information, and when it does, the second vehicle is positioned according to the second track information and the target map data of the occurrence position area, so that the target pose of the second vehicle in the occurrence position area can be accurately determined.
Fig. 7 is a schematic diagram according to a seventh embodiment of the present disclosure.
As shown in fig. 7, the map data generation method includes:
S701: first trajectory information and/or target map data is determined.
S702: second trajectory information of a second vehicle is acquired.
S703: and determining whether the second vehicle passes through the occurrence position area according to the first track information and the second track information.
For description of S701-S703, reference may be made to the above embodiments, which are not described herein again.
S704: and positioning the second vehicle according to the target point cloud and the second track information to obtain a first pose of the second vehicle.
In the embodiment of the disclosure, after determining whether the second vehicle passes through the occurrence position area according to the first track information and the second track information, the second vehicle can be positioned according to the target point cloud and the second track information to obtain the current pose of the second vehicle; the pose of the second vehicle determined based on the target point cloud and the second track information may be referred to as the first pose.
Optionally, in some embodiments, positioning the second vehicle according to the target point cloud and the second track information to obtain the first pose of the second vehicle may be: identifying point cloud line features and point cloud surface features from the target point cloud, and determining the first pose according to the position coordinates, the point cloud line features and the point cloud surface features. Because the line features and surface features extracted from the target point cloud can describe the information in the target point cloud while reducing the amount of feature data to be processed subsequently, the first pose determination effect is ensured and the data amount of subsequent feature processing is reduced to a certain extent.
The point cloud line features are used for describing local line information in the target point cloud, and the point cloud surface features are used for describing local area information in the target point cloud.
In the embodiment of the disclosure, all line features and surface features (in the world coordinate system) in the target point cloud may be used. If the number of features does not satisfy a threshold, no output is performed; if the number of features satisfies the threshold, point cloud segmentation and feature extraction are performed to obtain corresponding point cloud line features and point cloud surface features, occlusion regions and noise points are filtered out according to the semantic labels of the perceived point cloud, and the resulting point cloud line features and point cloud surface features are then converted into the world coordinate system (W system) using the predicted pose of the vehicle, which is not limited thereto.
In the embodiment of the disclosure, after the point cloud line features and the point cloud surface features are identified and obtained from the target point cloud, the first pose is determined according to the position coordinates, the point cloud line features and the point cloud surface features.
Optionally, in some embodiments, determining the first pose according to the position coordinates, the point cloud line features and the point cloud surface features may be: determining a first distance between the position coordinates and the point cloud line features, determining a second distance between the position coordinates and the point cloud surface features, and performing least-squares processing on the first distance and the second distance to obtain the first pose. By combining the first distance between the position coordinates and the point cloud line features with the second distance between the position coordinates and the point cloud surface features, the first pose of the second vehicle in the occurrence position area can be determined accurately.
Specifically, the distance between the position coordinate k and the point cloud line feature may be determined as the first distance: the nearest neighboring feature point u is found in the target point cloud according to the position coordinate k, the feature point v closest to u is then found, and the distance $d_{ek}$ from k to the straight line feature uv (i.e., the point cloud line feature) is obtained as follows:

$$d_{ek} = \frac{\left|\left(\tilde{p}_{k}-\tilde{p}_{u}\right) \times\left(\tilde{p}_{k}-\tilde{p}_{v}\right)\right|}{\left|\tilde{p}_{u}-\tilde{p}_{v}\right|}$$

where $\tilde{p}_{k}$ is the coordinate of the position coordinate k, $\tilde{p}_{u}$ is the coordinate of the feature point u in the target point cloud, and $\tilde{p}_{v}$ is the coordinate of the feature point v in the target point cloud.
Specifically, the distance between the position coordinate k and the point cloud surface feature may be determined as the second distance: the nearest neighboring feature point u is found in the target point cloud according to the position coordinate k, plane feature points v and w are then found among the neighboring feature points of u to form a plane feature (i.e., the point cloud surface feature), and the distance $d_{pk}$ from the position coordinate k to the plane is obtained as follows:

$$d_{pk} = \frac{\left|\left(\tilde{p}_{k}-\tilde{p}_{u}\right) \cdot\left(\left(\tilde{p}_{u}-\tilde{p}_{v}\right) \times\left(\tilde{p}_{u}-\tilde{p}_{w}\right)\right)\right|}{\left|\left(\tilde{p}_{u}-\tilde{p}_{v}\right) \times\left(\tilde{p}_{u}-\tilde{p}_{w}\right)\right|}$$

where $\tilde{p}_{k}$ is the coordinate of the position coordinate k, $\tilde{p}_{u}$ is the coordinate of the nearest neighboring plane feature point u in the target point cloud, and $\tilde{p}_{v}$ and $\tilde{p}_{w}$ are the coordinates of the feature points v and w in the target point cloud.
In the embodiment of the present disclosure, the first distance between the position coordinates and the point cloud line features and the second distance between the position coordinates and the point cloud surface features are determined, and least-squares processing is performed on the first distance and the second distance to obtain the first pose:

$$\min\left(\sum_{k} d_{ek}^{2}+\sum_{k} d_{pk}^{2}\right)$$

where $\sum_{k} d_{ek}^{2}$ represents the sum of the distance errors between the current line features and the corresponding line features in the target point cloud, and $\sum_{k} d_{pk}^{2}$ represents the sum of the distance errors between the current surface features and the corresponding surface features in the target point cloud; the pose minimizing this cost is taken as the first pose.
S705: and positioning the second vehicle according to the target map and the second track information to acquire a second pose of the second vehicle.
In the embodiment of the disclosure, after determining whether the second vehicle passes through the occurrence position area according to the first track information and the second track information, the second vehicle can be positioned according to the target map and the second track information to obtain the current pose of the second vehicle; the pose of the second vehicle determined based on the target map and the second track information may be referred to as the second pose.
Optionally, in some embodiments, positioning the second vehicle according to the target map and the second track information to obtain the second pose of the second vehicle may be: inputting the target map and the second track information together into a visual positioning model, and obtaining the second pose output by the visual positioning model. Positioning the second vehicle with the visual positioning model in combination with the target map and the second track information effectively simplifies the determination logic of the second pose, improves the operability of the second pose determination process, and improves the determination efficiency of the second pose.
That is, in the embodiment of the present disclosure, after the target map of the occurrence location area and the second track information are obtained, the target map and the second track information may be input into the visual positioning model together to obtain the second pose output by the visual positioning model, which is not limited to this.
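As a sketch of this step, the following assumes a hypothetical VisualLocalizer interface — the class name, constructor, and output format are assumptions, since the text does not specify the visual positioning model's architecture or call signature:

```python
# Hypothetical interface for the visual positioning model call.
from typing import List, Tuple

Pose = Tuple[float, float, float]  # (x, y, heading)

class VisualLocalizer:
    def __init__(self, weights_path: str):
        # Path to pretrained visual positioning model weights (assumed).
        self.weights_path = weights_path

    def predict(self, target_map: dict, second_track: List[Pose]) -> Pose:
        """Estimate the second vehicle's pose from the target map of the
        occurrence location area and its second track information."""
        # A real model would encode map and track features through a
        # learned network; this placeholder returns the last track pose.
        return second_track[-1] if second_track else (0.0, 0.0, 0.0)

localizer = VisualLocalizer("visual_positioning.weights")
second_pose = localizer.predict(target_map={}, second_track=[(10.0, 5.0, 0.1)])
```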
S706: and determining the pose of the target according to the first pose and/or the second pose.
In the embodiment of the present disclosure, after the first posture and/or the second posture of the second vehicle is obtained, the target posture may be determined according to the first posture and/or the second posture.
Specifically, the first pose and/or the second pose may be subjected to fusion processing, and the fused pose may be used as the target pose; that is, the first pose and/or the second pose may be input into a pose fusion filter, which performs the fusion processing and outputs the corresponding target pose, which is not limited thereto.
In the embodiment of the disclosure, when the target pose is determined, the first pose and the second pose can be fused effectively to generate a more accurate target pose, and the target pose determination effect is improved effectively.
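The pose fusion filter is not detailed in the text; as a stand-in, the following sketch blends the two estimates with fixed weights, handling the heading angle via its unit vector — the weighting scheme is an illustrative assumption:

```python
# A hedged sketch of fusing the first and second pose estimates.
import math

def fuse_poses(first_pose, second_pose, w_first=0.5):
    """Weighted blend of two (x, y, heading) estimates."""
    x = w_first * first_pose[0] + (1 - w_first) * second_pose[0]
    y = w_first * first_pose[1] + (1 - w_first) * second_pose[1]
    # Blend headings through their unit vectors to stay angle-safe
    # around the -pi/pi wrap-around.
    h = math.atan2(
        w_first * math.sin(first_pose[2]) + (1 - w_first) * math.sin(second_pose[2]),
        w_first * math.cos(first_pose[2]) + (1 - w_first) * math.cos(second_pose[2]),
    )
    return (x, y, h)

target_pose = fuse_poses((10.0, 5.0, 0.10), (10.2, 5.1, 0.12))
```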
Fig. 8 is a schematic diagram according to an eighth embodiment of the present disclosure.
As shown in fig. 8, the map data generation device 80 includes:
a first determining module 801, configured to determine, in response to a set event occurring in a driving scene of the first vehicle, an occurrence location area corresponding to the set event;
a first obtaining module 802, configured to obtain first trajectory information of the first vehicle in the occurrence location area; and
a generating module 803, configured to generate, according to the first trajectory information, target map data corresponding to the occurrence location area.
In some embodiments of the present disclosure, as shown in fig. 9, fig. 9 is a schematic diagram according to a ninth embodiment of the present disclosure, and the map data generating apparatus 90 includes: the first determining module 901, the first obtaining module 902, and the generating module 903, where the map data generating apparatus 90 further includes:
a sending module 904, configured to send the first trajectory information and/or the target map data to a second vehicle.
In some embodiments of the present disclosure, the map data generation apparatus 90 further includes:
the first acquisition module 905 is used for acquiring a point cloud to be processed of a driving scene of the first vehicle;
a second obtaining module 906, configured to obtain a reference point cloud corresponding to a driving scene of the first vehicle, where the reference point cloud is a point cloud acquired in advance for the driving scene;
a second determining module 907, configured to determine whether the set event occurs in the driving scene of the first vehicle according to the point cloud to be processed and the reference point cloud.
In some embodiments of the present disclosure, the second determining module 907 further includes:
a first determining submodule 9071, configured to determine a mapping position of the first vehicle, where the mapping position represents a relative position of a local point cloud of the first vehicle in the point cloud to be processed;
a processing sub-module 9072, configured to perform rasterization on the point cloud to be processed to obtain a rasterized point cloud to be processed, and perform rasterization on the reference point cloud to obtain a reference rasterized point cloud;
a second determining submodule 9073, configured to determine a to-be-processed depth value corresponding to the to-be-processed rasterized point cloud, and determine a reference depth value corresponding to the reference rasterized point cloud; and
a third determining sub-module 9074 is configured to determine whether the set event occurs in the driving scene of the first vehicle according to the depth value to be processed, the reference depth value, and the relative position.
In some embodiments of the present disclosure, the third determining sub-module 9074 is further configured to:
determining at least one target rasterized point cloud from the rasterized point cloud to be processed according to the depth value to be processed and the reference depth value;
clustering the at least one target rasterized point cloud to obtain a point cloud cluster to be processed;
and determining whether the set event occurs in the driving scene of the first vehicle according to the relative position and the point cloud cluster to be processed.
In some embodiments of the present disclosure, the third determining sub-module 9074 is further configured to:
determining a depth difference value between the depth value to be processed and the reference depth value;
and if the depth difference value is smaller than a difference threshold value, determining the rasterized point cloud to be processed corresponding to the depth value to be processed as the target rasterized point cloud.
In some embodiments of the present disclosure, the third determining sub-module 9074 is further configured to:
acquiring the point cloud density and the cluster size of the point cloud cluster to be processed;
determining a reference position of the point cloud cluster to be processed in the point cloud to be processed, wherein the reference position represents the relative position of the point cloud cluster to be processed in the point cloud to be processed;
determining a location distance between the mapped location and the reference location;
and determining whether the set event occurs in the driving scene of the first vehicle according to the point cloud density, the cluster size and the position distance.
In some embodiments of the present disclosure, the third determining sub-module 9074 is further configured to:
if the point cloud density is less than or equal to a density threshold, the cluster size is greater than a size threshold, and the location distance is less than or equal to a distance threshold, determining that the set event occurred in the driving scene of the first vehicle;
if the point cloud density is greater than the density threshold, or the cluster size is less than or equal to the size threshold, or the location distance is greater than the distance threshold, it is determined that the set event does not occur in the driving scene of the first vehicle.
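The decision rule in these two branches can be written directly as a predicate; the threshold values below are illustrative assumptions only, not values taken from the patent:

```python
# Mirror of the decision rule above: a sparse-enough, large-enough
# point cloud cluster close to the vehicle's mapped position signals
# that the set event occurred in the driving scene.
def set_event_occurred(point_cloud_density: float,
                       cluster_size: float,
                       location_distance: float,
                       density_threshold: float = 50.0,
                       size_threshold: float = 1.0,
                       distance_threshold: float = 30.0) -> bool:
    return (point_cloud_density <= density_threshold
            and cluster_size > size_threshold
            and location_distance <= distance_threshold)
```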
In some embodiments of the present disclosure, the first vehicle comprises: a pose detection sensor, the first trajectory information being detected based on the pose detection sensor;
wherein the generating module 903 is further configured to:
determining sensing parameter information of the pose detection sensor;
acquiring at least one point data in the occurrence position area and/or an area image of the occurrence position area;
generating a target point cloud corresponding to the occurrence position area according to the first track information, the sensing parameter information and the at least one point data; and/or
Generating a target map corresponding to the occurrence position area according to the first track information, the sensing parameter information and the area image;
the target point cloud and/or the target map are/is used together as the target map data.
In some embodiments of the present disclosure, the first vehicle further comprises: point cloud acquisition equipment, wherein the at least one point data is acquired based on the point cloud acquisition equipment;
the generating module 903 is further configured to:
determining equipment parameter information of the point cloud acquisition equipment;
and generating a target point cloud corresponding to the occurrence position area according to the first track information, the sensing parameter information, the equipment parameter information and the at least one point data.
In some embodiments of the present disclosure, the first vehicle further comprises: a camera assembly, wherein the area image is acquired based on the camera assembly;
wherein the generating module 903 is further configured to:
determining shooting parameter information of the camera assembly;
and generating a target map corresponding to the occurrence position area according to the first track information, the sensing parameter information, the shooting parameter information and the area image.
It is understood that the map data generating device 90 in fig. 9 of the present embodiment and the map data generating device 80 in the foregoing embodiment, the first determining module 901 and the first determining module 801 in the foregoing embodiment, the first acquiring module 902 and the first acquiring module 802 in the foregoing embodiment, and the generating module 903 and the generating module 803 in the foregoing embodiment may have the same functions and structures.
The explanation of the map data generation method described above is also applicable to the map data generation device of the present embodiment.
In this embodiment, by determining an occurrence location area corresponding to a set event in response to the occurrence of the set event in the driving scene of the first vehicle, acquiring first trajectory information of the first vehicle in the occurrence location area, and generating target map data corresponding to the occurrence location area according to the first trajectory information, when the set event occurs in the driving scene of the first vehicle, the target map data corresponding to the occurrence location area is generated according to the first trajectory information of the occurrence location area, so that a map updating cycle can be effectively reduced, and waste of resources caused by updating the entire map in the driving scene of the vehicle can be effectively avoided while map freshness is ensured.
Fig. 10 is a schematic diagram according to a tenth embodiment of the present disclosure.
As shown in fig. 10, the vehicle pose determination apparatus 100 includes:
a third determining module 1001, configured to determine first trajectory information and/or target map data, where the first trajectory information describes a trajectory situation of a first vehicle in an occurrence location area of a set event, the set event having occurred in a driving scene of the first vehicle, and the target map data is generated by the first vehicle according to the first trajectory information;
a fourth determining module 1002, configured to determine a target pose of the second vehicle according to the first trajectory information and/or the target map data.
In some embodiments of the present disclosure, as shown in fig. 11, fig. 11 is a schematic diagram according to an eleventh embodiment of the present disclosure, and the vehicle pose determination apparatus 110 includes: a third determining module 1101 and a fourth determining module 1102, where the third determining module 1101 is further configured to:
and receiving the first track information and/or the target map data sent by the first vehicle.
In some embodiments of the present disclosure, the fourth determining module 1102 further includes:
an obtaining submodule 11021 configured to obtain second trajectory information of the second vehicle;
a fourth determination submodule 11022 configured to determine whether the second vehicle passes through the occurrence position area, according to the first trajectory information and the second trajectory information;
and the positioning sub-module 11023 is configured to position the second vehicle according to the second track information and the target map data when the second vehicle passes through the occurrence position area, so as to obtain the target pose.
In some embodiments of the present disclosure, the target map data includes: a target point cloud and/or a target map;
wherein the positioning sub-module 11023 is further configured to:
positioning the second vehicle according to the target point cloud and the second track information to obtain a first pose of the second vehicle; and/or
Positioning the second vehicle according to the target map and the second track information to acquire a second pose of the second vehicle;
and determining the target pose according to the first pose and/or the second pose.
In some embodiments of the present disclosure, the second trajectory information includes: at least one position coordinate of a second vehicle in a world coordinate system;
wherein the positioning sub-module 11023 is further configured to:
identifying point cloud line features and point cloud surface features from the target point cloud, wherein the point cloud line features are used for describing local line information in the target point cloud, and the point cloud surface features are used for describing local area information in the target point cloud;
and determining the first pose according to the position coordinates, the point cloud line characteristics and the point cloud surface characteristics.
In some embodiments of the present disclosure, the positioning sub-module 11023 is further configured to:
determining a first distance between the position coordinates and the point cloud line features and determining a second distance between the position coordinates and the point cloud surface features;
and performing least square processing on the first distance and the second distance to acquire the first pose.
In some embodiments of the present disclosure, the positioning sub-module 11023 is further configured to:
and inputting the target map and the second track information into a visual positioning model together to obtain the second pose output by the visual positioning model.
It is understood that the vehicle pose determination apparatus 110 in fig. 11 of the present embodiment and the vehicle pose determination apparatus 100 in the foregoing embodiment, the third determining module 1101 and the third determining module 1001 in the foregoing embodiment, and the fourth determining module 1102 and the fourth determining module 1002 in the foregoing embodiment may have the same functions and structures.
It should be noted that the foregoing explanation of the method for determining the vehicle pose also applies to the apparatus for determining the vehicle pose of the present embodiment.
In this embodiment, the first track information and/or the target map data are determined, where the first track information describes the track situation of the first vehicle in the occurrence location area of a set event, the set event has occurred in the driving scene of the first vehicle, and the target map data is generated by the first vehicle according to the first track information; the target pose of the second vehicle is then determined according to the first track information and/or the target map data. The second vehicle can thus be positioned based on target map data that is generated by the first vehicle and adapted to the current driving scene of the vehicle, so that the target pose thus determined can effectively meet the vehicle positioning requirement in the current driving scene, and the determination effect of the vehicle pose is effectively improved.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 12 shows a schematic block diagram of an example electronic device to implement a map data generation method, or a vehicle pose determination method of an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 12, the apparatus 1200 includes a computing unit 1201 that can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 1202 or a computer program loaded from a storage unit 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data required for the operation of the device 1200 may also be stored. The computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other by a bus 1204. An input/output (I/O) interface 1205 is also connected to bus 1204.
Various components in the device 1200 are connected to the I/O interface 1205 including: an input unit 1206 such as a keyboard, a mouse, or the like; an output unit 1207 such as various types of displays, speakers, and the like; a storage unit 1208, such as a magnetic disk, optical disk, or the like; and a communication unit 1209 such as a network card, modem, wireless communication transceiver, etc. The communication unit 1209 allows the device 1200 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 1201 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1201 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 1201 executes the respective methods and processes described above, such as the map data generation method, or the determination method of the vehicle pose. For example, in some embodiments, the map data generation method, or the vehicle pose determination method, may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 1208. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1200 via the ROM 1202 and/or the communication unit 1209. When the computer program is loaded into the RAM 1203 and executed by the computing unit 1201, one or more steps of the map data generation method described above, or the determination method of the vehicle pose may be executed. Alternatively, in other embodiments, the computing unit 1201 may be configured by any other suitable means (e.g., by means of firmware) to perform a map data generation method, or a determination method of the vehicle pose.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), the Internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host; it is a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service expansibility in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (23)

1. A map data generation method, performed by a first vehicle, the method comprising:
in response to a set event occurring in a driving scene of the first vehicle, determining an occurrence location area corresponding to the set event;
acquiring first track information of the first vehicle in the occurrence position area; and
and generating target map data corresponding to the occurrence position area according to the first track information.
2. The method of claim 1, further comprising:
transmitting the first trajectory information and/or the target map data to a second vehicle.
3. The method of claim 1, further comprising:
collecting a point cloud to be processed of a driving scene of the first vehicle;
acquiring a reference point cloud corresponding to a driving scene of the first vehicle, wherein the reference point cloud is acquired from the driving scene in advance;
and determining whether the set event occurs in the driving scene of the first vehicle according to the point cloud to be processed and the reference point cloud.
4. The method of claim 3, wherein the determining whether the set event occurred in the driving scene of the first vehicle from the point cloud to be processed and the reference point cloud comprises:
determining a mapping position of the first vehicle, wherein the mapping position represents a relative position of a local point cloud of the first vehicle in the point cloud to be processed;
rasterizing the point cloud to be processed to obtain rasterized point cloud to be processed, and rasterizing the reference point cloud to obtain reference rasterized point cloud;
determining a depth value to be processed corresponding to the rasterized point cloud to be processed, and determining a reference depth value corresponding to the reference rasterized point cloud; and
and determining whether the set event occurs in the driving scene of the first vehicle according to the depth value to be processed, the reference depth value and the relative position.
5. The method of claim 4, wherein the determining whether the set event occurs in the driving scene of the first vehicle according to the depth value to be processed, the reference depth value, and the relative position comprises:
determining at least one target rasterized point cloud from the rasterized point cloud to be processed according to the depth value to be processed and the reference depth value;
clustering the at least one target rasterized point cloud to obtain a point cloud cluster to be processed;
and determining whether the set event occurs in the driving scene of the first vehicle according to the relative position and the point cloud cluster to be processed.
6. The method of claim 5, wherein the determining at least one target rasterized point cloud from the rasterized point cloud to be processed as a function of the depth value to be processed and the reference depth value comprises:
determining a depth difference value between the depth value to be processed and the reference depth value;
and if the depth difference value is smaller than a difference threshold value, determining the rasterized point cloud to be processed corresponding to the depth value to be processed as the target rasterized point cloud.
7. The method of claim 5, wherein the determining whether the set event occurred in the driving scene of the first vehicle according to the relative position and the point cloud cluster to be processed comprises:
acquiring the point cloud density and the cluster size of the point cloud cluster to be processed;
determining a reference position of the point cloud cluster to be processed in the point cloud to be processed, wherein the reference position represents the relative position of the point cloud cluster to be processed in the point cloud to be processed;
determining a location distance between the mapped location and the reference location;
and determining whether the set event occurs in the driving scene of the first vehicle according to the point cloud density, the cluster size and the position distance.
8. The method of claim 7, wherein the determining whether the set event occurred in the driving scene of the first vehicle as a function of the point cloud density, the cluster size, and the location distance comprises:
determining that the set event occurs in the driving scene of the first vehicle if the point cloud density is less than or equal to a density threshold, the cluster size is greater than a size threshold, and the location distance is less than or equal to a distance threshold;
if the point cloud density is greater than the density threshold, or the cluster size is less than or equal to the size threshold, or the location distance is greater than the distance threshold, it is determined that the set event does not occur in the driving scene of the first vehicle.
9. The method of claim 1, the first vehicle comprising: a pose detection sensor, the first trajectory information being detected based on the pose detection sensor;
wherein the generating of the target map data corresponding to the occurrence location area according to the first trajectory information includes:
determining sensing parameter information of the pose detection sensor;
acquiring at least one point data in the occurrence position area and/or an area image of the occurrence position area;
generating a target point cloud corresponding to the occurrence position area according to the first track information, the sensing parameter information and the at least one point data; and/or
Generating a target map corresponding to the occurrence position area according to the first track information, the sensing parameter information and the area image;
the target point cloud and/or the target map are/is used together as the target map data.
10. The method of claim 9, the first vehicle further comprising: point cloud acquisition equipment, wherein the at least one point data is acquired based on the point cloud acquisition equipment;
wherein generating a target point cloud corresponding to the occurrence location area according to the first trajectory information, the sensing parameter information, and the at least one point data comprises:
determining equipment parameter information of the point cloud acquisition equipment;
and generating a target point cloud corresponding to the occurrence position area according to the first track information, the sensing parameter information, the equipment parameter information and the at least one point data.
11. The method of claim 9, the first vehicle further comprising: a camera assembly, wherein the area image is acquired based on the camera assembly;
wherein the generating a target map corresponding to the occurrence location area according to the first trajectory information, the sensing parameter information, and the area image includes:
determining shooting parameter information of the camera assembly;
and generating a target map corresponding to the occurrence position area according to the first track information, the sensing parameter information, the shooting parameter information and the area image.
12. A method of vehicle pose determination performed by a second vehicle, the method comprising:
determining first track information and/or target map data, wherein the first track information describes track conditions of a first vehicle in an occurrence position area of a set event, the set event has occurred in a driving scene of the first vehicle, and the target map data is generated by the first vehicle according to the first track information;
and determining the target pose of the second vehicle according to the first track information and/or the target map data.
13. The method of claim 12, wherein the determining first trajectory information and/or target map data comprises:
and receiving the first track information and/or the target map data sent by the first vehicle.
14. The method of claim 12, wherein the determining the target pose of the second vehicle from the first trajectory information and/or the target map data comprises:
acquiring second track information of the second vehicle;
determining whether the second vehicle passes through the occurrence position area according to the first track information and the second track information;
and if the second vehicle passes through the occurrence position area, positioning the second vehicle according to the second track information and the target map data to obtain the target pose.
15. The method of claim 14, the target map data comprising: a target point cloud and/or a target map;
wherein the positioning the second vehicle according to the second trajectory information and the target map data to obtain the target pose includes:
positioning the second vehicle according to the target point cloud and the second track information to obtain a first pose of the second vehicle; and/or
Positioning the second vehicle according to the target map and the second track information to obtain a second pose of the second vehicle;
and determining the target pose according to the first pose and/or the second pose.
16. The method of claim 15, the second trajectory information comprising: at least one position coordinate of a second vehicle in a world coordinate system;
wherein the positioning the second vehicle according to the target point cloud and the second track information to obtain a first pose of the second vehicle comprises:
identifying point cloud line features and point cloud surface features from the target point cloud, wherein the point cloud line features are used for describing local line information in the target point cloud, and the point cloud surface features are used for describing local area information in the target point cloud;
and determining the first pose according to the position coordinates, the point cloud line characteristics and the point cloud surface characteristics.
17. The method of claim 16, wherein said determining the first pose from the location coordinates, the point cloud line features, and the point cloud surface features comprises:
determining a first distance between the position coordinates and the point cloud line features and determining a second distance between the position coordinates and the point cloud surface features;
and performing least square processing on the first distance and the second distance to acquire the first pose.
18. The method of claim 15, wherein the locating the second vehicle according to the target map and the second trajectory information to obtain a second pose of the second vehicle comprises:
and inputting the target map and the second track information into a visual positioning model together to obtain the second pose output by the visual positioning model.
19. A map data generation apparatus, executed by a first vehicle, the apparatus comprising:
the first determination module is used for responding to a set event occurring in the driving scene of the first vehicle and determining an occurrence position area corresponding to the set event;
the first acquisition module is used for acquiring first track information of the first vehicle in the occurrence position area; and
and the generating module is used for generating target map data corresponding to the occurrence position area according to the first track information.
20. An apparatus for determining a vehicle pose, performed by a second vehicle, the apparatus comprising:
a third determining module, configured to determine first trajectory information and/or target map data, where the first trajectory information describes a trajectory situation of a first vehicle in an occurrence location area of a set event, the set event having occurred in a driving scene of the first vehicle, and the target map data is generated by the first vehicle according to the first trajectory information;
and the fourth determination module is used for determining the target pose of the second vehicle according to the first track information and/or the target map data.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11 or to perform the method of any one of claims 12-18.
22. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-11 or to perform the method of any one of claims 12-18.
23. A computer program product comprising a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11 or carries out the steps of the method according to any one of claims 12 to 18.
CN202211521252.XA 2022-11-30 2022-11-30 Map data generation method and device, electronic equipment and storage medium Pending CN115876210A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211521252.XA CN115876210A (en) 2022-11-30 2022-11-30 Map data generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211521252.XA CN115876210A (en) 2022-11-30 2022-11-30 Map data generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115876210A true CN115876210A (en) 2023-03-31

Family

ID=85764944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211521252.XA Pending CN115876210A (en) 2022-11-30 2022-11-30 Map data generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115876210A (en)

Similar Documents

Publication Publication Date Title
EP3581890B1 (en) Method and device for positioning
CN111998860B (en) Automatic driving positioning data verification method and device, electronic equipment and storage medium
CN112541437A (en) Vehicle positioning method and device, electronic equipment and storage medium
CN113989450A (en) Image processing method, image processing apparatus, electronic device, and medium
CN113706704B (en) Method and equipment for planning route based on high-precision map and automatic driving vehicle
CN114034295A (en) High-precision map generation method, device, electronic device, medium, and program product
CN113298910A (en) Method, apparatus and storage medium for generating traffic sign line map
CN114140759A (en) High-precision map lane line position determining method and device and automatic driving vehicle
CN112509126A (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN114743178A (en) Road edge line generation method, device, equipment and storage medium
CN114299242A (en) Method, device and equipment for processing images in high-precision map and storage medium
CN113887391A (en) Method and device for recognizing road sign and automatic driving vehicle
CN113932796A (en) High-precision map lane line generation method and device and electronic equipment
CN114111813A (en) High-precision map element updating method and device, electronic equipment and storage medium
CN113177980A (en) Target object speed determination method and device for automatic driving and electronic equipment
CN115790621A (en) High-precision map updating method and device and electronic equipment
CN115773759A (en) Indoor positioning method, device and equipment of autonomous mobile robot and storage medium
CN115937449A (en) High-precision map generation method and device, electronic equipment and storage medium
CN116052097A (en) Map element detection method and device, electronic equipment and storage medium
CN115909253A (en) Target detection and model training method, device, equipment and storage medium
CN115876210A (en) Map data generation method and device, electronic equipment and storage medium
CN114495049A (en) Method and device for identifying lane line
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN114419564A (en) Vehicle pose detection method, device, equipment, medium and automatic driving vehicle
CN114708498A (en) Image processing method, image processing apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination