CN115937823A - Method, apparatus, electronic device, and medium for detecting obstacle

Method, apparatus, electronic device, and medium for detecting obstacle

Info

Publication number
CN115937823A
Authority
CN
China
Prior art keywords
dimensional
obstacle
data
lane line
line data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211701597.3A
Other languages
Chinese (zh)
Inventor
尹佳成
陈文洋
王玉斌
常松涛
颜学术
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202211701597.3A
Publication of CN115937823A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a method, an apparatus, an electronic device, and a medium for detecting obstacles, and relates to the technical fields of autonomous driving and intelligent transportation. The implementation scheme is as follows: acquire perception data of an autonomous vehicle, the perception data comprising two-dimensional obstacle data, three-dimensional obstacle data, two-dimensional lane line data, and three-dimensional lane line data, all time-synchronized with one another; determine, based on the three-dimensional obstacle data and the three-dimensional lane line data, a first position coordinate of the obstacle in a world coordinate system and a second position coordinate of the lane line in the world coordinate system; determine the relative positional relationship of the obstacle and the lane line in two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data; and correct the first position coordinate based on the relative positional relationship and the second position coordinate.

Description

Method, apparatus, electronic device, and medium for detecting obstacle
Technical Field
The present disclosure relates to the field of autonomous driving and intelligent transportation technologies, and in particular, to a method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product for detecting an obstacle.
Background
In the field of autonomous driving, the generation of obstacle position information is of great significance for determining the intent of an autonomous vehicle. For cost reasons, detection of obstacles in the surrounding environment is currently performed mainly with monocular vision in order to obtain the three-dimensional position of an obstacle in the real world. However, deep learning models have certain limitations across different data and different environments, which can introduce errors into the estimated three-dimensional position of an obstacle and pose a challenge to the intent determination of the autonomous vehicle. Improving the accuracy of an obstacle's three-dimensional position is therefore especially important for the intent determination and obstacle avoidance of the autonomous vehicle.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been acknowledged in any prior art, unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product for detecting an obstacle.
According to an aspect of the present disclosure, there is provided a method for detecting an obstacle, including: acquiring perception data of an autonomous vehicle, wherein the perception data comprises two-dimensional obstacle data, three-dimensional obstacle data, two-dimensional lane line data and three-dimensional lane line data, and wherein the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data and the three-dimensional lane line data are time-synchronized; determining a first position coordinate of the obstacle in a world coordinate system and a second position coordinate of the lane line in the world coordinate system based on the three-dimensional obstacle data and the three-dimensional lane line data; determining a relative positional relationship of the obstacle and the lane line in a two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data; and correcting the first position coordinates based on the relative position relationship and the second position coordinates.
According to another aspect of the present disclosure, there is provided an apparatus for detecting an obstacle, including: an acquisition module configured to acquire perception data of an autonomous vehicle, wherein the perception data includes two-dimensional obstacle data, three-dimensional obstacle data, two-dimensional lane line data, and three-dimensional lane line data, and wherein the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data are time-synchronized; a first determination module configured to determine a first position coordinate of the obstacle in a world coordinate system and a second position coordinate of the lane line in the world coordinate system based on the three-dimensional obstacle data and the three-dimensional lane line data; a second determination module configured to determine a relative positional relationship of the obstacle and the lane line in the two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data; and a correction module configured to correct the first position coordinate based on the relative position relationship and the second position coordinate.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the above method.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program, when executed by a processor, implements the above method.
According to one or more embodiments of the present disclosure, a method for detecting an obstacle is provided, which can improve the accuracy of recognizing an obstacle by calibrating the position of the obstacle in three-dimensional space based on the relative position of a lane line and the obstacle in two-dimensional space.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments and, together with the description, serve to explain them. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
fig. 2 shows a flow chart of a method for detecting an obstacle according to an embodiment of the present disclosure;
fig. 3 shows a schematic diagram of a method for detecting an obstacle according to an embodiment of the present disclosure;
fig. 4 shows a schematic diagram of a method for detecting an obstacle according to an embodiment of the present disclosure;
fig. 5 shows a schematic diagram of a method for detecting an obstacle according to an embodiment of the present disclosure;
fig. 6 shows a block diagram of an apparatus for detecting an obstacle according to an embodiment of the present disclosure; and
FIG. 7 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", and the like to describe various elements is not intended to limit the positional relationship, the temporal relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing the particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the element may be one or a plurality of. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
In the related art, the three-dimensional position accuracy of obstacles is improved by adopting higher-precision sensing, such as fusing a lidar with a camera: the obstacle position is obtained directly from the lidar, which effectively avoids the inaccurate position detection of purely visual obstacle detection. However, lidar is costly to use and places high demands on computing power, which limits the scenarios in which it can be used.
To solve the above problems, the present disclosure provides a method for detecting an obstacle that calibrates the position of the obstacle in three-dimensional space based on the relative position of the lane line and the obstacle in two-dimensional space. Because lane lines have strong, distinctive features, the perception result of the autonomous vehicle for lane lines is accurate; calibrating the position of the obstacle in three-dimensional space using this accurate lane line recognition result can therefore improve the recognition accuracy of the obstacle.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an example system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes a motor vehicle 110, a server 120, and one or more communication networks 130 coupling the motor vehicle 110 to the server 120.
In embodiments of the present disclosure, motor vehicle 110 may include a computing device and/or be configured to perform a method in accordance with embodiments of the present disclosure.
The server 120 may run one or more services or software applications that enable the method for detecting obstacles. In some embodiments, the server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user of motor vehicle 110 may, in turn, utilize one or more client applications to interact with server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 can also run any of a variety of additional server applications and/or mid-tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some embodiments, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from motor vehicle 110. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of motor vehicle 110.
Network 130 may be any type of network known to those skilled in the art that can support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, the one or more networks 130 may be a satellite communication network, a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (including, for example, Bluetooth and WiFi), and/or any combination of these and other networks.
The system 100 may also include one or more databases 150. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 150 may be used to store information such as audio files and video files. The databases 150 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and communicate with it via a network-based or dedicated connection. The databases 150 may be of different types. In certain embodiments, the database used by the server 120 may be a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of the databases 150 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
Motor vehicle 110 may include sensors 111 for sensing the surrounding environment. The sensors 111 may include one or more of the following: visual cameras, infrared cameras, ultrasonic sensors, millimeter wave radar, and laser radar (LiDAR). Different sensors provide different detection accuracies and ranges. Cameras may be mounted in front of, behind, or elsewhere on the vehicle. A visual camera can capture conditions inside and outside the vehicle in real time and present them to the driver and/or passengers. In addition, by analyzing the images captured by the visual camera, information such as traffic signal indications, intersection conditions, and the running state of other vehicles can be acquired. An infrared camera can capture objects under night vision conditions. Ultrasonic sensors can be arranged around the vehicle to measure the distance between objects outside the vehicle and the vehicle, exploiting characteristics such as the strong directionality of ultrasound. Millimeter wave radar may be installed in front of, behind, or elsewhere on the vehicle to measure the distance of objects from the vehicle using the properties of electromagnetic waves. LiDAR may be mounted in front of, behind, or elsewhere on the vehicle for detecting object edges and shape information, thereby enabling object identification and tracking. Owing to the Doppler effect, radar devices can also measure velocity changes of the vehicle and of moving objects.
Motor vehicle 110 may also include a communication device 112. The communication device 112 may include a satellite positioning module capable of receiving satellite positioning signals (e.g., BeiDou, GPS, GLONASS, and Galileo) from satellites 141 and generating coordinates based on these signals. The communication device 112 may also include modules to communicate with a mobile communication base station 142; the mobile communication network may implement any suitable communication technology, such as GSM/GPRS, CDMA, LTE, or other current or evolving wireless communication technologies (e.g., 5G). The communication device 112 may also have a Vehicle-to-Everything (V2X) module configured to enable, for example, Vehicle-to-Vehicle (V2V) communication with other vehicles 143 and Vehicle-to-Infrastructure (V2I) communication with infrastructure 144. Further, the communication device 112 may also have a module configured to communicate with a user terminal 145 (including but not limited to a smartphone, tablet, or wearable device such as a watch), for example via a wireless local area network using IEEE 802.11 standards or Bluetooth. Motor vehicle 110 may also access server 120 via network 130 using the communication device 112.
Motor vehicle 110 may also include a control device 113. The control device 113 may include a processor, such as a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU), or other special purpose processor, in communication with various types of computer-readable storage devices or media. The control device 113 may include an autopilot system for automatically controlling various actuators in the vehicle. The autopilot system is configured to control the powertrain, steering system, braking system, and the like of the motor vehicle 110 (not shown) via a plurality of actuators, in response to inputs from the plurality of sensors 111 or other input devices, so as to control acceleration, steering, and braking, respectively, with no or only limited human intervention. Part of the processing functions of the control device 113 may be realized by cloud computing. For example, some processing may be performed using an onboard processor while other processing may be performed using computing resources in the cloud. The control device 113 may be configured to perform a method according to the present disclosure. Furthermore, the control device 113 may be implemented as one example of a computing device on the motor vehicle side (client) according to the present disclosure.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 shows a flow chart of a method for detecting an obstacle according to an embodiment of the present disclosure.
As shown in fig. 2, a method 200 for detecting an obstacle includes:
Step S201, acquiring perception data of an autonomous vehicle, wherein the perception data includes two-dimensional obstacle data, three-dimensional obstacle data, two-dimensional lane line data, and three-dimensional lane line data, and wherein these four kinds of data are time-synchronized with one another;
Step S202, determining a first position coordinate of the obstacle in a world coordinate system and a second position coordinate of the lane line in the world coordinate system based on the three-dimensional obstacle data and the three-dimensional lane line data;
Step S203, determining the relative positional relationship between the obstacle and the lane line in two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data; and
Step S204, correcting the first position coordinate based on the relative positional relationship and the second position coordinate.
Thus, step S201 obtains time-synchronized obstacle data and lane line data in both two-dimensional and three-dimensional space, and step S202 converts the three-dimensional obstacle and lane line coordinates into position coordinates in the world coordinate system, so that the world-frame position of the obstacle can subsequently be calibrated based on the lane line coordinates. Steps S203 and S204 then calibrate the position of the obstacle in three-dimensional space based on the relative position of the lane line and the obstacle in two-dimensional space. Because lane lines have strong, distinctive features, the autonomous vehicle's perception of them is comparatively accurate; using these accurate lane line position coordinates to calibrate the obstacle's position in three-dimensional space can effectively improve obstacle recognition accuracy and, in turn, the accuracy of the autonomous vehicle's intent determination.
According to some embodiments, the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data in the perception data are each time-stamped, and the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data are time-synchronized based on the time stamps.
In one example, for both the two-dimensional and the three-dimensional data, the lane line data whose timestamp is closest to that of the obstacle data can be acquired according to the timestamps in the perception data, thereby time-synchronizing the obstacle data and the lane line data.
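As a minimal illustrative sketch of this nearest-timestamp matching (the function and field names below are assumptions made for illustration, not taken from this disclosure):

```python
def sync_lane_lines(obstacle_timestamp: float, lane_line_frames: list) -> dict:
    """Select the lane line frame whose timestamp is closest to the obstacle's.

    Each element of lane_line_frames is assumed to be a dict carrying a
    'timestamp' key (in seconds) alongside its lane line geometry.
    """
    return min(lane_line_frames, key=lambda f: abs(f["timestamp"] - obstacle_timestamp))


# Usage: obstacle data stamped at t = 12.50 s is paired with the frame at t = 12.52 s.
frames = [{"timestamp": 12.40, "lines": []}, {"timestamp": 12.52, "lines": []}]
matched = sync_lane_lines(12.50, frames)
```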
For example, after the obstacle data and the lane line data are time-synchronized according to the timestamp, the positioning data of the autonomous vehicle at the current timestamp may be acquired to obtain the body posture, the speed, and other information of the autonomous vehicle.
According to some embodiments, step S202 comprises: acquiring positioning data of the autonomous vehicle; and determining the first position coordinate of the obstacle in a world coordinate system and the second position coordinate of the lane line in the world coordinate system based on the positioning data, the three-dimensional obstacle data and the three-dimensional lane line data.
It can be understood that the obstacle and lane line positions in the perception data of the autonomous vehicle are relative positions with respect to the vehicle itself. The time-synchronized three-dimensional obstacle data and three-dimensional lane line data can therefore be coordinate-transformed according to the vehicle's own positioning data, converting them from the local coordinate system to the world coordinate system. This facilitates the subsequent calibration calculation, namely calibrating the absolute position coordinates of the obstacle in three-dimensional space in the world coordinate system.
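A minimal sketch of this local-to-world conversion, assuming the positioning data reduces to a planar pose (a heading angle plus a world-frame translation); a real system would use a full 6-DoF pose, and all names here are illustrative:

```python
import numpy as np


def local_to_world(points_local: np.ndarray, yaw: float, position: np.ndarray) -> np.ndarray:
    """Transform ego-frame points (x forward, y left) into the world frame.

    yaw is the vehicle heading in radians; position is the vehicle's (2,)
    world coordinate. Points are rotated by the heading, then translated.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return points_local @ rotation.T + position


# Usage: an obstacle 10 m ahead and 2 m to the left, vehicle at (100, 50) heading 90 degrees.
first_position = local_to_world(np.array([[10.0, 2.0]]), np.pi / 2, np.array([100.0, 50.0]))
# -> [[98., 60.]]: the obstacle's first position coordinate in the world coordinate system.
```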
The position of the obstacle in three-dimensional space can be calibrated in a number of ways based on the relative position of the lane line and the obstacle in two-dimensional space. A description will be given below of different ways of calibration by means of a number of embodiments.
According to some embodiments, step S203 comprises: determining, based on the two-dimensional obstacle data and the two-dimensional lane line data, a closest intersection point of the obstacle with the lane line in the lateral direction in two-dimensional space and a first longitudinal distance between the obstacle and the autonomous vehicle; and determining a first lateral distance between the obstacle and the closest intersection point in the lateral direction. Step S204 includes: obtaining a second longitudinal distance between the autonomous vehicle and the obstacle in three-dimensional space; determining a second lateral distance between the obstacle and the closest intersection point with the lane line in three-dimensional space based on the first longitudinal distance, the first lateral distance, and the second longitudinal distance; and correcting the first position coordinate based on the second longitudinal distance, the second lateral distance, and the second position coordinate.
In the present disclosure, the longitudinal direction is a traveling direction of the autonomous vehicle, and correspondingly, the lateral direction is a direction perpendicular to the traveling direction of the autonomous vehicle. The above embodiments will be specifically described below with reference to the accompanying drawings.
Fig. 3 shows a schematic diagram of a method for detecting an obstacle according to an embodiment of the present disclosure. As shown in fig. 3, the left side of fig. 3 is a two-dimensional image determined based on two-dimensional obstacle data and two-dimensional lane line data, 301 being a two-dimensional lane line, 304 being a two-dimensional obstacle; the right side is a three-dimensional image determined based on the three-dimensional obstacle data and the three-dimensional lane line data, 302 is a three-dimensional lane line, and 305 is a three-dimensional obstacle. 303 is an autonomous vehicle.
In the two-dimensional image in fig. 3, point b represents the position of the obstacle. The intersection point of the obstacle with the lane line that is closest in the lateral direction can be determined as a, and the first lateral distance between the obstacle and this closest intersection can be determined as ab. A first longitudinal distance between the obstacle and the autonomous vehicle can be determined from the perception of the obstacle by the autonomous vehicle (not shown in the figure) and is represented by bc (point c not shown in the figure), thereby establishing the relative position of the lane line and the obstacle in two-dimensional space.
In the three-dimensional image in fig. 3, point B represents the position of the obstacle. A second longitudinal distance between the autonomous vehicle and the obstacle in three-dimensional space can be determined from the perception of the obstacle by the autonomous vehicle 303 and is denoted BC (point C not shown in the figure). Once the length of BC is determined, the longitudinal position of the obstacle in three-dimensional space is fixed, so the closest intersection point between the obstacle and the three-dimensional lane line can be determined as A; it will be understood that the intersection point A and the obstacle B share the same longitudinal coordinate, whose value follows from BC. With ab, bc, and BC known, the second lateral distance AB between the obstacle and the closest intersection with the lane line in three-dimensional space can be determined from the equal-scale transformation of the two-dimensional image into the three-dimensional image, i.e., AB = ab × BC / bc. Since the position of intersection A is known, the position coordinates of obstacle B can then be determined from the coordinates of A and the magnitude of the second lateral distance AB, thereby correcting the first position coordinates. In this way, the position of the obstacle in three-dimensional space is calibrated based on the relative position of the lane line and the obstacle in two-dimensional space, improving the recognition accuracy of the three-dimensional obstacle.
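A minimal sketch of this first correction, assuming the equal-scale transformation reduces to the simple ratio AB = ab × BC / bc as described above; the function name, the (lateral, longitudinal) coordinate convention, and the side argument are illustrative assumptions, not taken from this disclosure:

```python
def correct_with_nearest_intersection(
    ab_2d: float,            # first lateral distance ab in the image (pixels)
    bc_2d: float,            # first longitudinal distance bc in the image (pixels)
    bc_3d: float,            # second longitudinal distance BC (meters, from perception)
    intersection_a: tuple,   # world (lateral, longitudinal) coordinates of intersection A
    side: float = 1.0,       # +1.0 or -1.0 depending on which side of A the obstacle lies
) -> tuple:
    """Return the corrected world position of obstacle B.

    AB = ab * BC / bc scales the image-space lateral offset into meters;
    B shares A's longitudinal coordinate and is offset laterally by AB.
    """
    ab_3d = ab_2d * bc_3d / bc_2d
    a_lat, a_lon = intersection_a
    return (a_lat + side * ab_3d, a_lon)


# Usage: ab = 40 px, bc = 200 px, BC = 25 m, intersection A at world (3.5, 25.0).
b_corrected = correct_with_nearest_intersection(40.0, 200.0, 25.0, (3.5, 25.0))
# -> (8.5, 25.0), since AB = 40 * 25 / 200 = 5 m.
```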
In the following embodiments, the position of the obstacle in the three-dimensional space may also be corrected based on the relative positions of the two intersections of the obstacle with the lane lines.
According to some embodiments, step S203 comprises: determining two intersection points of the obstacle with the lane line in the lateral direction in two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data; and determining a third lateral distance between the obstacle and the closest of the two intersection points in the lateral direction and a fourth lateral distance between the two intersection points. Step S204 includes: obtaining a third longitudinal distance between the autonomous vehicle and the obstacle in three-dimensional space; determining a fifth lateral distance between the two intersection points in three-dimensional space based on the third longitudinal distance and the three-dimensional lane line data; and correcting the first position coordinate based on the third lateral distance, the fourth lateral distance, the fifth lateral distance, and the second position coordinate.
As shown in fig. 4, the left side of fig. 4 is a two-dimensional image determined based on two-dimensional obstacle data and two-dimensional lane line data, 401 being a two-dimensional lane line; the right side is a three-dimensional image determined based on the three-dimensional obstacle data and the three-dimensional lane line data, 402 is a three-dimensional lane line, and 403 is an autonomous vehicle.
In the two-dimensional image in fig. 4, point c1 indicates the position of the obstacle, and the two intersections of the obstacle with the lane lines in the lateral direction can be determined as a1 and b1, respectively. Further, the third lateral distance c1a1 between the obstacle and the closer of the two intersections, a1, and the fourth lateral distance a1b1 between the two intersections can be determined, thereby establishing the relative position of the lane lines and the obstacle in two-dimensional space.
In the three-dimensional image in fig. 4, point C1 represents the position of the obstacle. A third longitudinal distance between the autonomous vehicle and the obstacle in three-dimensional space can be determined from the perception of the obstacle by the autonomous vehicle 403. Once this longitudinal distance is determined, the longitudinal position of the obstacle in three-dimensional space is fixed, so the two intersection points between the obstacle and the three-dimensional lane lines can be determined as A1 and B1, and the fifth lateral distance A1B1 between them can be determined. With c1a1, a1b1, and A1B1 known, the lateral distance C1A1 between the obstacle and the closest intersection A1 in three-dimensional space can be determined from the equal-scale conversion of the two-dimensional image into the three-dimensional image, i.e., C1A1 = c1a1 × A1B1 / a1b1. This yields the abscissa of the obstacle C1 in three-dimensional space, thereby correcting the first position coordinate. In this way, the position of the obstacle in three-dimensional space is calibrated based on the relative position of the lane line and the obstacle in two-dimensional space, improving the recognition accuracy of the three-dimensional obstacle.
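A sketch of this second correction under the same proportionality assumption (C1A1 = c1a1 × A1B1 / a1b1); here a1b1 plays the role of the lane width in the image and A1B1 the real lane width, and all names remain illustrative:

```python
def correct_with_two_intersections(
    c1a1_2d: float,           # third lateral distance: obstacle to nearest intersection (pixels)
    a1b1_2d: float,           # fourth lateral distance: between the two intersections (pixels)
    a1b1_3d: float,           # fifth lateral distance A1B1 (meters, from 3D lane line data)
    intersection_a1: tuple,   # world (lateral, longitudinal) coordinates of A1
    side: float = 1.0,
) -> tuple:
    """Corrected world position of obstacle C1: the image ratio c1a1/a1b1 is preserved in 3D."""
    c1a1_3d = c1a1_2d * a1b1_3d / a1b1_2d
    a_lat, a_lon = intersection_a1
    return (a_lat + side * c1a1_3d, a_lon)


# Usage: c1a1 = 30 px, a1b1 = 120 px, A1B1 = 3.6 m (a typical lane width), A1 at (1.8, 30.0).
c1_corrected = correct_with_two_intersections(30.0, 120.0, 3.6, (1.8, 30.0))
# -> (2.7, 30.0), since C1A1 = 30 * 3.6 / 120 = 0.9 m.
```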
In the following embodiments, the position of the obstacle in the three-dimensional space may also be corrected based on the relative position of the obstacle and the one-side lane line with the lateral position of the autonomous vehicle as the center line.
According to some embodiments, step S203 comprises: determining, based on the two-dimensional obstacle data and the two-dimensional lane line data, a closest intersection point of the obstacle with the lane line in a lateral direction in the two-dimensional space and a sixth lateral distance between the obstacle and the autonomous vehicle; determining a seventh lateral distance between the obstacle and the closest intersection point in a lateral direction; step S204 includes: acquiring an eighth lateral distance between the autonomous vehicle and the obstacle in the three-dimensional space; and correcting the first position coordinate based on the sixth lateral distance, the seventh lateral distance, the eighth lateral distance, and the second position coordinate.
As shown in fig. 5, the left side of fig. 5 is a two-dimensional image determined based on two-dimensional obstacle data and two-dimensional lane line data, 501 being a two-dimensional lane line, 502 being an autonomous vehicle; the right side is a three-dimensional image determined based on the three-dimensional obstacle data and the three-dimensional lane line data, 503 is a three-dimensional lane line, and 504 is an autonomous vehicle.
In the two-dimensional image in fig. 5, point p3 represents the position of the obstacle. The intersection point of the obstacle with the lane line that is closest in the lateral direction can be determined as a3, and the seventh lateral distance p3a3 between the obstacle and this closest intersection can be determined. Lines c2 and c3 are center lines determined from the lateral position of the autonomous vehicle 502, and a sixth lateral distance p3c3 between the autonomous vehicle and the obstacle can be determined from the vehicle's perception of the obstacle, thereby establishing the relative position of the lane line and the obstacle in two-dimensional space.
In the three-dimensional image in fig. 5, point P3 represents the position of the obstacle, and an eighth lateral distance P3C3 between the autonomous vehicle and the obstacle in three-dimensional space can be determined from the perception of the obstacle by the autonomous vehicle 504. With p3a3, p3c3, and P3C3 known, the lateral distance P3A3 between the obstacle and the closest intersection A3 with the lane line in three-dimensional space can be determined from the equal-scale transformation of the two-dimensional image into the three-dimensional image, i.e., P3A3 = p3a3 × P3C3 / p3c3. This yields the abscissa of the obstacle P3 in three-dimensional space, thereby correcting the first position coordinate. In this way, the position of the obstacle in three-dimensional space is calibrated based on the relative position of the lane line and the obstacle in two-dimensional space, improving the recognition accuracy of the three-dimensional obstacle.
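A sketch of this third correction, assuming the ratio between the obstacle's lateral offsets to the nearest intersection and to the ego centerline is preserved between the image and the world (P3A3 = p3a3 × P3C3 / p3c3); all names are again illustrative:

```python
def correct_with_centerline(
    p3a3_2d: float,           # seventh lateral distance: obstacle to nearest intersection (pixels)
    p3c3_2d: float,           # sixth lateral distance: obstacle to the ego centerline (pixels)
    p3c3_3d: float,           # eighth lateral distance P3C3 (meters, from perception)
    intersection_a3: tuple,   # world (lateral, longitudinal) coordinates of A3
    side: float = 1.0,
) -> tuple:
    """Corrected world position of obstacle P3 via the image-to-world lateral ratio."""
    p3a3_3d = p3a3_2d * p3c3_3d / p3c3_2d
    a_lat, a_lon = intersection_a3
    return (a_lat + side * p3a3_3d, a_lon)


# Usage: p3a3 = 25 px, p3c3 = 100 px, P3C3 = 4.0 m, A3 at (3.5, 20.0).
p3_corrected = correct_with_centerline(25.0, 100.0, 4.0, (3.5, 20.0))
# -> (4.5, 20.0), since P3A3 = 25 * 4.0 / 100 = 1.0 m.
```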
According to another aspect of the present disclosure, an apparatus for detecting an obstacle is provided. As shown in fig. 6, the apparatus 600 for detecting an obstacle includes: an obtaining module 601 configured to obtain perception data of an autonomous vehicle, wherein the perception data includes two-dimensional obstacle data, three-dimensional obstacle data, two-dimensional lane line data, and three-dimensional lane line data, and wherein the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data are time-synchronized; a first determining module 602 configured to determine a first position coordinate of the obstacle in a world coordinate system and a second position coordinate of the lane line in the world coordinate system based on the three-dimensional obstacle data and the three-dimensional lane line data; a second determining module 603 configured to determine a relative positional relationship of the obstacle and the lane line in the two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data; and a correction module 604 configured to correct the first position coordinate based on the relative position relationship and the second position coordinate.
Thus, the time-synchronized obstacle data and lane line data in two-dimensional and three-dimensional space are obtained by the obtaining module 601, and the three-dimensional obstacle and lane line coordinates are converted by the first determining module 602 into position coordinates in the world coordinate system, so that the world-frame position of the obstacle can subsequently be calibrated based on the lane line coordinates. The second determining module 603 and the correction module 604 calibrate the position of the obstacle in three-dimensional space based on the relative position of the lane line and the obstacle in two-dimensional space. Because lane lines have strong, distinctive features, the autonomous vehicle's perception of them is comparatively accurate; using these accurate lane line position coordinates to calibrate the obstacle's position in three-dimensional space can effectively improve obstacle recognition accuracy and, in turn, the accuracy of the autonomous vehicle's intent determination.
According to some embodiments, the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data in the perception data are each time-stamped, and the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data are time-synchronized based on the time stamps.
In one example, for two-dimensional data and three-dimensional data, the obtaining module 601 may obtain lane line data closest to the time stamp of the obstacle data according to the time stamp in the perception data, so as to achieve time synchronization of the obstacle data and the lane line data.
For example, after the obtaining module 601 time-synchronizes the obstacle data and the lane line data according to the timestamp, the positioning data of the autonomous vehicle under the current timestamp may be obtained to obtain the body posture, the speed, and other information of the autonomous vehicle.
According to some embodiments, the first determining module 602 comprises: a first acquisition unit configured to acquire positioning data of the autonomous vehicle; and a first determination unit configured to determine the first position coordinates of the obstacle in a world coordinate system and the second position coordinates of the lane line in the world coordinate system based on the positioning data, the three-dimensional obstacle data, and the three-dimensional lane line data.
It can be understood that the obstacle and lane line positions in the perception data of the autonomous vehicle are relative positions with respect to the vehicle itself. The first determining module 602 may therefore perform a coordinate transformation on the time-synchronized three-dimensional obstacle data and three-dimensional lane line data according to the vehicle's own positioning data, converting them from the local coordinate system to the world coordinate system. This facilitates the subsequent calibration calculation, namely calibrating the absolute position coordinates of the obstacle in three-dimensional space in the world coordinate system.
The apparatus 600 for detecting an obstacle may calibrate a position of the obstacle in a three-dimensional space based on a relative position of a lane line and the obstacle in the two-dimensional space in various ways. A description will be given below of different ways of calibrating the apparatus 600 for detecting an obstacle by means of various embodiments.
According to some embodiments, the second determining module 603 comprises: a second determination unit configured to determine, based on the two-dimensional obstacle data and the two-dimensional lane line data, a closest intersection point of the obstacle with the lane line in the lateral direction in the two-dimensional space and a first longitudinal distance between the obstacle and the autonomous vehicle; a third determination unit configured to determine a first lateral distance between the obstacle and the closest intersection point in the lateral direction; the correction module 604 includes: a second acquisition unit configured to acquire a second longitudinal distance between the autonomous vehicle and the obstacle in three-dimensional space; a fourth determination unit configured to determine a second lateral distance between the obstacle and the closest intersection point with the lane line in three-dimensional space based on the first longitudinal distance, the first lateral distance, and the second longitudinal distance; and a first correction unit configured to correct the first position coordinate based on the second longitudinal distance, the second lateral distance, and the second position coordinate.
In the present disclosure, the longitudinal direction is a traveling direction of the autonomous vehicle, and correspondingly, the lateral direction is a direction perpendicular to the traveling direction of the autonomous vehicle. Therefore, the position of the obstacle in the three-dimensional space is calibrated based on the relative position of the lane line and the obstacle in the two-dimensional space, and the identification precision of the three-dimensional obstacle is improved.
In the following embodiments, the position of the obstacle in the three-dimensional space may also be corrected based on the relative positions of the two intersections of the obstacle with the lane lines.
According to some embodiments, the second determining module 603 comprises: a fifth determination unit configured to determine two intersection points of the obstacle with the lane line in a lateral direction in the two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data; a sixth determination unit configured to determine a third lateral distance between the obstacle and a closest one of the two intersection points in the lateral direction and a fourth lateral distance between the two intersection points; the correction module 604 includes: a third acquisition unit configured to acquire a third longitudinal distance of the autonomous vehicle from the obstacle in a three-dimensional space; a seventh determining unit configured to determine a fifth lateral distance between the two intersection points in the three-dimensional space based on the third longitudinal distance and the three-dimensional lane line data; and a second correction unit configured to correct the first position coordinate based on the third lateral distance, the fourth lateral distance, the fifth lateral distance, and the second position coordinate. Therefore, the position of the barrier in the three-dimensional space is calibrated based on the relative position of the lane line and the barrier in the two-dimensional space, and the identification precision of the three-dimensional barrier is improved.
In the following embodiments, the position of the obstacle in the three-dimensional space may also be corrected based on the relative position of the obstacle to the one-side lane line with the lateral position of the autonomous vehicle as the center line.
According to some embodiments, the second determining module 603 comprises: an eighth determining unit configured to determine, based on the two-dimensional obstacle data and the two-dimensional lane line data, a closest intersection point of the obstacle with the lane line in the lateral direction in the two-dimensional space and a sixth lateral distance between the obstacle and the autonomous vehicle; a ninth determining unit configured to determine a seventh lateral distance between the obstacle and the closest intersection point in the lateral direction; the correction module 604 includes: a fourth acquisition unit configured to acquire an eighth lateral distance of the autonomous vehicle from the obstacle in the three-dimensional space; and a third correction unit configured to correct the first position coordinate based on the sixth lateral distance, the seventh lateral distance, the eighth lateral distance, and the second position coordinate. Therefore, the position of the barrier in the three-dimensional space is calibrated based on the relative position of the lane line and the barrier in the two-dimensional space, and the identification precision of the three-dimensional barrier is improved.
According to another aspect of the present disclosure, there is also provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method for detecting an obstacle.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to execute a method for detecting an obstacle.
According to another aspect of the disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method for detecting an obstacle.
As shown in fig. 7, the electronic device 700 includes a computing unit 701, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. The RAM 703 may also store various programs and data required for the operation of the electronic device 700. The computing unit 701, the ROM 702, and the RAM 703 are connected to one another by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
A number of components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the electronic device 700; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 708 may include, but is not limited to, magnetic or optical disks. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 701 may be any of a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 701 performs the respective methods and processes described above, such as the method for detecting an obstacle. For example, in some embodiments, the method for detecting an obstacle may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the method for detecting an obstacle described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the method for detecting an obstacle.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, the various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (15)

1. A method for detecting an obstacle, comprising:
acquiring perception data of an autonomous vehicle, wherein the perception data comprises two-dimensional obstacle data, three-dimensional obstacle data, two-dimensional lane line data and three-dimensional lane line data, and wherein the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data and the three-dimensional lane line data are time-synchronized;
determining a first position coordinate of an obstacle in a world coordinate system and a second position coordinate of a lane line in the world coordinate system based on the three-dimensional obstacle data and the three-dimensional lane line data;
determining a relative positional relationship of the obstacle and the lane line in a two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data; and
correcting the first position coordinate based on the relative positional relationship and the second position coordinate.
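Illustrative note (not part of the claims): the following is a minimal Python sketch of the claim 1 data flow, written under simplifying assumptions. The claim prescribes only the data and the order of steps; the container name PerceptionFrame, the nearest-row matching, and the fixed metres_per_pixel scale are hypothetical stand-ins, not the patented implementation.

```python
# Hypothetical sketch of the claim 1 flow; names and geometry are assumptions.
from dataclasses import dataclass
from typing import List, Tuple

Point2D = Tuple[float, float]          # image coordinates (u, v) in pixels
Point3D = Tuple[float, float, float]   # world coordinates (x, y, z) in meters

@dataclass
class PerceptionFrame:
    """Time-synchronized perception data, as claim 1 requires."""
    timestamp: float
    obstacle_2d: Point2D            # two-dimensional obstacle data
    obstacle_3d: Point3D            # first position coordinate (x lateral, y longitudinal)
    lane_line_2d: List[Point2D]     # two-dimensional lane line data
    lane_line_3d: List[Point3D]     # second position coordinates

def lateral_offset_2d(frame: PerceptionFrame) -> float:
    """Relative positional relationship in 2D: signed pixel gap between the
    obstacle and the lane-line point on the same image row."""
    u_obs, v_obs = frame.obstacle_2d
    nearest = min(frame.lane_line_2d, key=lambda p: abs(p[1] - v_obs))
    return u_obs - nearest[0]

def correct_first_position(frame: PerceptionFrame, metres_per_pixel: float) -> Point3D:
    """Correct the 3D lateral coordinate by anchoring it to the lane line
    (second position coordinate) and re-applying the 2D offset."""
    x, y, z = frame.obstacle_3d
    nearest_3d = min(frame.lane_line_3d, key=lambda p: abs(p[1] - y))
    corrected_x = nearest_3d[0] + lateral_offset_2d(frame) * metres_per_pixel
    return (corrected_x, y, z)
```

In this sketch only the lateral coordinate is re-anchored; the dependent claims describe several ways of combining lateral and longitudinal distances for the same purpose.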
2. The method of claim 1, wherein the determining, based on the three-dimensional obstacle data and the three-dimensional lane line data, a first position coordinate of an obstacle in a world coordinate system and a second position coordinate of a lane line in the world coordinate system comprises:
acquiring positioning data of the autonomous vehicle; and
determining the first position coordinate of the obstacle in a world coordinate system and the second position coordinate of the lane line in the world coordinate system based on the positioning data, the three-dimensional obstacle data, and the three-dimensional lane line data.
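Illustrative note (not part of the claims): claim 2 derives the world-frame coordinates from positioning data without fixing a formula. A planar rigid-body transform using the vehicle's pose (x, y, yaw) is one conventional realization; the sketch below assumes exactly that.

```python
import math

def vehicle_to_world(pt_vehicle, vehicle_pose):
    """Rotate and translate a vehicle-frame point into the world frame.
    pt_vehicle: (x, y) with x forward, y left, in meters (assumed convention).
    vehicle_pose: (x, y, yaw) of the vehicle in the world frame."""
    xv, yv = pt_vehicle
    xw, yw, yaw = vehicle_pose
    c, s = math.cos(yaw), math.sin(yaw)
    return (xw + c * xv - s * yv, yw + s * xv + c * yv)

# An obstacle 20 m ahead while the vehicle sits at the origin heading along +x:
print(vehicle_to_world((20.0, 0.0), (0.0, 0.0, 0.0)))  # -> (20.0, 0.0)
```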
3. The method according to claim 1 or 2, wherein the determining, based on the two-dimensional obstacle data and the two-dimensional lane line data, a relative positional relationship of the obstacle to the lane line in a two-dimensional space comprises:
determining, based on the two-dimensional obstacle data and the two-dimensional lane line data, a closest intersection point of the obstacle with the lane line in a lateral direction in a two-dimensional space and a first longitudinal distance between the obstacle and the autonomous vehicle;
determining a first lateral distance in the lateral direction between the obstacle and the closest intersection point,
wherein the correcting the first position coordinate based on the relative positional relationship and the second position coordinate comprises:
obtaining a second longitudinal distance in three-dimensional space between the autonomous vehicle and the obstacle;
determining a second lateral distance between the obstacle and the closest intersection point with the lane line in three-dimensional space based on the first longitudinal distance, the first lateral distance, and the second longitudinal distance; and
correcting the first position coordinate based on the second longitudinal distance, the second lateral distance, and the second position coordinate.
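Illustrative note (not part of the claims): claim 3 states which quantities determine the second lateral distance but not how. One plausible reading, assumed below, is a similar-triangles proportion in which lateral extent scales with longitudinal distance.

```python
def second_lateral_distance(first_longitudinal_2d: float,
                            first_lateral_2d: float,
                            second_longitudinal_3d: float) -> float:
    """Assumed proportional relation: 2D inputs in pixels, the 3D input
    and the returned second lateral distance in meters."""
    if first_longitudinal_2d <= 0:
        raise ValueError("2D longitudinal distance must be positive")
    return first_lateral_2d * second_longitudinal_3d / first_longitudinal_2d

# A 40 px lateral gap at 200 px of image "depth", obstacle 30 m ahead in 3D:
print(second_lateral_distance(200.0, 40.0, 30.0))  # -> 6.0 (meters)
```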
4. The method according to claim 1 or 2, wherein the determining, based on the two-dimensional obstacle data and the two-dimensional lane line data, a relative positional relationship of the obstacle to the lane line in a two-dimensional space comprises:
determining two intersection points of the obstacle with the lane line in the lateral direction in the two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data;
determining a third lateral distance between the obstacle and a closest one of the two intersection points in the lateral direction and a fourth lateral distance between the two intersection points,
wherein the correcting the first position coordinate based on the relative positional relationship and the second position coordinate comprises:
obtaining a third longitudinal distance in three-dimensional space between the autonomous vehicle and the obstacle;
determining a fifth lateral distance between the two intersection points in three-dimensional space based on the third longitudinal distance and the three-dimensional lane line data; and
correcting the first position coordinate based on the third lateral distance, the fourth lateral distance, the fifth lateral distance, and the second position coordinate.
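Illustrative note (not part of the claims): claim 4 reads naturally as ratio preservation, i.e. the obstacle occupies the same fraction of the lane width in three-dimensional space as it does in the image. The sketch below encodes that assumed reading only.

```python
def lateral_offset_from_nearest_line(third_lateral_2d: float,
                                     fourth_lateral_2d: float,
                                     fifth_lateral_3d: float) -> float:
    """2D inputs in pixels, 3D input and result in meters (assumed units)."""
    if fourth_lateral_2d <= 0:
        raise ValueError("2D distance between the two intersection points must be positive")
    ratio = third_lateral_2d / fourth_lateral_2d   # fraction of the lane crossed in 2D
    return ratio * fifth_lateral_3d                # same fraction of the 3D lane width

# Obstacle 90 px from the nearest line, lines 360 px apart, lane 3.6 m wide in 3D:
print(lateral_offset_from_nearest_line(90.0, 360.0, 3.6))  # -> 0.9 (meters)
```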
5. The method according to claim 1 or 2, wherein the determining, based on the two-dimensional obstacle data and the two-dimensional lane line data, a relative positional relationship of the obstacle to the lane line in a two-dimensional space comprises:
determining, based on the two-dimensional obstacle data and the two-dimensional lane line data, a closest intersection point of the obstacle with the lane line in a lateral direction in the two-dimensional space and a sixth lateral distance between the obstacle and the autonomous vehicle;
determining a seventh lateral distance between the obstacle and the closest intersection point in the lateral direction,
wherein the correcting the first position coordinate based on the relative positional relationship and the second position coordinate comprises:
acquiring an eighth lateral distance between the autonomous vehicle and the obstacle in the three-dimensional space; and
correcting the first position coordinate based on the sixth lateral distance, the seventh lateral distance, the eighth lateral distance, and the second position coordinate.
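Illustrative note (not part of the claims): claim 5 works purely with lateral distances. The exact relation is again unstated; scaling the 2D obstacle-to-lane gap by the 3D/2D ratio of the vehicle-to-obstacle lateral distances is one assumed reading.

```python
def obstacle_to_lane_lateral_3d(sixth_lateral_2d: float,
                                seventh_lateral_2d: float,
                                eighth_lateral_3d: float) -> float:
    """Assumed scaling: 2D distances in pixels, 3D distance and result in meters."""
    if sixth_lateral_2d <= 0:
        raise ValueError("2D vehicle-to-obstacle distance must be positive")
    return seventh_lateral_2d * eighth_lateral_3d / sixth_lateral_2d

# Obstacle 50 px from the lane line, 250 px from the vehicle, 10 m away laterally in 3D:
print(obstacle_to_lane_lateral_3d(250.0, 50.0, 10.0))  # -> 2.0 (meters)
```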
6. The method according to any one of claims 1-5, wherein the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data in the perception data are each time-stamped, and the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data are time-synchronized based on the time-stamps.
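Illustrative note (not part of the claims): claim 6 requires only that the four data streams be synchronized by their timestamps. Nearest-neighbor matching within a tolerance is one common realization; the 50 ms default below is an arbitrary assumption.

```python
from bisect import bisect_left

def match_by_timestamp(t: float, stamped_items: list, tol: float = 0.05):
    """stamped_items: (timestamp, data) pairs sorted by timestamp.
    Returns the data nearest in time to t, or None if more than tol seconds away."""
    stamps = [s for s, _ in stamped_items]
    i = bisect_left(stamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(stamped_items)]
    if not candidates:
        return None
    best = min(candidates, key=lambda j: abs(stamps[j] - t))
    return stamped_items[best][1] if abs(stamps[best] - t) <= tol else None

# Pick the 3D obstacle frame closest to a 2D frame stamped at t = 0.12 s:
frames_3d = [(0.00, "f0"), (0.10, "f1"), (0.20, "f2")]
print(match_by_timestamp(0.12, frames_3d))  # -> "f1"
```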
7. An apparatus for detecting an obstacle, comprising:
an acquisition module configured to acquire perception data of an autonomous vehicle, wherein the perception data includes two-dimensional obstacle data, three-dimensional obstacle data, two-dimensional lane line data, and three-dimensional lane line data, and wherein the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data are time-synchronized;
a first determination module configured to determine a first position coordinate of an obstacle in a world coordinate system and a second position coordinate of a lane line in the world coordinate system based on the three-dimensional obstacle data and the three-dimensional lane line data;
a second determination module configured to determine a relative positional relationship of the obstacle and the lane line in a two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data; and
a correction module configured to correct the first position coordinate based on the relative positional relationship and the second position coordinate.
8. The apparatus of claim 7, wherein the first determination module comprises:
a first acquisition unit configured to acquire positioning data of the autonomous vehicle; and
a first determination unit configured to determine the first position coordinates of the obstacle in a world coordinate system and the second position coordinates of the lane line in the world coordinate system based on the positioning data, the three-dimensional obstacle data, and the three-dimensional lane line data.
9. The apparatus of claim 7 or 8, wherein the second determination module comprises:
a second determination unit configured to determine, based on the two-dimensional obstacle data and the two-dimensional lane line data, a closest intersection point of the obstacle with the lane line in a lateral direction in the two-dimensional space and a first longitudinal distance between the obstacle and the autonomous vehicle;
a third determination unit configured to determine a first lateral distance between the obstacle and the closest intersection point in a lateral direction,
the correction module includes:
a second acquisition unit configured to acquire a second longitudinal distance of the autonomous vehicle from the obstacle in a three-dimensional space;
a fourth determination unit configured to determine a second lateral distance between the obstacle and the closest intersection point with the lane line in the three-dimensional space based on the first longitudinal distance, the first lateral distance, and the second longitudinal distance; and
a first correction unit configured to correct the first position coordinate based on the second longitudinal distance, the second lateral distance, and the second position coordinate.
10. The apparatus of claim 7 or 8, wherein the second determination module comprises:
a fifth determination unit configured to determine two intersection points of the obstacle with the lane line in a lateral direction in the two-dimensional space based on the two-dimensional obstacle data and the two-dimensional lane line data;
a sixth determination unit configured to determine a third lateral distance between the obstacle and a closest one of the two intersection points and a fourth lateral distance between the two intersection points in the lateral direction,
the correction module includes:
a third acquisition unit configured to acquire a third longitudinal distance of the autonomous vehicle from the obstacle in a three-dimensional space;
a seventh determining unit configured to determine a fifth lateral distance between the two intersection points in the three-dimensional space based on the third longitudinal distance and the three-dimensional lane line data; and
a second correction unit configured to correct the first position coordinate based on the third lateral distance, the fourth lateral distance, the fifth lateral distance, and the second position coordinate.
11. The apparatus of claim 7 or 8, wherein the second determination module comprises:
an eighth determining unit configured to determine, based on the two-dimensional obstacle data and the two-dimensional lane line data, a closest intersection point of the obstacle with the lane line in the lateral direction in the two-dimensional space and a sixth lateral distance between the obstacle and the autonomous vehicle;
a ninth determination unit configured to determine a seventh lateral distance between the obstacle and the closest intersection point in the lateral direction,
the correction module includes:
a fourth acquisition unit configured to acquire an eighth lateral distance of the autonomous vehicle from the obstacle in the three-dimensional space; and
a third correction unit configured to correct the first position coordinate based on the sixth lateral distance, the seventh lateral distance, the eighth lateral distance, and the second position coordinate.
12. The apparatus according to any one of claims 7-11, wherein the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data in the perception data are each time-stamped, and the two-dimensional obstacle data, the three-dimensional obstacle data, the two-dimensional lane line data, and the three-dimensional lane line data are time-synchronized based on the time-stamps.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-6.
Priority Applications (1)

Application Number: CN202211701597.3A; Priority Date: 2022-12-28; Filing Date: 2022-12-28; Title: Method, apparatus, electronic device, and medium for detecting obstacle; Status: Withdrawn

Publications (1)

Publication Number: CN115937823A; Publication Date: 2023-04-07

Family ID: 86649000

Country Status (1)

Country: CN; Publication: CN115937823A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 2023-04-07)