CN114212108B - Automatic driving method, device, vehicle, storage medium and product - Google Patents

Info

Publication number
CN114212108B
Authority
CN
China
Prior art keywords
vehicle
traffic light
determining
control signal
lane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111640239.1A
Other languages
Chinese (zh)
Other versions
CN114212108A (en)
Inventor
章桢
于宁
潘安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd filed Critical Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority to CN202111640239.1A priority Critical patent/CN114212108B/en
Publication of CN114212108A publication Critical patent/CN114212108A/en
Application granted granted Critical
Publication of CN114212108B publication Critical patent/CN114212108B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W60/0016 Planning or execution of driving tasks specially adapted for safety of the vehicle or its occupants
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2555/00 Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
    • B60W2555/60 Traffic rules, e.g. speed limits or right of way
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00 Input parameters relating to data
    • B60W2556/40 High definition maps

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides an automatic driving method, which relates to the field of artificial intelligence, and more particularly to the fields of automatic driving, intelligent transportation, and vehicle-road coordination. The method includes the following steps: acquiring driving information of a host vehicle, wherein the driving information includes a planned driving trajectory and a vehicle body orientation of the host vehicle; determining a lane controlled by a first traffic light located in front of the host vehicle in response to acquiring a control signal of the first traffic light; and determining, based on the angle between the vehicle body orientation and the extending direction of the lane, whether to determine a driving strategy of the host vehicle based on the control signal, in response to determining, based on the planned driving trajectory, that the host vehicle is located on the lane.

Description

Automatic driving method, device, vehicle, storage medium and product
Technical Field
The present disclosure relates to the field of automatic driving technology, and in particular to an automatic driving method and apparatus, an electronic device, an autonomous vehicle, a computer-readable storage medium, and a computer program product.
Background
An autonomous vehicle, also called a wheeled mobile robot, is an intelligent automobile that relies mainly on in-vehicle unmanned-driving technology, centered on a computer system, to give the vehicle environment perception, path planning, and automatic vehicle control. The unmanned-driving technology includes artificial intelligence and high-precision maps.
Artificial intelligence is the discipline that studies how to make a computer mimic certain human mental processes and intelligent behaviors (e.g., learning, reasoning, thinking, and planning), and it covers both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
A high-precision map, also called a high-definition map, is a map used by autonomous vehicles. A high-precision map contains accurate vehicle position information and rich road element data, and can help the vehicle anticipate complex road-surface information such as gradient, curvature, and heading, so that potential risks can be better avoided.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides an autopilot method, apparatus, electronic device, computer readable storage medium and computer program product.
According to an aspect of the present disclosure, there is provided an automatic driving method, including: acquiring driving information of a host vehicle, wherein the driving information includes a planned driving trajectory and a vehicle body orientation of the host vehicle; determining a lane controlled by a first traffic light located in front of the host vehicle in response to acquiring a control signal of the first traffic light; and determining, based on the angle between the vehicle body orientation and the extending direction of the lane, whether to determine a driving strategy of the host vehicle based on the control signal, in response to determining, based on the planned driving trajectory, that the host vehicle is located on the lane.
According to another aspect of the present disclosure, there is provided an automatic driving apparatus, including: an information acquisition unit configured to acquire driving information of a host vehicle, wherein the driving information includes a planned driving trajectory and a vehicle body orientation of the host vehicle; a first determination unit configured to determine a lane controlled by a first traffic light located in front of the host vehicle in response to acquiring a control signal of the first traffic light; and a second determination unit configured to determine whether to determine a driving strategy of the host vehicle based on the control signal, based on the angle between the vehicle body orientation and the extending direction of the lane, in response to determining, based on the planned driving trajectory, that the host vehicle is located on the lane.
According to another aspect of the present disclosure, there is provided an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the methods described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of the above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program when executed by a processor implements the method of any of the above.
According to another aspect of the present disclosure, there is provided an autonomous vehicle including the above-described electronic device.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of an autopilot method in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates an example diagram of preset display durations of the colors of a traffic light according to an embodiment of the present disclosure;
FIG. 4 illustrates a scene diagram of a traffic intersection according to embodiments of the present disclosure;
FIGS. 5A and 5B illustrate scene diagrams of host vehicle travel according to embodiments of the present disclosure;
FIG. 6 illustrates a block diagram of an automatic driving apparatus in accordance with an embodiment of the present disclosure;
Fig. 7 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes a motor vehicle 110, a server 120, and one or more communication networks 130 coupling the motor vehicle 110 to the server 120.
In an embodiment of the present disclosure, motor vehicle 110 may include a computing device in accordance with an embodiment of the present disclosure and/or be configured to perform a method in accordance with an embodiment of the present disclosure.
Server 120 may run one or more services or software applications that enable the method of determining the driving strategy of the host vehicle. In some embodiments, server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user of motor vehicle 110 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from motor vehicle 110. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of motor vehicle 110.
Network 130 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, the one or more networks 130 may be a satellite communications network, a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (including, for example, Bluetooth, WiFi), and/or any combination of these with other networks.
The system 100 may also include one or more databases 150. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 150 may be used to store information such as audio files and video files. The databases 150 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and in communication with the server 120 via a network-based or dedicated connection. The databases 150 may be of different types. In some embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 150 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
Motor vehicle 110 may include a sensor 111 for sensing the surrounding environment. The sensors 111 may include one or more of the following: visual cameras, infrared cameras, ultrasonic sensors, millimeter wave radar, and laser radar (LiDAR). Different sensors may provide different detection accuracy and range. The camera may be mounted in front of, behind or other locations on the vehicle. The vision cameras can capture the conditions inside and outside the vehicle in real time and present them to the driver and/or passengers. In addition, by analyzing the captured images of the visual camera, information such as traffic light indication, intersection situation, other vehicle running state, etc. can be acquired. The infrared camera can capture objects under night vision. The ultrasonic sensor can be arranged around the vehicle and is used for measuring the distance between an object outside the vehicle and the vehicle by utilizing the characteristics of strong ultrasonic directivity and the like. The millimeter wave radar may be installed in front of, behind, or other locations of the vehicle for measuring the distance of an object outside the vehicle from the vehicle using the characteristics of electromagnetic waves. Lidar may be mounted in front of, behind, or other locations on the vehicle for detecting object edges, shape information for object identification and tracking. The radar apparatus may also measure a change in the speed of the vehicle and the moving object due to the doppler effect.
Motor vehicle 110 may also include a communication device 112. The communication device 112 may include a satellite positioning module capable of receiving satellite positioning signals (e.g., BeiDou, GPS, GLONASS, and Galileo) from satellites 141 and generating coordinates based on these signals. The communication device 112 may also include a module for communicating with the mobile communication base station 142; the mobile communication network may implement any suitable communication technology, such as GSM/GPRS, CDMA, LTE, or other current or evolving wireless communication technologies (e.g., 5G). The communication device 112 may also have a Vehicle-to-Everything (V2X) module configured to enable, for example, Vehicle-to-Vehicle (V2V) communication with other vehicles 143 and Vehicle-to-Infrastructure (V2I) communication with infrastructure 144. In addition, the communication device 112 may also have a module configured to communicate with a user terminal 145 (including but not limited to a smartphone, tablet computer, or wearable device such as a watch), for example using a wireless local area network based on the IEEE 802.11 standard or Bluetooth. With the communication device 112, the motor vehicle 110 can also access the server 120 via the network 130.
Motor vehicle 110 may also include a control device 113. The control device 113 may include a processor, such as a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU), or other special purpose processor, etc., in communication with various types of computer readable storage devices or mediums. The control device 113 may include an autopilot system for automatically controlling various actuators in the vehicle. The autopilot system is configured to control a powertrain, steering system, braking system, etc. of a motor vehicle 110 (not shown) via a plurality of actuators in response to inputs from a plurality of sensors 111 or other input devices to control acceleration, steering, and braking, respectively, without human intervention or limited human intervention. Part of the processing functions of the control device 113 may be implemented by cloud computing. For example, some of the processing may be performed using an onboard processor while other processing may be performed using cloud computing resources. The control device 113 may be configured to perform a method according to the present disclosure. Furthermore, the control means 113 may be implemented as one example of a computing device on the motor vehicle side (client) according to the present disclosure.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 is a flowchart illustrating an automatic driving method according to an exemplary embodiment of the present disclosure.
As shown in fig. 2, the method includes: step S201, acquiring driving information of a host vehicle, wherein the driving information includes a planned driving trajectory and a vehicle body orientation of the host vehicle; step S202, determining a lane controlled by a first traffic light located in front of the host vehicle in response to acquiring a control signal of the first traffic light; and step S203, in response to determining, based on the planned driving trajectory, that the host vehicle is located on the lane, determining, based on the angle between the vehicle body orientation and the extending direction of the lane, whether to determine a driving strategy of the host vehicle based on the control signal.
In recent years, with the development of technology, wide-angle optical sensors have been increasingly applied in autonomous vehicles. However, when collecting information at a traffic intersection, a wide-angle optical sensor may sample several traffic lights at the same time. In order to use the sampled traffic light information correctly, the individual traffic lights need to be distinguished. The present disclosure can determine which traffic light governs the lane in which the host vehicle is located when multiple traffic lights are sampled, so that a driving strategy can be determined for the host vehicle more accurately.
According to some embodiments, the determining whether to determine a driving strategy of the host vehicle based on the control signal includes: determining not to determine the driving strategy of the host vehicle based on the control signal in response to the angle being greater than a threshold.
According to this embodiment of the present disclosure, when multiple traffic lights are sampled, the host vehicle is prevented from driving according to the indication of the wrong traffic light.
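As a minimal illustration of this check, the sketch below computes the angle between the vehicle body orientation and the lane's extending direction and decides whether the first traffic light's control signal should be used. It is not taken from the patent; the function names and the 30-degree default threshold are assumptions chosen for the example.

```python
import math

def body_lane_angle_deg(body_heading_rad: float, lane_heading_rad: float) -> float:
    """Smallest angle, in degrees, between the vehicle body orientation and the lane direction."""
    diff = abs(body_heading_rad - lane_heading_rad) % (2 * math.pi)
    if diff > math.pi:
        diff = 2 * math.pi - diff
    return math.degrees(diff)

def use_traffic_light_signal(body_heading_rad: float,
                             lane_heading_rad: float,
                             angle_threshold_deg: float = 30.0) -> bool:
    """Return True if the driving strategy may be based on this light's control signal.

    If the angle exceeds the threshold, the host vehicle is treated as no longer
    governed by this traffic light (e.g., it is already turning out of the lane)."""
    return body_lane_angle_deg(body_heading_rad, lane_heading_rad) <= angle_threshold_deg

# Example: the body orientation is 10 degrees off the lane direction, so the signal is used.
assert use_traffic_light_signal(math.radians(10), math.radians(0))
```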
According to some embodiments, further comprising: in response to determining that a second traffic light is arranged in a traffic intersection where the first traffic light is located, determining a control logic relationship between the first traffic light and the second traffic light based on a topological structure between a lane controlled by the first traffic light and a lane controlled by the second traffic light; determining a control signal of the second traffic light based on the control logic relationship and the control signal of the first traffic light; and determining a driving strategy of the host vehicle based on the determined control signal of the second traffic light.
In the process of formulating the driving strategy of the host vehicle, surrounding environment information needs to be acquired so as to avoid accidents. For example, in the course of making a left turn, it is necessary to ensure that all traffic lights other than the left-turn traffic light are red, so as to avoid collision with vehicles from other directions.
It will be appreciated that for cost savings, autonomous vehicles typically incorporate an optical sensor for sampling traffic light information ahead of the direction of travel. For traffic lights that cannot be sampled directly, inferences can be made based on the positional relationship between the traffic lights.
For example, in the high-definition map, the front traffic light and the rear traffic light are opposed to each other across the traffic intersection. If the front traffic light is green, the opposite rear traffic light is also green.
For example, the host vehicle takes the sampled traffic light and the lane it controls as a reference and distinguishes the other lanes according to the high-precision map. Based on the angle of each other lane relative to the reference lane, lanes whose angle lies within a preset range (for example, less than 30 degrees or greater than 150 degrees) are set as parallel lanes, and lanes outside that range are set as perpendicular lanes. Traffic light information for parallel and perpendicular lanes is then inferred from the following control logic relationship: the traffic light information of a parallel lane is the same as that of the reference lane; and the traffic light information of a perpendicular lane is opposite to that of the reference lane.
In one embodiment, an opposite lane whose angle to the lane at the host vehicle's current position is less than 30 degrees is set as a parallel lane. When the traffic light of the lane at the host vehicle's current position is red, the traffic light of the opposite lane is also red. It can be understood that the same reasoning applies to yellow, red, and green lights and is not repeated here.
In one embodiment, a lateral lane whose angle to the lane at the host vehicle's current position is greater than 30 degrees is set as a perpendicular lane. When the traffic light of the lane at the host vehicle's current position is red, the traffic light of the lateral lane is green. It can be understood that the same reasoning applies to yellow, red, and green lights and is not repeated here.
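A sketch of the classification and inference just described, under the thresholds stated above (angles below 30 degrees or above 150 degrees count as parallel); the function names are illustrative and the handling of yellow or fault states is deliberately left out:

```python
import math

def lane_angle_deg(ref_heading_rad: float, other_heading_rad: float) -> float:
    """Angle between two lane directions, folded into [0, 180] degrees."""
    diff = abs(ref_heading_rad - other_heading_rad) % (2 * math.pi)
    if diff > math.pi:
        diff = 2 * math.pi - diff
    return math.degrees(diff)

def classify_lane(ref_heading_rad: float, other_heading_rad: float) -> str:
    """'parallel' if the angle is below 30 or above 150 degrees, otherwise 'perpendicular'."""
    angle = lane_angle_deg(ref_heading_rad, other_heading_rad)
    return "parallel" if (angle < 30.0 or angle > 150.0) else "perpendicular"

def infer_signal(ref_signal: str, lane_class: str) -> str:
    """Infer the other lane's signal from the reference lane's signal:
    parallel lanes share the reference signal; perpendicular lanes get the opposite one."""
    if lane_class == "parallel":
        return ref_signal
    return {"red": "green", "green": "red"}.get(ref_signal, "unknown")

# Example: the reference lane shows red; a crossing lane at about 90 degrees is inferred green.
assert infer_signal("red", classify_lane(0.0, math.radians(90))) == "green"
```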
In one embodiment, the host vehicle is at a left-turn-protected intersection. A left-turn-protected intersection means that when the left-turn traffic light is green, the remaining traffic lights are red, so that the host vehicle can pass through the intersection without obstruction by other vehicles. When the host vehicle samples a green left-turn light at a left-turn-protected intersection, it can be inferred that the remaining traffic lights are red.
In one embodiment, the host vehicle predicts the travel track of an obstacle vehicle. When the travel track of the obstacle vehicle partially or completely overlaps the planned driving trajectory of the host vehicle, the traffic light controlling the obstacle vehicle is inferred in the manner described above.
In one embodiment, when a traffic light is flashing yellow or black (i.e., off), the traffic light is in a fault state and the corresponding traffic intersection cannot be used normally; it is then inferred that the other traffic lights are also flashing yellow or black.
In one embodiment, when the obstacle vehicle is confirmed to be on the lane corresponding to the host vehicle's traffic light, it can be confirmed without inference that the obstacle vehicle and the host vehicle are controlled by the same traffic light, saving the computing resources that inference would require.
In one embodiment, the traffic light information controlling an obstacle vehicle is determined in the following order: determining whether the obstacle vehicle is located on the lane corresponding to the host vehicle's traffic light; determining whether the travel track of the obstacle vehicle partially or completely overlaps the planned driving trajectory of the host vehicle; determining whether the host vehicle is located on the left-turn lane of a left-turn-protected intersection whose left-turn light is green; and determining whether the other lanes are parallel or perpendicular lanes. A sketch of this ordering is given below.
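The sketch below only illustrates that ordering as a short decision cascade; the boolean inputs stand in for the checks described above and are not an API defined by the patent.

```python
def obstacle_light_source(on_host_lane: bool,
                          tracks_overlap: bool,
                          protected_left_turn_green: bool) -> str:
    """Return, in the priority order described above, how the traffic light
    controlling an obstacle vehicle is obtained."""
    if on_host_lane:
        return "same light as the host vehicle (no inference needed)"
    if tracks_overlap:
        return "infer from the overlap with the host vehicle's planned trajectory"
    if protected_left_turn_green:
        return "all other lights inferred red (protected left-turn intersection)"
    return "infer via the parallel/perpendicular lane classification"

# Example: an obstacle on the host vehicle's own lane short-circuits every later check.
assert obstacle_light_source(True, True, False).startswith("same light")
```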
According to some embodiments, the determining the control signal of the second traffic light includes: determining a first duration for which the control signal of the first traffic light will continue; and determining, based on the first duration and the control logic relationship, a second duration for which the control signal of the second traffic light will continue.
Taking the remaining duration into account when formulating the driving strategy of the host vehicle makes the strategy safer. For example, if the remaining duration is short, the traffic light is about to change from green to red. In that case the vehicle needs to decelerate in advance, avoiding a traffic accident caused by braking too late.
The high-precision map of the host vehicle stores a preset duration for which the traffic light displays each color, i.e., the maximum duration for which the traffic light keeps each color. Fig. 3 shows a schematic diagram of the preset durations of each color stored by the host vehicle. As shown in fig. 3, the blank portion is the green light duration 301, the slashed portion is the yellow light duration 302, and the black portion is the red light duration 303.
In one embodiment, the host vehicle continuously samples traffic light information while driving, and the traffic light changes. For example, during sampling, when a traffic light changes from red to green, the time at which the light change occurred is recorded and a timer is started. The host vehicle then calculates the remaining duration of the green light by subtracting the timed duration from the green light duration 301. Further, the colors and remaining durations of the other traffic lights can be inferred through the topological structure and the control logic relationship among the traffic lights. For example, for two traffic lights facing each other across a traffic intersection, the control logic relationship determines that their colors and remaining durations are equal.
In one embodiment, no light change occurs during the host vehicle's continuous sampling. For example, during sampling the traffic light is always green; timing then starts from the sampling start time, and the remaining duration of the green light is calculated by subtracting the timed duration from the green light duration 301. Likewise, the remaining durations of the other traffic lights can be inferred through the control logic relationship described above and are not repeated here.
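A minimal sketch of this bookkeeping, assuming the preset green light duration 301 comes from the high-precision map and all timestamps are in seconds; the names are illustrative:

```python
def remaining_green_s(green_duration_s: float, timer_start_s: float, now_s: float) -> float:
    """Remaining green time = preset green light duration minus the time already timed.

    timer_start_s is the moment the red-to-green change was observed; if no light
    change was observed, the sampling start time is used instead, as described above."""
    elapsed_s = now_s - timer_start_s
    return max(0.0, green_duration_s - elapsed_s)

# Example: a 27 s preset green duration with 10 s already timed leaves about 17 s.
print(remaining_green_s(27.0, timer_start_s=100.0, now_s=110.0))
```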
It will be appreciated that the host vehicle's high-precision map may be offline and unable to download the latest data from the server, so that the stored green light duration 301, yellow light duration 302, and red light duration 303 may be inconsistent with the actual situation.
In one embodiment, the traffic light changes at least twice while the host vehicle continuously samples the traffic light signal. For example, the traffic light changes from red to green and then from green to yellow. In this case the actual green light duration can be measured, and the stored green light duration 301 can be corrected, for example by Kalman filtering, to obtain a more accurate green light duration.
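One way such a correction could be realized is a simple scalar Kalman update that blends the stored green light duration with each directly observed green interval. This is only a sketch under assumed noise variances, not the specific filter used by the patent:

```python
def kalman_update_duration(est_s: float, est_var: float,
                           observed_s: float, obs_var: float) -> tuple:
    """One scalar Kalman update of the stored green-light duration.

    est_s / est_var:       current estimate (e.g., map value 301) and its variance.
    observed_s / obs_var:  a directly measured green interval (the light changed twice)
                           and the assumed measurement noise variance."""
    gain = est_var / (est_var + obs_var)
    new_est = est_s + gain * (observed_s - est_s)
    new_var = (1.0 - gain) * est_var
    return new_est, new_var

# Example: the map says 30 s; one observation of 26 s with equal confidence gives about 28 s.
print(kalman_update_duration(30.0, 4.0, 26.0, 4.0))
```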
According to some embodiments, the vehicle is able to directly obtain the remaining duration of the traffic light display for each color. Some traffic lights are equipped with a timing unit that performs a calculation of the remaining duration that the traffic light displays each color, i.e., the time that each color will still last. And, the timing unit can send information of the remaining time period to the vehicle. For example, a V2X platform may be used to achieve the effect that the vehicle directly obtains the remaining duration of the display of each color of the traffic light, but is not limited thereto, and details thereof will not be repeated herein.
According to some embodiments, further comprising: acquiring the current position of a first obstacle vehicle; and in response to determining that the first obstacle vehicle is located on a lane controlled by the second traffic light, correcting a predicted result of the trained motion model on the travel track of the first obstacle vehicle based on the determined control signal of the second traffic light and the travel information of the first obstacle vehicle to determine a predicted travel track of the first obstacle vehicle.
In this disclosure, an obstacle vehicle may be understood as other participating vehicles in road traffic.
Existing motion models for predicting a vehicle's travel track are mostly end-to-end models: based on a number of input variables, the prediction is obtained through a black-box-like process, and the prediction may not be sufficiently direct or accurate for the host vehicle. The present disclosure makes further adjustments on top of the motion model: based on kinematic formulas, it predicts whether the vehicle can cross, or stop before, the stop line, and corrects the prediction of the motion model based on this result.
Illustratively, in the present disclosure, the prediction results are constrained using the factor of acceleration, enabling the prediction results to better fit the host vehicle.
According to some embodiments, the determining the predicted travel track of the first obstacle vehicle comprises: responsive to determining that the control signal of the second traffic light is a red light, and responsive to the prediction of the trained motion model indicating that the first obstacle vehicle cannot stop before the corresponding stop line, obtaining a current speed of the first obstacle vehicle and a range from the corresponding stop line; determining an acceleration value required for stopping the first obstacle vehicle before the corresponding stop line; and modifying the prediction result in response to determining that the acceleration value is within a preset threshold range, the modified prediction result indicating that the first obstacle vehicle can stop before the corresponding stop line.
This avoids the host vehicle misjudging the obstacle vehicle as an offending vehicle and taking risk-avoidance measures such as emergency braking, thereby improving the comfort of automatic driving.
Illustratively, the obstacle vehicle is traveling straight and the straight-ahead traffic light is red. For an obstacle vehicle that the motion model predicts to be offending (e.g., a vehicle that may run the red light), the deceleration needed to stop the obstacle vehicle before the stop line is calculated from the obstacle vehicle's speed and position at the current time. If the calculated deceleration is, for example, -2 m/s², which is within the normal deceleration of -3 m/s², the deceleration is achievable, and the obstacle vehicle is corrected to a normal vehicle (i.e., one that can stop before the stop line of the red light). As one example, the preset threshold range for the acceleration value may be determined according to existing industry standards for vehicle deceleration.
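Using the standard constant-deceleration relation v² = 2·|a|·d, the correction in this example can be sketched as follows; the -3 m/s² limit is the figure quoted above, while everything else is an illustrative assumption:

```python
def required_deceleration(speed_mps: float, dist_to_stop_line_m: float) -> float:
    """Constant deceleration (negative, m/s^2) needed to stop exactly at the stop line."""
    if dist_to_stop_line_m <= 0.0:
        return float("-inf")  # already at or past the line; cannot stop before it
    return -(speed_mps ** 2) / (2.0 * dist_to_stop_line_m)

def correct_to_normal_vehicle(speed_mps: float,
                              dist_to_stop_line_m: float,
                              normal_decel_mps2: float = -3.0) -> bool:
    """True if the needed deceleration is within the normal range, in which case the
    motion model's 'cannot stop' prediction is corrected to 'can stop before the line'."""
    return required_deceleration(speed_mps, dist_to_stop_line_m) >= normal_decel_mps2

# Example: 10 m/s at 25 m from the line needs -2 m/s^2, within -3 m/s^2, so it is corrected.
assert correct_to_normal_vehicle(10.0, 25.0)
```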
According to some embodiments, the determining the predicted travel track of the first obstacle vehicle comprises: responsive to determining that the control signal of the second traffic light is a green light and that the second duration for which the green light will remain is less than a first threshold, and responsive to the prediction of the trained motion model indicating that the first obstacle vehicle cannot cross the corresponding stop line, obtaining a current speed of the first obstacle vehicle and a range from the corresponding stop line; determining an acceleration value required for the first obstacle vehicle to cross the corresponding stop line within the second duration; and modifying the prediction result in response to determining that the acceleration value is within a preset threshold range, the modified prediction result indicating that the first obstacle vehicle can cross the corresponding stop line.
This avoids the host vehicle misjudging that the obstacle vehicle will stop before the stop line and therefore taking no emergency measures, thereby improving the safety of automatic driving.
Illustratively, the obstacle vehicle is traveling straight and the straight-ahead traffic light is green. For an obstacle vehicle that the motion model predicts to be normal, the acceleration required for the obstacle vehicle to cross the stop line is calculated from its speed and position at the current time. If the calculated acceleration is, for example, 2 m/s², within the normal acceleration of 3 m/s², the obstacle vehicle is corrected to an offending vehicle (i.e., one that will cross the stop line), taking into account that it may accelerate. As one example, the first threshold may be set to 5 seconds.
Illustratively, the obstacle vehicle is turning left, the left-turn traffic light is green, and the green light will remain for only a short time (e.g., 5 seconds). In this case, the prediction is corrected using acceleration by considering whether the obstacle vehicle can pass the stop line before the green light ends by accelerating. The acceleration needed for the obstacle vehicle to cross the stop line is calculated from its speed and position at the current time. If the calculated acceleration is within the obstacle vehicle's acceleration capability, it is judged that the obstacle vehicle will accelerate and cross the stop line before the green light ends.
In another embodiment, the obstacle vehicle is turning left, the left-turn traffic light is green, and the green light will remain for a longer time. In this case, even if the obstacle vehicle reduces its travel speed, it can still cross the stop line before the green light ends. It will be appreciated that low-speed travel of the obstacle vehicle is more favorable to driving safety during a turn; therefore, when the remaining green duration is sufficient to ensure that the vehicle crosses the stop line, the travel speed tends to be reduced to improve safety. The maximum deceleration of the obstacle vehicle is calculated on the premise that it can still cross the stop line. If the calculated maximum deceleration is greater than the obstacle vehicle's current deceleration in the real situation, the predicted deceleration of the obstacle vehicle is corrected to the calculated maximum deceleration. If the calculated maximum deceleration is smaller than the obstacle vehicle's current deceleration in the real situation, the deceleration is not corrected. A kinematic sketch of these crossing checks is given after the following remarks.
The above method is not limited to left turns; it also applies to right turns.
It can be understood that the case in which the obstacle vehicle is traveling straight and the straight-ahead traffic light is yellow is handled like the case in which the straight-ahead light is green. Similarly, when the obstacle vehicle is turning (for example, at a left-turn or right-turn light) and the turning traffic light is red, it is handled like the case in which the obstacle vehicle is traveling straight and the straight-ahead light is red. In addition, when the obstacle vehicle is turning and the turning traffic light is yellow, it is handled like the case in which the obstacle vehicle turns left on a green light with little remaining duration, and the description is not repeated here.
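Both crossing checks above reduce to the constant-acceleration relation d = v·t + ½·a·t². Solving for a gives the acceleration that makes the vehicle reach the stop line exactly when the green light ends; a positive value is compared against the vehicle's acceleration capability, and a negative value is the maximum deceleration that still allows crossing in time. The sketch below illustrates this under assumed names and a 3 m/s² capability limit:

```python
def accel_to_cross(speed_mps: float, dist_to_line_m: float, remaining_green_s: float) -> float:
    """Constant acceleration (m/s^2) that makes the vehicle reach the stop line exactly
    when the green light ends, from d = v*t + 0.5*a*t^2 solved for a.

    Positive: the vehicle must speed up by at least this much.
    Negative: the vehicle may slow down by at most |a| and still cross in time
    (valid while the vehicle keeps moving, i.e. v + a*t >= 0)."""
    t = remaining_green_s
    if t <= 0.0:
        return float("inf")
    return 2.0 * (dist_to_line_m - speed_mps * t) / (t ** 2)

def will_cross_before_green_ends(speed_mps: float,
                                 dist_to_line_m: float,
                                 remaining_green_s: float,
                                 max_accel_mps2: float = 3.0) -> bool:
    """Correct the prediction to 'crosses the stop line' when the needed acceleration
    is within the assumed capability range."""
    return accel_to_cross(speed_mps, dist_to_line_m, remaining_green_s) <= max_accel_mps2

# Example: 8 m/s, 50 m from the line, 5 s of green left -> needs 0.8 m/s^2, so it crosses.
print(accel_to_cross(8.0, 50.0, 5.0), will_cross_before_green_ends(8.0, 50.0, 5.0))
```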
According to some embodiments, the method further comprises: determining a driving strategy of the host vehicle based on the predicted travel track of the first obstacle vehicle. Considering the various possible outcomes of the obstacle vehicle's travel makes the driving strategy of the host vehicle more accurate.
Fig. 4 illustrates a scene diagram of a traffic intersection 400 according to an embodiment of the present disclosure.
As shown in fig. 4, the traffic intersection 400 includes a first traffic light 401, a second traffic light 402, and a third traffic light 403. It will be appreciated that the host vehicle may obtain topology information of each traffic light in the traffic intersection 400 through a high-precision map, for example, the first traffic light 401 and the third traffic light 403 are opposite to each other across the traffic intersection 400, and the second traffic light 402 is located on the right side of the first traffic light 401.
Figs. 5A and 5B illustrate scene diagrams of host vehicle travel according to embodiments of the present disclosure.
As shown in fig. 5A, the host vehicle 501 is making a left turn. The host-vehicle 501 obtains information through a high-precision map, including: traffic lights 503 and corresponding lanes 504; traffic lights 502 and corresponding lanes 505.
As shown in fig. 5A, the host vehicle 501 is located on the extension of the lane 505 before turning left. The angle between the host vehicle 501 and the lane 505 is smaller than that of the lane 504, and thus it is determined that the host vehicle 501 is controlled by the traffic light 502. For example, it may be determined that the host-vehicle 501 is controlled by the traffic-light 502 when the angle between the host-vehicle 501 and the lane 505 is less than a threshold of 30 degrees or 45 degrees. It is understood that other angular thresholds are also contemplated and are not limited herein.
As shown in fig. 5B, the host vehicle 501 is still located on the extension of the lane 505 in the left turn. The angle between the host vehicle 501 and the lane 504 is smaller than that of the lane 505, and thus it is determined that the host vehicle 501 is not controlled by the traffic light 502 but by the traffic light 503. For example, it may be determined that the host vehicle 501 is controlled by the traffic light 503 when the angle between the host vehicle 501 and the lane 504 is smaller than a threshold of 30 degrees or 45 degrees. It is understood that other angular thresholds are also contemplated and are not limited herein.
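The comparison in Figs. 5A and 5B can be sketched as picking the candidate lane whose direction is closest to the vehicle body orientation, subject to the 30- or 45-degree threshold mentioned above; the identifiers below are illustrative only:

```python
import math
from typing import Dict, Optional

def fold_angle_deg(a_rad: float, b_rad: float) -> float:
    """Smallest angle, in degrees, between two headings."""
    d = abs(a_rad - b_rad) % (2 * math.pi)
    return math.degrees(min(d, 2 * math.pi - d))

def governing_light(body_heading_rad: float,
                    candidate_lanes: Dict[str, float],
                    threshold_deg: float = 45.0) -> Optional[str]:
    """Among candidate lanes {light_id: lane_heading_rad}, pick the traffic light whose
    lane direction is closest to the body orientation, provided the angle is within the
    threshold; otherwise return None (no sampled light governs the host vehicle)."""
    light_id, angle = min(
        ((lid, fold_angle_deg(body_heading_rad, hdg)) for lid, hdg in candidate_lanes.items()),
        key=lambda item: item[1],
    )
    return light_id if angle <= threshold_deg else None

# Example mirroring Fig. 5B: mid-turn, the body orientation is closer to lane 504
# (controlled by light 503) than to lane 505 (controlled by light 502).
print(governing_light(math.radians(80), {"502": math.radians(0), "503": math.radians(90)}))
```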
According to an aspect of the present disclosure, there is provided an automatic driving apparatus 600, including: an information acquisition unit 601 configured to acquire driving information of a host vehicle, wherein the driving information includes a planned driving trajectory and a vehicle body orientation of the host vehicle; a first determination unit 602 configured to determine a lane controlled by a first traffic light located in front of the host vehicle in response to acquiring a control signal of the first traffic light; and a second determination unit 603 configured to determine whether to determine a driving strategy of the host vehicle based on the control signal, based on the angle between the vehicle body orientation and the extending direction of the lane, in response to determining, based on the planned driving trajectory, that the host vehicle is located on the lane.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, there is also provided an autonomous vehicle including the above-described electronic device.
According to another aspect of the disclosure, there is further provided an edge computing device. Optionally, the edge computing device may further include a communication component, and the electronic device may be integrated with the communication component or provided separately from it. The electronic device may acquire data from roadside sensing devices (such as roadside cameras), for example pictures and videos, perform image/video processing and data computation, and then transmit the processing and computation results to the cloud control platform via the communication component.
Optionally, the edge computing device may also be a roadside computing unit (Road Side Computing Unit, RSCU). Optionally, the electronic device may itself have sensing-data acquisition and communication functions, for example an AI camera; in that case the electronic device may directly perform image/video processing and data computation based on the acquired sensing data and then transmit the processing and computation results to the cloud control platform.
Optionally, the cloud control platform performs processing at the cloud end to perform image video processing and data calculation, and the cloud control platform may also be referred to as a vehicle-road collaborative management platform, a V2X platform, a cloud computing platform, a central system, a cloud server, and the like.
Referring to fig. 7, a block diagram of an electronic device 700 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the electronic device 700, the input unit 706 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. Storage unit 708 may include, but is not limited to, magnetic disks, optical disks. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices through computer networks, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, 802.11 devices, wiFi devices, wiMax devices, cellular communication devices, and/or the like.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, such as an automatic driving method. For example, in some embodiments, the autopilot method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the autopilot method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the autopilot method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, the various elements of the embodiments or examples may be combined in various ways. It should be understood that, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (12)

1. An automatic driving method, comprising:
acquiring travel information of a host vehicle, wherein the travel information comprises a planned travel trajectory and a vehicle body orientation of the host vehicle;
in response to acquiring a control signal of a first traffic light located in front of the host vehicle, determining a lane controlled by the first traffic light; and
in response to determining, based on the planned travel trajectory, that the host vehicle is located on the lane, determining, based on an included angle between the vehicle body orientation and an extending direction of the lane, whether to determine a driving strategy of the host vehicle based on the control signal.
2. The method of claim 1, wherein the determining whether to determine a driving strategy of the host vehicle based on the control signal comprises:
in response to the included angle being greater than a threshold, determining not to determine the driving strategy of the host vehicle based on the control signal.
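As an illustrative, non-claimed Python sketch of the angle test recited in claims 1 and 2: the heading representation (degrees), the helper names, and the 30-degree threshold below are assumptions made for illustration only; the claims specify neither units nor a threshold value.

ANGLE_THRESHOLD_DEG = 30.0  # assumed value; the claims only recite "a threshold"

def use_traffic_light_signal(body_heading_deg: float, lane_direction_deg: float) -> bool:
    """Decide whether the first traffic light's control signal should be used
    when determining the host vehicle's driving strategy (claims 1 and 2)."""
    # Included angle between the vehicle body orientation and the lane's
    # extending direction, folded into [0, 180] degrees.
    diff = abs(body_heading_deg - lane_direction_deg) % 360.0
    included_angle = min(diff, 360.0 - diff)
    # Claim 2: when the included angle exceeds the threshold, the vehicle is
    # not actually travelling along this lane, so the signal is not used.
    return included_angle <= ANGLE_THRESHOLD_DEG

# Example: a vehicle cutting across the lane at roughly 55 degrees would ignore
# the light controlling that lane.
print(use_traffic_light_signal(body_heading_deg=55.0, lane_direction_deg=0.0))  # False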
3. The method of claim 1 or 2, further comprising:
in response to determining that a second traffic light is arranged at the traffic intersection where the first traffic light is located, determining a control logic relationship between the first traffic light and the second traffic light based on a topological structure between the lane controlled by the first traffic light and a lane controlled by the second traffic light;
determining a control signal of the second traffic light based on the control logic relationship and the control signal of the first traffic light; and
determining the driving strategy of the host vehicle based on the determined control signal of the second traffic light.
4. The method of claim 3, wherein the determining the control signal of the second traffic light comprises:
determining a first duration for which the control signal of the first traffic light will continue; and
determining, based on the first duration and the control logic relationship, a second duration for which the control signal of the second traffic light will continue.
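A minimal sketch, under stated assumptions, of how the second light's signal and remaining duration might be derived (claims 3 and 4). The two-value encoding of the lane topology ("concurrent" vs. "conflicting") and the phase-inversion rule are illustrative assumptions; the claims only require that some control logic relationship be derived from the topological structure between the two controlled lanes.

from enum import Enum

class Phase(Enum):
    RED = "red"
    GREEN = "green"

def infer_second_light(first_phase: Phase, first_remaining_s: float,
                       lane_relationship: str) -> tuple:
    """Derive the second traffic light's control signal and the duration for
    which it will continue, from the first light's signal and the topological
    relationship between the lanes the two lights control."""
    if lane_relationship == "concurrent":
        # The two lanes can be released together: same phase, same remaining time.
        return first_phase, first_remaining_s
    if lane_relationship == "conflicting":
        # The two lanes cross at the intersection: assume opposite phases, and
        # assume the second light holds its phase at least as long as the first.
        opposite = Phase.RED if first_phase is Phase.GREEN else Phase.GREEN
        return opposite, first_remaining_s
    raise ValueError(f"unknown lane relationship: {lane_relationship}")

# Example: the first light stays green for another 12 s and the two lanes
# conflict, so the second light is taken to stay red for at least those 12 s.
print(infer_second_light(Phase.GREEN, 12.0, "conflicting"))  # (<Phase.RED: 'red'>, 12.0)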
5. The method of claim 4, further comprising:
acquiring a current position of a first obstacle vehicle; and
in response to determining that the first obstacle vehicle is located on the lane controlled by the second traffic light, correcting a prediction result of a trained motion model for a travel trajectory of the first obstacle vehicle based on the determined control signal of the second traffic light and travel information of the first obstacle vehicle, so as to determine a predicted travel trajectory of the first obstacle vehicle.
6. The method of claim 5, wherein the determining the predicted travel trajectory of the first obstacle vehicle comprises:
in response to determining that the control signal of the second traffic light is a red light and that the prediction result of the trained motion model indicates that the first obstacle vehicle cannot stop before a corresponding stop line, acquiring a current speed of the first obstacle vehicle and a distance of the first obstacle vehicle from the corresponding stop line;
determining an acceleration value required for the first obstacle vehicle to stop before the corresponding stop line; and
in response to determining that the acceleration value is within a preset threshold range, correcting the prediction result, wherein the corrected prediction result indicates that the first obstacle vehicle can stop before the corresponding stop line.
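The correction in claim 6 can be read as a simple kinematic check: from the obstacle vehicle's current speed v and its distance d to the stop line, the constant deceleration needed to stop at the line is v^2 / (2*d); if that value lies within an achievable range, the model's "cannot stop" prediction is overridden. The sketch below assumes SI units and a 4 m/s^2 limit, neither of which is specified by the claim.

def correct_red_light_prediction(speed_mps: float, dist_to_stop_line_m: float,
                                 model_predicts_stop: bool,
                                 max_decel_mps2: float = 4.0) -> bool:
    """Return the corrected 'will stop before the stop line' flag for an
    obstacle vehicle facing a red light (claim 6)."""
    if model_predicts_stop:
        return True  # the motion model already predicts a stop; nothing to correct
    if dist_to_stop_line_m <= 0.0:
        return False  # already at or past the line; keep the model's prediction
    # Constant deceleration that brings the vehicle to rest exactly at the line.
    required_decel = speed_mps ** 2 / (2.0 * dist_to_stop_line_m)
    # If this deceleration is within the preset achievable range, override the
    # model and predict that the vehicle stops before the line.
    return required_decel <= max_decel_mps2

# 10 m/s with 20 m to the line needs 2.5 m/s^2, so the prediction is corrected.
print(correct_red_light_prediction(10.0, 20.0, model_predicts_stop=False))  # True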
7. The method of claim 5, wherein the determining the predicted travel trajectory of the first obstacle vehicle comprises:
in response to determining that the control signal of the second traffic light is a green light, that the second duration for which the green light will continue is less than a first threshold, and that the prediction result of the trained motion model indicates that the first obstacle vehicle cannot cross a corresponding stop line, acquiring a current speed of the first obstacle vehicle and a distance of the first obstacle vehicle from the corresponding stop line;
determining an acceleration value required for the first obstacle vehicle to cross the corresponding stop line within the second duration; and
in response to determining that the acceleration value is within a preset threshold range, correcting the prediction result, wherein the corrected prediction result indicates that the first obstacle vehicle can cross the corresponding stop line.
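Claim 7 mirrors the same idea for a green light that is about to end: with remaining green time t, speed v, and distance d to the stop line, the constant acceleration needed to cross in time follows from d = v*t + 0.5*a*t^2, i.e. a = 2*(d - v*t) / t^2. The 3 m/s^2 ceiling below is an assumed stand-in for the claim's preset threshold range.

def correct_green_ending_prediction(speed_mps: float, dist_to_stop_line_m: float,
                                    green_remaining_s: float,
                                    model_predicts_cross: bool,
                                    max_accel_mps2: float = 3.0) -> bool:
    """Return the corrected 'will cross the stop line before the green ends'
    flag for an obstacle vehicle (claim 7)."""
    if model_predicts_cross:
        return True  # nothing to correct
    if green_remaining_s <= 0.0:
        return False  # the green phase is over; keep the model's prediction
    # Constant acceleration needed to cover the distance within the remaining
    # green time: d = v*t + 0.5*a*t^2  =>  a = 2*(d - v*t) / t^2
    required_accel = (2.0 * (dist_to_stop_line_m - speed_mps * green_remaining_s)
                      / green_remaining_s ** 2)
    # A non-positive value means the vehicle crosses without accelerating at all.
    return required_accel <= max_accel_mps2

# 8 m/s with 30 m to go and 3 s of green left needs about 1.3 m/s^2, so the
# prediction is corrected to 'crosses'.
print(correct_green_ending_prediction(8.0, 30.0, 3.0, model_predicts_cross=False))  # True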
8. The method of any of claims 5 to 7, further comprising:
determining the driving strategy of the host vehicle based on the predicted travel trajectory of the first obstacle vehicle.
9. An automatic driving device, comprising:
an information acquisition unit configured to acquire travel information of a host vehicle, wherein the travel information comprises a planned travel trajectory and a vehicle body orientation of the host vehicle;
a first determination unit configured to determine, in response to acquiring a control signal of a first traffic light located in front of the host vehicle, a lane controlled by the first traffic light; and
a second determination unit configured to determine, in response to determining based on the planned travel trajectory that the host vehicle is located on the lane, whether to determine a driving strategy of the host vehicle based on the control signal, based on an included angle between the vehicle body orientation and an extending direction of the lane.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
11. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 8.
12. An autonomous vehicle comprising the electronic device of claim 10.
CN202111640239.1A 2021-12-29 2021-12-29 Automatic driving method, device, vehicle, storage medium and product Active CN114212108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111640239.1A CN114212108B (en) 2021-12-29 2021-12-29 Automatic driving method, device, vehicle, storage medium and product


Publications (2)

Publication Number Publication Date
CN114212108A (en) 2022-03-22
CN114212108B (en) 2024-07-09

Family

ID=80706776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111640239.1A Active CN114212108B (en) 2021-12-29 2021-12-29 Automatic driving method, device, vehicle, storage medium and product

Country Status (1)

Country Link
CN (1) CN114212108B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115257807B (en) * 2022-07-27 2024-01-30 武汉大学 Urban road scene automatic driving decision-making method and device based on knowledge graph
CN115331471B (en) * 2022-08-10 2024-07-09 阿波罗智联(北京)科技有限公司 V2X-based intelligent navigation scheduling method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110542931A (en) * 2018-05-28 2019-12-06 北京京东尚科信息技术有限公司 traffic light detection method and device, electronic equipment and computer readable medium
CN110706494A (en) * 2019-10-30 2020-01-17 北京百度网讯科技有限公司 Control method, device, equipment and storage medium for automatic driving vehicle

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007219743A (en) * 2006-02-15 2007-08-30 Denso Corp Traveling control system for automobile
JP6262618B2 (en) * 2014-08-12 2018-01-17 株式会社ゼンリン Driving support system, data structure
CN106408975B (en) * 2016-06-17 2018-10-19 京东方科技集团股份有限公司 Vehicle travels prediction technique, device and vehicle intelligent system
EP3680876A1 (en) * 2019-01-08 2020-07-15 Visteon Global Technologies, Inc. Method for planning trajectory of vehicle
US20210166145A1 (en) * 2019-12-02 2021-06-03 Lyft, Inc. Leveraging Traffic Patterns to Understand Traffic Rules
CN111002984A (en) * 2019-12-24 2020-04-14 北京汽车集团越野车有限公司 Automatic driving method and device, vehicle and automatic driving equipment
CN113257019B (en) * 2020-02-11 2022-07-15 阿波罗智联(北京)科技有限公司 Traffic light signal control method, device, equipment and storage medium
CN111380555A (en) * 2020-02-28 2020-07-07 北京京东乾石科技有限公司 Vehicle behavior prediction method and device, electronic device, and storage medium
CN111289008B (en) * 2020-04-28 2021-04-13 南京维思科汽车科技有限公司 Local path planning method for unmanned vehicle
CN112614359B (en) * 2020-12-21 2022-06-28 阿波罗智联(北京)科技有限公司 Traffic control method and device, road side equipment and cloud control platform


Also Published As

Publication number Publication date
CN114212108A (en) 2022-03-22

Similar Documents

Publication Publication Date Title
JP7355877B2 (en) Control methods, devices, electronic devices, and vehicles for road-cooperative autonomous driving
US20190317513A1 (en) Sensor aggregation framework for autonomous driving vehicles
CN110641472A (en) Safety monitoring system for autonomous vehicle based on neural network
CN114212108B (en) Automatic driving method, device, vehicle, storage medium and product
US20230047404A1 (en) Driver assistance system and method
CN114758502B (en) Dual-vehicle combined track prediction method and device, electronic equipment and automatic driving vehicle
US20200401149A1 (en) Corner case detection and collection for a path planning system
CN115092130A (en) Vehicle collision prediction method, device, electronic apparatus, medium, and vehicle
CN115265537A (en) Navigation system with traffic state detection mechanism and method of operation thereof
CN114771533A (en) Control method, device, equipment, vehicle and medium for automatic driving vehicle
CN114394111B (en) Lane changing method for automatic driving vehicle
CN116776151A (en) Automatic driving model capable of performing autonomous interaction with outside personnel and training method
CN117035032A (en) Method for model training by fusing text data and automatic driving data and vehicle
CN116880462A (en) Automatic driving model, training method, automatic driving method and vehicle
CN115235487B (en) Data processing method, device, equipment and medium
CN115583243B (en) Method for determining lane line information, vehicle control method, device and equipment
CN114333368B (en) Voice reminding method, device, equipment and medium
CN115171392B (en) Method for providing early warning information for vehicle and vehicle-mounted terminal
CN114179834B (en) Vehicle parking method, device, electronic equipment, medium and automatic driving vehicle
CN116434041B (en) Mining method, device and equipment for error perception data and automatic driving vehicle
CN114379588B (en) Inbound state detection method, apparatus, vehicle, device and storage medium
CN114604241A (en) Vehicle driving risk assessment method and device, electronic equipment and edge computing equipment
US20230024799A1 (en) Method, system and computer program product for the automated locating of a vehicle
CN117724361A (en) Collision event detection method and device applied to automatic driving simulation scene
CN116991157A (en) Automatic driving model with human expert driving capability, training method and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant