WO2022247733A1 - Control method and device - Google Patents

Control method and device

Info

Publication number
WO2022247733A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
pedestrian
request information
action
information
Prior art date
Application number
PCT/CN2022/093988
Other languages
English (en)
French (fr)
Inventor
刘航 (Liu Hang)
隋琳琳 (Sui Linlin)
夏媛 (Xia Yuan)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP22810460.0A (published as EP4331938A1)
Publication of WO2022247733A1

Classifications

    • G08G1/005: Traffic control systems for road vehicles including pedestrian guidance indicator
    • B60Q1/50: Optical signalling or lighting devices intended to give signals to other traffic, for indicating other intentions or conditions, e.g. request for waiting or overtaking
    • B60Q1/5037: Such devices using luminous text or symbol displays in or on the vehicle, with electronic displays whose content changes automatically, e.g. depending on the traffic situation
    • B60Q1/507: Such devices specific to autonomous vehicles
    • B60Q1/547: Such devices for issuing requests to other traffic participants, or for confirming to other traffic participants that they can proceed, e.g. overtake
    • B60W60/001: Drive control systems specially adapted for autonomous road vehicles; planning or execution of driving tasks
    • B60W60/0027: Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60Q2400/50: Special features of exterior signal lamps; projected symbol or information, e.g. onto the road or car body
    • B60W2554/4029: Input parameters relating to dynamic objects; type: pedestrians
    • B60W2554/4045: Input parameters relating to dynamic objects; characteristics: intention, e.g. lane change or imminent movement

Definitions

  • the present application relates to the technical field of automatic driving, and in particular to a control method and device.
  • The automatic driving system's determination and prediction of road participants' intentions is the basis of its path planning and an important condition for road safety.
  • The automatic driving system can predict a pedestrian's intention based on, among other things, the pedestrian's direction of movement. For example, it can use machine-learning algorithms to estimate the pedestrian's trajectory from the direction of movement and thereby predict the pedestrian's intention.
  • However, this motion-based prediction method may produce deviating predictions of pedestrian intention, reducing driving safety.
  • the embodiment of the present application provides a control method and device, which are applied in the field of automatic driving technology.
  • The method includes: controlling the target device in the vehicle to display first request information in a target area. Because the first request information is used to request the pedestrian to perform a target action, and the target action expresses the pedestrian's road participation intention, the vehicle's driving strategy can be determined by recognizing the action the pedestrian makes. In this way, even without a driver's participation, the vehicle can query the pedestrian's road participation intention through the first request information. Thus, in an automatic driving scenario, the vehicle can interact with the pedestrian's intention, obtain the pedestrian's accurate road participation intention, select an appropriate driving strategy, and thereby improve driving safety.
  • An embodiment of the present application provides a control method. The method includes: controlling a target device in the vehicle to display first request information in a target area, where the first request information is used to request a pedestrian to perform a target action, the target action is used to express the pedestrian's road participation intention, and the target area is within the pedestrian's visual range; recognizing the action made by the pedestrian; and determining the driving strategy of the vehicle according to the recognition result.
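The three steps of the method above (display a request, recognize the pedestrian's action, determine a driving strategy) can be sketched in code. All action names, strategy names, and the mapping between them below are illustrative assumptions, not part of the patent:

```python
from enum import Enum

class DrivingStrategy(Enum):
    YIELD_TO_PEDESTRIAN = "yield"   # stop and let the pedestrian cross
    PROCEED = "proceed"             # continue driving
    WAIT_AND_REQUERY = "wait"       # no usable answer; keep waiting or ask again

# Hypothetical mapping from a recognized target action to the road
# participation intention it expresses, and hence to a driving strategy.
ACTION_TO_STRATEGY = {
    "raise_hand": DrivingStrategy.YIELD_TO_PEDESTRIAN,  # "I want to cross"
    "wave_on": DrivingStrategy.PROCEED,                 # "you go first"
}

def decide_driving_strategy(recognized_action):
    """Determine the vehicle's driving strategy from the recognition result."""
    return ACTION_TO_STRATEGY.get(recognized_action, DrivingStrategy.WAIT_AND_REQUERY)
```

An unrecognized or absent action falls through to a conservative waiting strategy rather than a guess, which matches the safety motivation of the method.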
  • In this way, the vehicle can query the pedestrian's road participation intention through the first request information, so that in an automatic driving scenario the vehicle can interact with the pedestrian's intention, obtain the pedestrian's accurate road participation intention, and select an appropriate driving strategy, thereby improving driving safety.
  • The first request information being used to request the pedestrian to perform the target action includes: the first request information includes indication information indicating an expected action, and the expected action is associated with the pedestrian's road participation intention.
  • Determining the driving strategy of the vehicle according to the recognition result includes: determining the driving strategy according to the pedestrian having made the expected action. In this way, by instructing the pedestrian to perform a desired action, the vehicle can determine its driving strategy according to the action the pedestrian makes, thereby improving driving safety.
  • the expected action includes a first expected action and a second expected action
  • the first expected action is associated with the pedestrian's first road participation intention
  • the second expected action is associated with the pedestrian's second road participation intention
  • Determining the driving strategy of the vehicle according to the recognition result includes: determining the driving strategy according to whether the pedestrian's action is the first expected action or the second expected action. In this way, the driving strategy can be chosen based on which expected action the pedestrian performs, thereby improving driving safety.
  • The first request information being used to request the pedestrian to perform a target action includes: the first request information includes indication information indicating multiple expected actions, the multiple expected actions being associated with multiple road participation intentions of the pedestrian. The method further includes: when the action made by the pedestrian is not any one of the multiple expected actions, controlling the target device in the vehicle to display second request information in the target area, the second request information being used to instruct the pedestrian to perform a first road participation behavior. In this way, by directly telling the pedestrian which road participation behavior to perform, invalid waiting time can be saved and road traffic efficiency improved.
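This fallback branch can be sketched as follows; the function name, the message text, and the concrete behavior are assumptions for illustration:

```python
def handle_unmatched_action(recognized_action, expected_actions, display_fn):
    """Return the matched expected action, or, if the pedestrian's action
    matches none of the expected actions, display second request information
    that directly instructs a concrete road participation behavior."""
    if recognized_action in expected_actions:
        return ("matched", recognized_action)
    # Second request: tell the pedestrian what to do instead of waiting further.
    display_fn("Please cross the road now")  # hypothetical second request content
    return ("instructed", "cross_road")
```

Switching from an open question to a direct instruction bounds the interaction time, which is the traffic-efficiency gain claimed above.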
  • determining the driving strategy of the vehicle according to the recognition result includes: determining the driving strategy of the vehicle according to the first road participation behavior of the pedestrian. In this way, the vehicle can realize the intentional interaction with pedestrians, thereby improving driving safety.
  • the second request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
  • the first request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
  • the target device is a projection system
  • the target area is an area outside the vehicle.
  • the target area is the ground
  • Controlling the target device in the vehicle to display the first request information in the target area includes: controlling the projection system to project the first request information on the ground when the ground meets the projection conditions.
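The "ground meets the projection conditions" check could, for example, gate projection on ambient light, ground slope, and surface quality. The condition names and thresholds below are purely illustrative assumptions:

```python
def ground_meets_projection_conditions(ambient_lux, slope_deg, surface_ok):
    """Illustrative projection-condition check: projected content should be
    legible (not washed out by daylight), on roughly flat ground, and on a
    surface that reflects enough light."""
    return ambient_lux < 5000 and abs(slope_deg) < 10 and surface_ok

def project_first_request(project_fn, content, ambient_lux, slope_deg, surface_ok):
    """Project the first request information only when the conditions hold."""
    if ground_meets_projection_conditions(ambient_lux, slope_deg, surface_ok):
        project_fn(content)
        return True
    return False
```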
  • the target device is a display device
  • the target area is a display screen
  • controlling the target device in the vehicle to display the first request information in the target area includes: controlling the display device to display the first request information on the display screen.
  • the embodiment of the present application provides a control device, which can be used to perform the operations in the foregoing first aspect and any possible implementation manners of the first aspect.
  • the apparatus may include a module or unit configured to perform operations in the first aspect or any possible implementation of the first aspect.
  • It includes a control unit, a recognition unit, and a processing unit.
  • The control unit is configured to control the target device in the vehicle to display the first request information in the target area, where the first request information is used to request the pedestrian to perform a target action, the target action is used to express the pedestrian's road participation intention, and the target area is within the pedestrian's visible range. The recognition unit is used to recognize the actions made by the pedestrian. The processing unit is used to determine the driving strategy of the vehicle according to the recognition result.
  • The first request information includes indication information indicating an expected action, and the expected action is associated with the road participation intention; the processing unit is specifically configured to determine the driving strategy of the vehicle according to the pedestrian having made the expected action.
  • the expected action includes a first expected action and a second expected action
  • the first expected action is associated with the pedestrian's first road participation intention
  • the second expected action is associated with the pedestrian's second road participation intention
  • The processing unit is specifically configured to determine the driving strategy of the vehicle according to whether the action made by the pedestrian is the first expected action or the second expected action.
  • The first request information includes indication information indicating multiple expected actions, and the multiple expected actions are associated with multiple road participation intentions. The control unit is further configured to: when the action made by the pedestrian is not any one of the multiple expected actions, control the target device in the vehicle to display the second request information in the target area, the second request information being used to instruct the pedestrian to perform the first road participation behavior.
  • the processing unit is specifically configured to: determine the driving strategy of the vehicle according to the pedestrian's first road participation behavior.
  • the second request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
  • the first request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
  • the target device is a projection system
  • the target area is an area outside the vehicle.
  • the target area is the ground
  • the control unit is specifically configured to: control the projection system to project the first request information on the ground when the ground satisfies the projection condition.
  • the target device is a display device
  • the target area is a display screen
  • The control unit is specifically configured to control the display device to display the first request information on the display screen.
  • An embodiment of the present application provides a control device. The device includes a memory and a processor; the memory stores computer program instructions, and the processor executes the computer program instructions to implement the method described in the first aspect and its various possible implementations.
  • An embodiment of the present application provides a vehicle. The vehicle includes the device described in the second aspect and its various possible implementations.
  • the vehicle further includes a perception system and a target device, where the target device is a projection system or a display device.
  • An embodiment of the present application provides a computer-readable storage medium in which a computer program or instructions are stored. When the computer program or instructions are run on a computer, the computer is caused to execute the method described in the first aspect and its various possible implementations.
  • An embodiment of the present application provides a computer program product which, when run on a processor, causes the control device to execute the method described in the first aspect and its various possible implementations.
  • An embodiment of the present application provides a control system. The system includes the apparatus described in the second aspect and its various possible implementations.
  • The present application provides a chip or chip system. The chip or chip system includes at least one processor and a communication interface; the communication interface and the at least one processor are interconnected through lines, and the at least one processor is used to run computer programs or instructions to implement the method described in the first aspect and its various possible implementations.
  • the communication interface in the chip may be an input/output interface, a pin or a circuit, and the like.
  • the chip or the chip system described above in the present application further includes at least one memory, and instructions are stored in the at least one memory.
  • The memory may be a storage unit inside the chip, such as a register or a cache, or a storage unit outside the chip (e.g., a read-only memory, a random access memory, etc.).
  • FIG. 1 is a schematic diagram of a pedestrian crossing a road provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of a vehicle intention reminder in a possible design
  • FIG. 3 is a schematic diagram of another vehicle intention reminder in a possible design
  • FIG. 4 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a vehicle-mounted projection interaction system provided by an embodiment of the present application.
  • FIG. 6 is a functional block diagram of a possible vehicle provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a computer system provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a control method provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a first request information provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a control method provided in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of an intention interaction provided by an embodiment of the present application.
  • FIG. 12 is a schematic flow chart of a control method provided in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a second request information provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a second request information provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an intention interaction provided by an embodiment of the present application.
  • FIG. 16 is a schematic flow chart of a control method provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a control device provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of another control device provided by an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • Words such as "first" and "second" are used to distinguish identical or similar items with basically the same function and effect.
  • For example, a first value and a second value are only used to distinguish different values; their order is not limited.
  • Words such as "first" and "second" do not limit the quantity or execution order, nor do they necessarily indicate a difference.
  • “at least one” means one or more, and “multiple” means two or more.
  • "And/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the contextual objects are an “or” relationship.
  • "At least one of the following" or similar expressions refers to any combination of the listed items, including any combination of single or plural items.
  • For example, at least one of a, b, or c can represent: a; b; c; a and b; a and c; b and c; or a, b, and c; where each of a, b, and c can be singular or plural.
  • FIG. 1 is a schematic diagram of a pedestrian crossing a road, provided as an example by an embodiment of this application.
  • In an automatic driving system, there may be no driver, so the system cannot interact with pedestrians through gestures or actions the way a driver can. Among road traffic participants, therefore, determining and predicting the intention of pedestrians is particularly difficult.
  • The automatic driving system can predict a road participant's intention based on information such as the participant's direction, speed, and the road topology. For example, it can use machine-learning algorithms to estimate the participant's trajectory from this information and thereby predict the pedestrian's intention.
  • The above motion-based prediction method is suitable for road participants with definite movement trajectories and trends; it cannot predict the intentions of stationary road participants. The prediction algorithm may therefore fail to obtain effective input and cannot accurately predict or determine road participants' intentions. Moreover, deviations in the predicted pedestrian intention reduce driving safety, and may lead to misjudgment scenarios, danger, and even traffic safety accidents.
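As a concrete illustration of this limitation, a constant-velocity trajectory extrapolation (a deliberately simplified stand-in for the learned trajectory estimators mentioned in the text) produces no usable signal for a stationary pedestrian:

```python
def predict_positions(pos, velocity, horizon_s, dt=0.5):
    """Extrapolate future (x, y) positions under a constant-velocity model."""
    x, y = pos
    vx, vy = velocity
    steps = int(horizon_s / dt)
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, steps + 1)]

# A walking pedestrian (1 m/s) yields an informative trajectory; a stationary
# one yields the same point repeated, giving the predictor nothing to work with.
walking = predict_positions((0.0, 0.0), (1.0, 0.0), horizon_s=2.0)
standing = predict_positions((0.0, 0.0), (0.0, 0.0), horizon_s=2.0)
```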
  • The self-driving system can use a light signal projection device, based on car light technology, to project early warning information on the ground to remind road participants. The light signal projection device includes a light source, a light-transmitting mirror, and a reflector: the light beam diverging from the light source is reflected by the reflector and then passes through shapes, symbols, patterns, or text designed on the light-transmitting mirror, displaying reminder information on the ground for road participants.
  • FIG. 2 is a schematic diagram of a vehicle intention reminder in a possible design.
  • In one example, the early warning information is reversing instruction information. The automatic driving system uses the light signal projection device to project the reversing instruction information onto the reversing area at the rear of the vehicle to remind road participants behind, or passing behind, the vehicle. The light signal projection device can be installed at the rear of the vehicle, and the reversing area can be the area behind the vehicle, for example bounded by stop lines.
  • The reversing instruction information can be text, a shape, or a symbol, used to indicate that the vehicle is reversing or about to reverse.
  • FIG. 3 is a schematic diagram of another vehicle intention reminder in a possible design.
  • In another example, the early warning information is door-opening instruction information. The automatic driving system uses the light signal projection device to project the door-opening instruction information on the ground to remind road participants at the side of the vehicle. The light signal projection device can be installed on the car door, and the door-opening instruction information can be text, symbols, or shapes used to indicate that a door is opening or about to open.
  • Based on an optical signal projection device alone, however, the automatic driving system only gives a one-way warning to road participants; it cannot guarantee that road participants notice the warning information, understand it, or act according to it. The automatic driving system therefore cannot accurately determine the intentions of road participants.
  • an embodiment of the present application provides a control method and device, which are applied in the field of automatic driving technology.
  • The method includes: controlling the display device in the vehicle to display the first request information in the target area. Since the first request information is used to request the pedestrian to perform the target action, and the target action expresses the pedestrian's road participation intention, the driving strategy of the vehicle can be determined by recognizing the action the pedestrian makes. In this way, even without a driver's participation, the vehicle can inquire about the pedestrian's road participation intention through the first request information; in an automatic driving scenario, the vehicle can thus interact with the pedestrian's intention and obtain an appropriate driving strategy, thereby improving driving safety.
  • FIG. 4 is a schematic diagram of an application scenario provided by an embodiment of the application. As shown in Figure 4, the vehicle recognizes that a pedestrian is standing at the side of the road, but cannot determine whether, at the next moment, the pedestrian will continue standing there or cross the road. The vehicle can therefore display the first request information on the ground, and the pedestrian can act according to it. In this way, even without a driver's participation, the vehicle can learn the pedestrian's road participation intention by recognizing the pedestrian's action, thereby realizing interaction with the pedestrian's intent.
  • FIG. 5 is a schematic diagram of a vehicle-mounted projection interaction system provided by an embodiment of the present application.
  • the system includes a decision-making system, a perception system and a projection system.
  • The projection system is a control system, hereinafter referred to as the projection system, and includes a projection device. The overall system comprises an interface between the decision-making system and the perception system, an interface between the decision-making system and the projection system, and an interface between the projection system and the projection device.
  • The decision-making system can activate the projection device based on the information transmitted over its interface with the perception system, so that the projection system can instruct the projection device, based on the information transmitted over its interface with the decision-making system, to project the request information. The projection device may then display the request information in the target area based on the information transmitted over its interface with the projection system. The request information may include the first request information or the second request information.
• on the one hand, the information transmitted over the interface between the decision-making system and the perception system can be embodied as: the decision-making system instructing the perception system to perceive pedestrian information, where the pedestrian information includes but is not limited to tracking the pedestrian, identifying the action made by the pedestrian, or identifying the duration of the pedestrian's action; on the other hand, it can be embodied as: the perception action information input by the perception system to the decision-making system, which includes but is not limited to information such as whether the pedestrian's action matches the expected action. Here, the pedestrian's action matching the expected action can be understood as the action being the first expected action or the second expected action, and the pedestrian's action not matching the expected action can be understood as the action not being any one of the multiple expected actions, or the pedestrian not performing any action.
• Case 1: the information transmitted over the interface between the decision-making system and the projection system can be embodied as: the request information determined by the decision-making system, that is, the first request information determined by the decision-making system, so that the projection system can instruct the projection device based on it. Since the information transmitted over the interface between the projection device and the projection system can be embodied as the first request information displayed in the target area, the projection device may display the first request information in the target area based on the information transmitted over its interface with the projection system; wherein, the first request information includes at least one of the projection content, the display position of the projection content, the duration of the projection content, the display angle of the projection content, the display brightness of the projection content, or the display color of the projection content.
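The attributes of the request information listed above can be illustrated with a small data structure. This is a minimal sketch for illustration only; all field names, types, and units are assumptions, not the patent's implementation:

```python
# Hypothetical sketch of the attributes the first request information may carry,
# per the list above (content, position, duration, angle, brightness, color).
from dataclasses import dataclass

@dataclass
class RequestInfo:
    content: str          # projection content, e.g. text or a graphic identifier
    position: tuple       # display position of the content, e.g. (x, y) on the ground
    duration_s: float     # how long the content stays displayed, in seconds
    angle_deg: float      # display angle of the projected content
    brightness: float     # display brightness, normalized here to 0.0-1.0
    color: str            # display color of the projected content

# Example: a first request asking the pedestrian to perform a target action,
# projected on the ground area between the vehicle and the pedestrian.
first_request = RequestInfo(
    content="raise left hand to walk / raise right hand to stop",
    position=(2.0, 0.5),
    duration_s=5.0,
    angle_deg=0.0,
    brightness=0.8,
    color="white",
)
```

A structure like this could be what the decision-making system passes over its interface to the projection system, with the projection device consuming the display-related fields.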
• Case 2: In the case that the action performed by the pedestrian does not match the expected action, that is, the action taken by the pedestrian is not any one of the multiple expected actions, the information transmitted over the interface between the decision-making system and the projection system can be embodied as: the switched request information determined by the decision-making system, that is, the second request information determined by the decision-making system, so that the projection system can instruct the projection device based on the second request information;
• the information transmitted over the interface between the projection device and the projection system can be embodied as the second request information displayed in the target area; therefore, the projection device can display the second request information in the target area based on that information. The second request information includes but is not limited to at least one of the display position of the projected content, the duration of the projected content, the display angle of the projected content, the display brightness of the projected content, or the display color of the projected content.
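The two cases above amount to a simple decision rule in the decision-making system: keep the first request information while the pedestrian's action matches an expected action, and switch to the second request information otherwise. A minimal sketch, assuming actions are represented as string labels (not the patent's implementation):

```python
def select_request(recognized_action, expected_actions, first_request, second_request):
    """Return the request information the projection system should display next.

    Case 1: the pedestrian's action matches one of the expected actions, so the
    first request information remains in effect.
    Case 2: the action matches none of the expected actions, or no action was
    performed (recognized_action is None), so the decision-making system switches
    to the second request information.
    """
    if recognized_action is not None and recognized_action in expected_actions:
        return first_request      # Case 1: action matched an expected action
    return second_request         # Case 2: no match -> switched request information

# Example with assumed action labels:
expected = {"raise_left_hand", "raise_right_hand"}
matched = select_request("raise_left_hand", expected, "first", "second")
unmatched = select_request(None, expected, "first", "second")
```

Here `matched` is `"first"` and `unmatched` is `"second"`, mirroring Case 1 and Case 2 respectively.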
  • the information transmitted in the interface between the decision-making system and the perception system may include instructions instructing the perception system to perceive pedestrians, so that the perception system can perceive pedestrian information based on the instructions; the interface between the decision-making system and the projection system The information transmitted in the interface may also include an instruction to activate the projection system, so that the projection system can be activated based on the instruction; the information transmitted in the interface between the projection device and the projection system may include an instruction to instruct the projection device to perform projection, so that the projection The device may project the first request information or the second request information based on the instruction.
• through the information transmitted over the interface between the decision-making system and the perception system, the interface between the decision-making system and the projection system, and the interface between the projection system and the projection device, the vehicle can display the request information in the target area, so that the perception system can recognize the actions made by pedestrians according to the first request information, or recognize the road participation behavior made by pedestrians according to the second request information; the decision-making system can thereby determine the pedestrian's intention and then determine the driving strategy of the vehicle.
• the vehicle can exchange information with pedestrians in both directions, so that the vehicle can clarify the pedestrian's intention and then perform safe driving control and decision-making.
  • FIG. 6 is a functional block diagram of a possible vehicle 600 provided by the embodiment of the present application.
• the vehicle 600 can be configured in a fully or partially automatic driving mode. The vehicle 600 can be a car, truck, motorcycle, bus, lawn mower, recreational vehicle, playground vehicle, construction equipment, tram, golf cart, train or trolley, etc., which is not specifically limited in the embodiments of the present application.
• when the vehicle 600 is in a partially automatic driving mode, after the vehicle 600 determines the current state of the vehicle and its surrounding environment, the user operates the vehicle 600 based on that state. For example, when the vehicle 600 determines the possible behavior of a pedestrian in the surrounding environment, the vehicle can control the target device in the vehicle to display the first request information in the target area according to the pedestrian's possible behavior, and the pedestrian can take an action according to the first request information. After recognizing the action made by the pedestrian, the vehicle can notify the user of the pedestrian's road participation intention by voice, so that the user can perform operations on the vehicle related to that intention.
• when the vehicle 600 is in a fully automatic driving mode, the vehicle 600 can automatically perform driving-related operations. For example, the vehicle 600 determines the possible behavior of a pedestrian in the surrounding environment, controls the target device in the vehicle to display the first request information in the target area according to the pedestrian's possible behavior, and recognizes the pedestrian's road participation intention, so that the vehicle can automatically perform operations related to that intention.
  • vehicle 600 includes travel system 202 , sensor system 204 , control system 206 , one or more peripheral devices 208 , computer system 212 , power supply 210 , and user interface 216 .
  • vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple elements. Wherein, each subsystem and component of the vehicle 600 may be interconnected by wire or wirelessly.
  • the traveling system 202 includes: an engine 218 , a transmission 220 , an energy source 219 and wheels 221 .
  • the sensor system 204 comprises several sensors sensing information about the environment surrounding the vehicle 600 .
  • the sensor system 204 may include: a positioning system 222 , an inertial measurement unit (inertial measurement unit, IMU) 224 , a millimeter wave radar 226 , a laser radar 228 and a camera 230 .
  • the positioning system 222 may be a global positioning system (global positioning system, GPS), and may also be a Beidou system or other positioning systems.
  • the positioning system 222 can be used to estimate the geographic location of the vehicle 600, and the IMU 224 is used to sense the position and orientation changes of the vehicle 600 based on inertial acceleration.
  • IMU 224 may be a combination accelerometer and gyroscope.
• the sensor system 204 may further include sensors that monitor internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge and/or an engine oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and identify their corresponding characteristics (e.g., position, shape, orientation, and/or velocity, etc.); such detection and identification are key functions for the safe, autonomous operation of the vehicle 600.
  • the millimeter wave radar 226 may utilize radio signals to sense objects within the surrounding environment of the vehicle 600 .
  • the vehicle may use the millimeter wave radar 226 to track pedestrians, identify actions made by pedestrians, or identify the duration of actions made by pedestrians.
  • millimeter wave radar 226 may be used to sense the velocity and/or heading of an object in addition to sensing the object.
  • lidar 228 may utilize laser light to sense objects in the environment in which vehicle 600 is located.
  • lidar 228 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
  • the camera 230 may be used to capture multiple images of the surrounding environment of the vehicle 600 .
• the camera 230 can capture environment data or image data around the vehicle, and the vehicle predicts the road participation intention of pedestrians based on this data, so as to determine whether to control the target device in the vehicle to display the first request information in the target area; wherein, the camera 230 may be a static camera or a video camera.
• the sensor system can serve as a perception system, so that pedestrians can be tracked, or their actions identified, through the millimeter-wave radar 226 or the laser radar 228 in the sensor system, thereby obtaining the information transmitted over the interface between the decision-making system and the perception system, that is, the perception action information.
  • control system 206 may include various elements in order to control the operation of vehicle 600 and its components.
  • control system 206 may include at least one of steering system 232 , accelerator 234 , braking unit 236 , computer vision system 240 , route control system 242 , obstacle avoidance system 244 , and projection control system 254 .
• the control system 206 may additionally or alternatively include components other than those shown and described, or may omit some of the components shown above.
• the projection control system 254 may instruct the projection device, so that the projection device projects the first request information or the second request information.
  • the steering system 232 is operable to adjust the heading of the vehicle 600 .
• the steering system 232 may be a steering wheel system; the throttle 234 is used to control the operating speed of the engine 218 and thus the speed of the vehicle 600; the braking unit 236 is used to control the deceleration of the vehicle 600, and may use friction to slow the wheels 221.
  • the braking unit 236 can convert the kinetic energy of the wheels 221 into electric current, and the braking unit 236 can also take other forms to slow down the rotation speed of the wheels 221 so as to control the speed of the vehicle 600 .
  • computer vision system 240 may process and analyze images captured by camera 230 so that computer vision system 240 can identify objects and/or features of objects in the environment surrounding vehicle 600 .
  • the object and/or the feature of the object may include: a traffic signal, a road boundary or an obstacle.
  • Computer vision system 240 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • computer vision system 240 may be used to map the environment, track objects, estimate the velocity of objects, and the like.
  • Route control system 242 may be used to determine a travel route for vehicle 600
  • obstacle avoidance system 244 may be used to identify, evaluate, and avoid or otherwise overcome potential obstacles in the environment of vehicle 600 , among other possibilities.
  • Vehicle 600 interacts with external sensors, other vehicles, other computer systems, or users through peripherals 208 .
  • peripherals 208 may include: wireless communication system 246 , onboard computer 248 , microphone 250 , and speaker 252 .
  • Wireless communication system 246 may, among other things, communicate wirelessly with one or more devices, either directly or via a communication network.
  • Computer system 212 may include at least one processor 213 that executes instructions 215 stored in data storage device 214 .
  • Computer system 212 may also be a plurality of computing devices that control individual components or subsystems of vehicle 600 in a distributed fashion.
• the processor 213 may be any conventional processor, such as a commercially available central processing unit (central processing unit, CPU). Alternatively, the processor may be a special-purpose device such as an application specific integrated circuit (ASIC) or another hardware-based processor. In various aspects described herein, the processor 213 may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein may be performed by a processor disposed within the vehicle, while others may be performed by a remote processor, including taking the steps necessary to perform a single maneuver.
• the data storage device 214 may contain instructions 215 (e.g., program logic instructions), which may be executed by the processor 213, so that the processor 213 performs various functions of the vehicle 600, including the functions described above.
  • Data storage 214 may also contain additional instructions, including sending data to, receiving data from, interacting with, and/or performing operations on, one or more of propulsion system 202, sensor system 204, control system 206, and peripherals 208. control instructions.
  • data storage device 214 may store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by the vehicle 600 and the computer system 212 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
  • the user interface 216 is used to provide information to or receive information from a user of the vehicle 600 .
  • user interface 216 may include one or more input/output devices within set of peripheral devices 208 , such as wireless communication system 246 , on-board computer 248 , microphone 250 , and speaker 252 .
• Computer system 212 may control functions of vehicle 600 based on input received from various subsystems (eg, travel system 202 , sensor system 204 , and control system 206 ) and from user interface 216 .
  • computer system 212 may utilize input from control system 206 in order to control steering system 232 to avoid obstacles detected by sensor system 204 and obstacle avoidance system 244 .
  • computer system 212 may provide control over many aspects of vehicle 600 and its subsystems.
  • one or more of these components described above may be installed separately from or associated with the vehicle 600 .
  • data storage device 214 may exist partially or completely separate from vehicle 600 .
  • the above-mentioned components may be coupled together in a wired and/or wireless manner.
• the computing device may also provide instructions to modify the steering angle of the vehicle 600 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from obstacles (eg, vehicles in adjacent lanes on the road).
  • FIG. 7 is a schematic structural diagram of a computer system 212 provided in an embodiment of the present application.
  • the computer system 212 includes at least one of a processor 213, a display adapter (video adapter) 107, a transceiver 123, a camera 155, a universal serial bus (universal serial bus, USB) port 125, and the like.
  • the transceiver 123 can send and/or receive radio communication signals
  • the camera 155 can capture static digital video images and dynamic digital video images.
• the processor 213 is coupled to the system bus 105; the system bus 105 is coupled to an input/output (I/O) bus through the bus bridge 111; the I/O bus is coupled to the I/O interface 115, and the I/O interface 115 can communicate with various I/O devices.
• the I/O device can be an input device 117 (such as: a keyboard, a mouse, a touch screen, etc.) or a multimedia tray (media tray) 121 (for example, a compact disc read-only memory (CD-ROM), a multimedia interface, etc.).
  • the interface connected to the I/O interface 115 may be a universal serial bus (universal serial bus, USB) interface.
• the processor 213 may be one or more processors, each of which may include one or more processor cores; the processor 213 may be any conventional processor, including a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, or a combination of the above.
  • the processor may be a dedicated device such as an application specific integrated circuit (ASIC); or, the processor 213 may be a neural network processor or a combination of a neural network processor and the aforementioned traditional processors.
  • the computer system 212 can communicate with the software deployment server 149 through the network interface 129 .
  • network interface 129 may be a hardware network interface (eg, a network card).
  • the network 127 may be an external network (for example, the Internet), or an internal network (for example, Ethernet or a virtual private network (virtual private network, VPN)).
  • the network 127 may also be a wireless network (for example, a wireless-fidelity (wireless-fidelity, WiFi) network, a cellular network).
  • the application program 143 includes a program 147 related to controlling the automatic driving of a car and a program 148 related to projection.
• the program 147 related to automatic driving may include a program for managing the interaction between the self-driving car and obstacles on the road, a program for controlling the route or speed of the self-driving car, a program for controlling the interaction between the self-driving car and other self-driving cars on the road, and the like.
  • the projection-related program 148 may include a program for projecting the first request information or the second request information, and the like.
  • the vehicle can further control the route or speed of the vehicle through the automatic driving related program 147 .
• Application 143 may reside on the system of the software deployment server 149.
  • the computer system may download the application program 143 from the software deployment server 149 when the application program 143 needs to be executed.
  • the sensor 153 is associated with the computer system 212 , and the sensor 153 is used to detect the environment around the computer system 212 .
• the sensor 153 can detect objects such as animals, automobiles, obstacles or pedestrian crossings; further, the sensor 153 can detect the environment around such objects, for example, the brightness of the environment around an animal and the presence of other animals nearby.
  • the sensor may be a camera, an infrared sensor, a chemical detector, or a microphone.
  • the processor 213 shown in FIG. 7 may be a decision-making system, or the processor 213 may include a decision-making system, so that the vehicle can interact with pedestrians through the processor 213 .
• the sensor 153 shown in FIG. 7 can serve as a perception system, and the vehicle can track pedestrians, identify actions made by pedestrians, or identify the duration of those actions through the sensor 153; the vehicle can then obtain the information transmitted over the interface between the processor 213 and the sensor 153, that is, the information transmitted over the interface between the decision-making system and the perception system.
  • the execution subject of the steps described below may be a vehicle, a chip in the vehicle, or a module in the vehicle, etc.
• the following description takes the case where the execution subject is a module in the vehicle (hereinafter referred to as the first module) as an example.
  • the first module may be a multi-domain controller (multi domain controller, MDC) or the like.
  • FIG. 8 is a schematic flowchart of a control method provided in an embodiment of the present application. As shown in FIG. 8, the following steps may be included:
• S801 The first module controls the target device in the vehicle to display the first request information in the target area.
  • the target device can be a device pre-installed in the vehicle by the manufacturer, or a device installed in the vehicle by the user himself.
• the first module displays the first request information in the target area by controlling the target device in the vehicle; the control process can be triggered automatically or manually, so that the vehicle can inquire about the road participation intention of pedestrians based on the first request information. It can be understood that the installation location of the target device can be set according to the actual application scenario, which is not limited in the embodiments of this application.
  • the target area is within the visible range of pedestrians.
  • the target area is the ground area between the vehicle and pedestrians.
• after the first module controls the target device in the vehicle to display the first request information in the target area, the pedestrian can learn of the first request information displayed by the vehicle, and can perform an action related to his road participation intention based on it, thereby expressing that intention. It can be understood that the specific content of the target area may also be set according to the actual application scenario, which is not limited in this embodiment of the present application.
• the first request information is used to request the pedestrian to perform the target action. Since the target action is used to express the pedestrian's road participation intention, the first module can learn, based on the first request information, the pedestrian's road participation intention at the next moment, that is, whether the pedestrian will continue to walk or stop. The target action can be a preset action that is mutually known between the vehicle and the pedestrian, which guarantees that, after the pedestrian makes an action, the vehicle can understand the road participation intention expressed by that action. For example, a pedestrian raising the left hand means that the pedestrian will walk at the next moment, and a pedestrian raising the right hand means that the pedestrian will stop at the next moment.
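Because the target actions are preset and mutually known, the action-to-intention relationship is effectively a lookup table. A minimal sketch of the example mapping above (left hand = walk, right hand = stop); the action labels are assumptions for illustration:

```python
# Assumed action labels; the mapping is mutually known between vehicle and pedestrian.
ACTION_TO_INTENTION = {
    "raise_left_hand": "walk",   # pedestrian will walk at the next moment
    "raise_right_hand": "stop",  # pedestrian will stop at the next moment
}

def road_participation_intention(action):
    """Return the pedestrian's road participation intention expressed by an action,
    or None if the action is not one of the preset target actions."""
    return ACTION_TO_INTENTION.get(action)
```

A `None` result corresponds to the mismatch case described earlier, in which the decision-making system may switch to the second request information.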
• pedestrians can refer to pedestrians walking on the side of the road, pedestrians intending to cross the road, or pedestrians riding bicycles; it can be understood that the specific content of the target action can also be set according to the actual application scenario, which is not limited by the embodiments of the present application.
• the first request information may include one or more of text information, static graphic information, video information or dynamic graphic information; it can be understood that the specific content of the first request information may also be set according to the actual application scenario, which is not limited in this embodiment of the application.
• the text information refers to text such as "pedestrians walk" or "pedestrians do not walk"; the target action that the pedestrian is requested to perform is also displayed next to that text. For example, text meaning "raise the left hand" is displayed next to "pedestrians walk", and text meaning "raise the right hand" is displayed next to "pedestrians do not walk".
  • the static graphic information refers to the static graphic of pedestrians walking or pedestrians not walking.
  • FIG. 9 is a schematic diagram of a first request information provided in the embodiment of this application.
• the static figure for pedestrians walking is a forward arrow symbol, and the static figure for pedestrians not walking is a backward arrow symbol; the target action that the pedestrian is requested to perform is also shown next to the forward or backward arrow symbol. For example, a raised left hand is shown next to the forward arrow symbol, and a raised right hand is shown next to the backward arrow symbol.
• the dynamic graphic information refers to dynamic graphics of pedestrians walking or pedestrians not walking. For example, in front of the pedestrian, the dynamic graphic for pedestrians walking is an arrow symbol advancing step by step, and the dynamic graphic for pedestrians not walking is an arrow symbol retreating step by step; the target action that the pedestrian is requested to perform is also shown next to the advancing or retreating arrow symbol. For example, the left hand is shown next to the advancing arrow symbol, and the right hand is shown next to the retreating arrow symbol.
• the static graphic information or the dynamic graphic information can be passage indication information from traffic signs, for example, a straight-ahead sign or a left-turn sign; through the passage indication information, pedestrians can understand the target action requested in the first request information.
  • the video information refers to the video of pedestrians walking or not walking.
• the video of pedestrians walking is a dynamic picture, and the video of pedestrians not walking is a still picture; the target action that the pedestrian needs to perform is also displayed next to each. For example, the left hand is displayed next to the dynamic picture, and the right hand is displayed next to the still picture.
• the right hand or left hand displayed next to the text information, static graphic information, video information or dynamic graphic information can be text describing raising the left or right hand, or a dynamic picture of raising the left or right hand, which is not limited in the embodiments of this application.
• raising the left hand or raising the right hand is an example of the target action that the pedestrian is requested to perform; the target action can also be set as the pedestrian moving to the left or to the right, or in other ways, which is not limited in this embodiment.
• raising the left hand to indicate that the pedestrian will walk and raising the right hand to indicate that the pedestrian will not walk is an example; raising the left hand may instead indicate not walking and raising the right hand walking, or the mapping may be set in other ways, which is not limited in the embodiments of this application.
• a possible implementation in which the first module controls the target device in the vehicle to display the first request information in the target area is: based on a certain trigger condition, the first module controls the target device to display the first request information in the target area.
• the first module can control the target device to display the first request information on the ground area between the vehicle and the pedestrian, so that the pedestrian can act based on the first request information and the first module can learn the pedestrian's road participation intention from the action made; it can be understood that the specific content of the trigger condition can also be set according to actual application scenarios, which is not limited in this embodiment of the application.
  • S802 The first module recognizes actions made by pedestrians.
  • the first module may identify actions made by pedestrians based on sensors.
• the sensor on the vehicle can obtain the range-velocity map corresponding to the pedestrian through a two-dimensional Fourier transform over range and velocity; the sensor can detect the range-velocity map through a constant false alarm rate algorithm to obtain the point cloud detection result corresponding to the pedestrian, and then obtain the angle of the pedestrian's point cloud target through an angle Fourier transform. The sensor clusters the point cloud into targets through a clustering algorithm, tracks the pedestrian across multiple frames of data using a tracking algorithm, and further analyzes the pedestrian's multi-frame data through time-frequency analysis methods such as the short-time Fourier transform or the wavelet transform, so that the actions made by the pedestrian can be recognized.
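The front end of this pipeline, a two-dimensional Fourier transform of one radar frame to obtain a range-velocity map followed by threshold detection, can be sketched with NumPy. This is a simplified illustration only: the constant false alarm rate step is replaced by a fixed threshold, and the array shapes and the synthetic test signal are assumptions:

```python
import numpy as np

def range_doppler_map(adc_cube):
    """2-D FFT of one radar frame: range FFT along fast time (samples),
    then Doppler FFT along slow time (chirps).

    adc_cube: complex array of shape (num_chirps, num_samples).
    Returns the magnitude of the range-velocity map, zero velocity centered.
    """
    rd = np.fft.fft(adc_cube, axis=1)           # range FFT along each chirp
    rd = np.fft.fft(rd, axis=0)                 # Doppler FFT across chirps
    return np.abs(np.fft.fftshift(rd, axes=0))  # shift zero velocity to center

def detect_targets(rd_map, threshold):
    """Simplified stand-in for constant false alarm rate (CFAR) detection:
    return (doppler_bin, range_bin) indices whose magnitude exceeds a threshold.
    A real CFAR adapts the threshold to the local noise estimate instead."""
    return np.argwhere(rd_map > threshold)

# Synthetic frame: a single stationary tone at range bin 8 appears as one
# strong cell in the range-velocity map (zero-velocity row after the shift).
chirps, samples = 32, 64
n = np.arange(samples)
frame = np.tile(np.exp(2j * np.pi * 8 * n / samples), (chirps, 1))
detections = detect_targets(range_doppler_map(frame), threshold=1000.0)
```

With these shapes the tone concentrates all its energy in one cell, so `detections` contains a single index pair; the subsequent clustering, tracking, and micro-Doppler (STFT) stages described above would operate on such detections across frames.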
  • the implementation of the first module to identify the actions of pedestrians can also be set according to actual application scenarios, which is not limited in this embodiment of the present application.
  • S803 The first module determines the driving strategy of the vehicle according to the recognition result.
• the recognition result may be that the pedestrian has expressed his road participation intention, or that the pedestrian has not expressed it; according to the different recognition results, the first module can determine different driving strategies.
  • the first module may determine the driving strategy of the vehicle based on the pedestrian's road participation intention.
  • when the first module recognizes that a pedestrian is crossing the road, the first module can display the first request information in the target area, where the first request information is used to request the pedestrian to perform a target action, for example, raising the left hand or the right hand; "go" is displayed next to the left-hand text and "no go" is displayed next to the right-hand text, where go means the pedestrian continues to cross the road and no go means the pedestrian stops on the road.
  • if the pedestrian raises the right hand, the first module can determine that the pedestrian's intention is to stop on the road, or in other words, the driving strategy of the vehicle determined by the first module is to continue driving; it can be understood that the implementation by which the first module determines the driving strategy of the vehicle can be set according to the actual application scenario, and is not limited in this embodiment of the application.
  • the first module can also execute a preset strategy, so that the first module can determine the driving strategy of the vehicle based on the preset strategy.
  • when the first module recognizes a pedestrian crossing the road, the first module can display the first request information in the target area; the text of the first request information used to request the pedestrian to perform the target action can be to raise the left hand or the right hand, with Go displayed next to the text for raising the left hand and No Go displayed next to the text for raising the right hand, where Go means the pedestrian continues to cross the road and No Go means the pedestrian stops on the road.
  • the first module can execute the preset strategy that the vehicle stops on the road, and the vehicle does not start driving until the pedestrian has crossed the road; it can be understood that the implementation by which the first module determines the driving strategy of the vehicle can also be set according to the actual application scenario, which is not limited in this embodiment of the present application.
  • in the embodiment of the present application, the first module controls the target device in the vehicle to display the first request information in the target area, so that the pedestrian can take an action based on the first request information; by recognizing the pedestrian's action, the first module can determine the driving strategy of the vehicle, so that in an automatic driving system, even without the participation of a driver, the vehicle can interact with pedestrians, thereby avoiding traffic accidents and improving driving safety.
  • the first module can control the target device in the vehicle to display the first request information in the target area based on the display mode of the first request information.
  • the display mode of the first request information includes one or more of the following: the display position of the first request information, the display angle of the first request information, the display brightness of the first request information, the display color of the first request information, or the display duration of the first request information; it can be understood that the specific content of the display mode of the first request information can also be set according to the actual application scenario, which is not limited in this embodiment of the present application.
  • the display location of the first request information refers to the specific location where the first request information is displayed, and the location may be an area outside the vehicle, for example, the ground, a building, or the body of the vehicle, etc.
  • the body of the vehicle may include at least one of a front windshield, a rear windshield, or a window, and the vehicle may refer to this vehicle or another vehicle; it can be understood that the specific content of the display position of the first request information can also be set according to the actual application scenario, which is not limited in this embodiment of the application.
  • since the pedestrian is in front of the vehicle, when the vehicle displays the first request information on the rear windshield, the vehicle also needs to display the first request information on the front windshield; the first request information displayed on the front windshield is used to request the pedestrian to make the target action, while the first request information displayed on the rear windshield can be used to remind other vehicles behind; it can be understood that the specific implementation by which the vehicle displays the first request information on the front and rear windshields is not limited in this embodiment of the application.
  • the display angle of the first request information refers to the angle at which the first request information is displayed; from the perspective of the vehicle, the display angle may be, for example, 60 degrees to the right of the direction directly ahead of the vehicle, and from the perspective of the pedestrian, the display angle may be directly in front of the pedestrian; it can be understood that the specific content of the display angle of the first request information can also be set according to the actual application scenario, which is not limited in this embodiment of the present application.
  • the display brightness of the first request information refers to the brightness at which the first request information is displayed; for example, the brightness value can be 50, so that the brightness highlights the first request information and the pedestrian can see it in time; it can be understood that the specific value of the display brightness of the first request information can be set according to the actual application scenario, which is not limited in this embodiment of the present application.
  • the display color of the first request information refers to the color in which the first request information is displayed; for example, the color can be red, green, or yellow, so that the first request information displayed in this color stands out and the pedestrian can see it in a timely manner; it can be understood that the specific content of the display color of the first request information can be set according to the actual application scenario, which is not limited in this embodiment of the present application.
  • the display duration of the first request information refers to how long the first request information is displayed; for example, if the duration is 10 seconds, then even if the pedestrian did not notice the first request information in the first 5 seconds, the pedestrian can still notice it in the last 5 seconds, so that the vehicle can realize the intention interaction with the pedestrian; it can be understood that the specific value of the display duration of the first request information can be set according to the actual application scenario, which is not limited in this embodiment of the application.
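The display-mode parameters listed above can be grouped into one configuration object. The following is a minimal sketch; the field names and the value ranges checked are assumptions, while the example values (brightness 50, a 10-second duration, red/green/yellow) come from the text.

```python
from dataclasses import dataclass

@dataclass
class DisplayMode:
    """Display parameters for the first request information."""
    position: str = "ground"   # the ground, a building, or the vehicle body
    angle_deg: float = 0.0     # 0 = directly in front of the pedestrian
    brightness: int = 50       # highlighted so the pedestrian notices it in time
    color: str = "green"       # e.g. red, green, or yellow
    duration_s: float = 10.0   # how long the request stays visible

def is_valid(mode: DisplayMode) -> bool:
    """Sanity-check a display mode before handing it to the target device."""
    return (0 <= mode.brightness <= 100
            and mode.duration_s > 0
            and mode.color in {"red", "green", "yellow"})
```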
  • FIG. 10 is a schematic flowchart of a control method provided by the embodiment of the present application.
  • the first request information includes indication information for indicating the expected action, and the expected action is associated with the pedestrian's road participation intention; therefore, the first module controlling the target device in the vehicle to display the first request information in the target area can be understood as follows: when the target device is a projection system and the target area is the ground, the first module controls the projection system to project the first request information on the ground; or, when the target device is a display device and the target area is a display screen, the first module controls the display device to display the first request information on the display screen.
  • the first module recognizes the actions made by pedestrians, so that the first module can know the intention of pedestrians to participate in the road, so that the first module can determine the driving strategy of the vehicle.
  • the first module controls the projection system to project the first request information on the ground.
  • the projection condition can be understood as the ground having no water or snow; this is because, if there is water or snow on the ground, the first request information that the first module controls the projection system to display on the ground is blurred, so that the pedestrian cannot see the first request information clearly and therefore cannot express a road participation intention; it can be understood that the specific implementation by which the first module determines that the ground meets the projection condition can be set according to the actual application scenario, which is not limited in this embodiment of the application.
  • the projection system can be a system pre-installed in the vehicle by the manufacturer, or a system installed in the vehicle by the user himself.
  • the first module can project the first request information on the ground by controlling the projection system.
  • the embodiment of the application does not limit the implementation manner in which the first module controls the projection system to project the first request information on the ground.
  • for example, when the ground meets the projection condition, the first module controls the projection system to project the first request information on the ground.
  • for example, the first module recognizes, through the pedestrian's moving direction, that the forward direction of the vehicle is at a right angle to the pedestrian's moving direction, so the first module judges that the pedestrian is going to cross the road; therefore, when there is no water on the ground, the first module can control the projection system to project the first request information on the ground according to the display mode of the first request information, or in other words, the projection system instructs the projection device to project the first request information on the ground; it can be understood that the specific value of the angle between the forward direction of the vehicle and the moving direction of the pedestrian can also be set according to the actual application scenario, and the implementation by which the first module controls the projection system to project the first request information on the ground can also be set according to the actual application scenario, neither of which is limited in this embodiment of the present application.
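The two checks just described (pedestrian moving roughly perpendicular to the vehicle heading, ground free of water or snow) might be combined as in the sketch below. The angle threshold, the device names, and the function signature are illustrative assumptions, not the patent's implementation.

```python
def choose_target_device(ground_wet_or_snowy: bool, heading_deg: float,
                         pedestrian_dir_deg: float, threshold_deg: float = 10.0):
    """Pick the target device for the first request information.

    A pedestrian is assumed to be crossing when their moving direction is
    roughly perpendicular to the vehicle heading; projection onto the ground
    is only used when the ground is clear of water and snow.
    """
    crossing = abs(abs(heading_deg - pedestrian_dir_deg) - 90.0) <= threshold_deg
    if not crossing:
        return None                  # no crossing pedestrian, no interaction
    if ground_wet_or_snowy:
        return "display_screen"      # fall back to the on-vehicle display
    return "projection_system"       # project the request onto the ground
```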
  • the first module controls the display device to display the first request information on the display screen.
  • the display device may be a device pre-installed in the vehicle by the manufacturer, or a device installed in the vehicle by the user himself.
  • the display device may be, for example, a vehicle-mounted display device, and the display device may control the display screen.
  • the display screen can be a device pre-installed on the outside of the vehicle by the manufacturer, or a device installed on the outside of the vehicle by the user, for example, a display screen placed on the roof; the installation location and specific content of the display device, as well as the size and installation location of the display screen, can be set according to the actual application scenario, which is not limited in this embodiment of the present application.
  • the implementation by which the first module controls the display device to display the first request information on the display screen can refer to the corresponding description of S1001, and will not be repeated here; it can be understood that this implementation can also be set according to the actual application scenario, which is not limited in this embodiment of the present application.
  • S1003 The first module recognizes actions made by pedestrians.
  • the content of S1003 can refer to the corresponding description of S802, and will not be repeated here; it can be understood that the specific implementation by which the first module identifies the action made by the pedestrian can also be set according to the actual application scenario, which is not limited in this embodiment of the application.
  • the first module determines the driving strategy of the vehicle.
  • the first module can learn the pedestrian's road participation intention according to whether the action made by the pedestrian is the first expected action or the second expected action, so that the first module can determine the driving strategy of the vehicle.
  • the first expected action is associated with the pedestrian's first road participation intention
  • the second expected action is associated with the pedestrian's second road participation intention.
  • for example, when the action made by the pedestrian is the first expected action, the first road participation intention may be that the pedestrian continues to walk at the next moment, and when the action made by the pedestrian is the second expected action, the second road participation intention may be that the pedestrian stops and waits at the next moment; alternatively, when the action made by the pedestrian is the first expected action, the first road participation intention may be that the pedestrian stops and waits at the next moment, and when the action made by the pedestrian is the second expected action, the second road participation intention may be that the pedestrian continues to walk at the next moment.
  • for example, when the pedestrian's action is to raise the left hand, raising the left hand matches the first expected action; since the first road participation intention is that the pedestrian continues to walk at the next moment, the driving strategy of the vehicle determined by the first module is to stop and wait at the next moment. When the pedestrian's action is to raise the right hand, raising the right hand matches the second expected action; since the second road participation intention is that the pedestrian stops and waits at the next moment, the driving strategy of the vehicle determined by the first module is to keep driving.
  • the content of the pedestrian's road participation intention indicated by the first expected action or the second expected action may also be set according to the actual application conditions, which is not limited in this embodiment of the present application.
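One way to encode the association in the example above (left hand = first expected action = continue walking, right hand = second expected action = stop and wait) is a small lookup. The string labels and function name are illustrative assumptions; the mapping itself follows the text.

```python
# One possible association, following the example in the text.
EXPECTED_ACTIONS = {
    "raise_left_hand":  "continue_walking",  # first road participation intention
    "raise_right_hand": "stop_and_wait",     # second road participation intention
}

def determine_driving_strategy(action):
    """Return (pedestrian intention, vehicle driving strategy) for an action."""
    intention = EXPECTED_ACTIONS.get(action)
    if intention == "continue_walking":
        return intention, "stop_and_wait"     # yield while the pedestrian crosses
    if intention == "stop_and_wait":
        return intention, "continue_driving"  # pedestrian waits, vehicle proceeds
    return None, None                         # not an expected action
```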
  • FIG. 11 is a schematic diagram of an intention interaction provided by the embodiment of the present application.
  • the decision-making system can activate the projection system according to the first request information and control the projection system to project the first request information on the ground; therefore, after the pedestrian makes an action based on the first request information, the decision-making system can instruct the perception system to track the pedestrian and recognize the action made by the pedestrian.
  • when the perception system judges that the action made by the pedestrian is the first expected action or the second expected action, the decision-making system can determine that the pedestrian's road participation intention is, respectively, the first road participation intention or the second road participation intention.
  • the decision-making system can make a decision according to the pedestrian's first road participation intention or second road participation intention, and send the decision result to the execution mechanism; the execution mechanism can be a braking system or a steering system, for example, when the decision result is to stop and wait, the braking system controls the vehicle to stop.
  • in the embodiment of the present application, when the ground meets the projection condition, the first module can control the projection system to project the first request information on the ground, or the first module can control the display device to display the first request information; the first module then recognizes the action made by the pedestrian, and since the expected action is associated with the pedestrian's road participation intention, the first module can determine the driving strategy of the vehicle according to whether the action made by the pedestrian is the expected action. In this way, even without the driver's participation, the pedestrian and the self-driving vehicle can achieve effective intention interaction, so that the self-driving vehicle can correctly understand the pedestrian's road participation intention and make correct driving decisions, which is especially important for the road safety of autonomous vehicles.
  • FIG. 12 is a schematic flow chart of a control method provided by the embodiment of the present application.
  • the first request information includes indication information for indicating multiple expected actions, and the multiple expected actions are associated with multiple road participation intentions of the pedestrian, so that the first module can learn the pedestrian's road participation intention by identifying the action made by the pedestrian, and then the first module can determine the driving strategy of the vehicle; as shown in Figure 12, the following steps can be included:
  • the first module controls the target device in the vehicle to display the first request information in the target area.
  • the first request information includes indication information for indicating multiple expected actions, where the multiple expected actions are associated with multiple road participation intentions of the pedestrian; for example, when a pedestrian crosses the road, if the expected action is raising the left hand, the road participation intention is that the pedestrian continues to cross the road at the next moment; if the expected action is raising the right hand, the road participation intention is that the pedestrian stops and waits at the next moment; and if the expected action is raising both hands, the road participation intention is that the pedestrian runs across the road at the next moment.
  • the specific corresponding relationship between the expected action and the road participation intention can also be set according to the actual application scenario, which is not limited in the embodiment of the present application.
  • S1202 The first module recognizes actions made by pedestrians.
  • the first module can recognize the action made by the pedestrian and judge whether it is any one of the expected actions; if the action made by the pedestrian is not any of the expected actions, the first module may execute S1203.
  • the first module controls the target device in the vehicle to display the second request information in the target area.
  • the first module, by identifying the action made by the pedestrian, finds that the action is not any of the expected actions, or that the pedestrian did not perform any action, so the first module cannot obtain the pedestrian's road participation intention.
  • the decision-making system on the vehicle can instruct the projection system to switch the projection content, that is, the first module can control the target device in the vehicle to display the second request information in the target area; since the second request information is used to instruct the pedestrian to take the first road participation behavior, the first module can directly tell the pedestrian, through the displayed second request information, the action to be taken at the next moment, that is, the pedestrian makes the first road participation behavior at the next moment, so that invalid waiting time can be effectively saved and road traffic efficiency can be improved.
  • the first road participation behavior can be to stop and wait, continue to walk, walk across the road, or run across the road, etc.; it can be understood that the specific content of the first road participation behavior can also be set according to the actual application scenario, which is not limited in this embodiment of the application.
  • the first module can also display the action that the pedestrian needs to perform, so that the pedestrian makes the corresponding road participation behavior at the next moment and the first module can also determine the driving strategy of the vehicle.
  • the second request information includes text information, static graphic information, video information, or dynamic graphic information; it can be understood that the specific content of the second request information can also be set according to the actual application scenario, which is not limited in this embodiment of the application.
  • the first module may control the target device in the vehicle to display the second request information in the target area based on the display manner of the second request information.
  • the display mode of the second request information includes one or more of the following: the display position of the second request information, the display angle of the second request information, the display brightness of the second request information, the display color of the second request information, or the display duration of the second request information; the content of the display mode of the second request information can refer to the corresponding description of the display mode of the first request information, and will not be repeated here; it can be understood that the specific content of the display mode of the second request information can also be set according to the actual application scenario, which is not limited in this embodiment of the present application.
  • the display mode of the second request information may also include a countdown; for example, the countdown is used to remind the pedestrian to pass the vehicle quickly, or the countdown is the time for the vehicle to pass the pedestrian, that is, the time for which the pedestrian needs to wait.
  • the static graphic information or dynamic graphic information can also be a warning sign used to indicate the driving strategy that the vehicle is about to execute; for example, the warning sign is a straight-ahead traffic sign, which indicates that the vehicle is about to drive straight ahead; it can be understood that the specific content of the warning sign can be set according to the actual application scenario, which is not limited in this embodiment of the present application.
  • FIG. 13 is a schematic diagram of a second request information provided by the embodiment of the present application.
  • as shown in FIG. 13, the first road participation behavior is walking: directly in front of the pedestrian, the vehicle controls the projection system to project an arrow symbol pointing in the forward direction, with the text "pedestrians walking" displayed next to the symbol; both the forward arrow symbol and the text inform the pedestrian that they need to walk at the next moment, and that the vehicle will stop at the next moment. The "5" displayed next to the forward arrow symbol can be understood as the time for the pedestrian to pass the vehicle being 5 seconds, or as informing the pedestrian that during these 5 seconds the pedestrian needs to pass the vehicle quickly, so that the vehicle can continue driving after 5 seconds.
  • in addition to projecting the forward arrow symbol or the text "pedestrians walking", the vehicle can also project the action of raising the left hand, which indicates that the pedestrian needs to walk at the next moment; in this way, by recognizing the pedestrian's action of raising the left hand, the vehicle can further determine the pedestrian's intention to walk at the next moment, so that the vehicle will stop driving at the next moment.
  • FIG. 14 is a schematic diagram of a second request information provided by the embodiment of the present application.
  • as shown in FIG. 14, the first road participation behavior is not walking: directly in front of the pedestrian, the vehicle controls the projection system to project the English word STOP, with the text "pedestrians not going" displayed next to it; both the STOP sign and the text indicate that the pedestrian needs to stop at the next moment. The "10" displayed next to the STOP sign can be understood as the time for the pedestrian to stop being 10 seconds, or as informing the pedestrian that during these 10 seconds the vehicle will pass quickly, so that after 10 seconds the pedestrian can continue to walk.
  • in addition to projecting the STOP sign or the text "pedestrians not going", the vehicle can also project the action of raising the right hand, which indicates that the pedestrian needs to stop at the next moment; in this way, by recognizing the pedestrian's action of raising the right hand, the vehicle can further determine the pedestrian's intention to stop at the next moment, so that the vehicle will continue to drive at the next moment.
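The two projections in FIG. 13 and FIG. 14 can be described by one small builder. The dictionary keys and label strings below are illustrative; the symbol, text, gesture, and countdown content follow the two figures as described in the text.

```python
def build_second_request(behavior: str, countdown_s: int) -> dict:
    """Compose the second request information content for the projection system."""
    if behavior == "walk":      # FIG. 13: forward arrow, "pedestrians walking", e.g. 5 s
        return {"symbol": "forward_arrow", "text": "pedestrians walking",
                "gesture": "raise_left_hand", "countdown_s": countdown_s}
    if behavior == "no_walk":   # FIG. 14: STOP sign, "pedestrians not going", e.g. 10 s
        return {"symbol": "STOP", "text": "pedestrians not going",
                "gesture": "raise_right_hand", "countdown_s": countdown_s}
    raise ValueError(f"unsupported first road participation behavior: {behavior}")
```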
  • when the target device is a projection system and the target area is the ground, the first module can control the projection system to project the second request information on the ground; or, when the target device is a display device and the target area is a display screen, the first module can control the display device to display the second request information on the display screen.
  • the first module determines the driving strategy of the vehicle.
  • after the first module displays the second request information, the pedestrian notices the second request information; therefore, the pedestrian can make the first road participation behavior based on the second request information, so that the first module can determine the vehicle's driving strategy.
  • the target area can also display the actions that pedestrians need to make.
  • the first module can further determine the pedestrian's road participation behavior by continuously identifying the actions made by the pedestrian; for example, at the previous moment the driving strategy of the vehicle was to stop and wait, and the action displayed in the target area that the pedestrian needs to perform is raising the left hand, which indicates that the pedestrian will stop and wait at the next moment; therefore, when the vehicle recognizes that the pedestrian has raised the left hand and/or that the pedestrian has stopped to wait, the vehicle can change from the stopped and waiting state to the driving state.
  • FIG. 15 is a schematic diagram of an intention interaction provided by the embodiment of the present application.
  • the decision-making system can instruct the projection system to switch the projection content to the second request information; since the second request information is used to instruct the pedestrian to make the first road participation behavior, the perception system can re-track the pedestrian and re-identify the pedestrian's action, so that the decision-making system can determine the pedestrian's road participation intention; therefore, the decision-making system can make a decision according to the pedestrian's road participation intention and send the decision result to the execution mechanism, which can be a braking system or a steering system, etc.; for example, when the decision result is to stop and wait, the braking system controls the vehicle to stop.
  • the vehicle can perform S1201-S1204 repeatedly until the vehicle determines the pedestrian's road participation intention.
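The S1201-S1204 cycle (display the first request, recognize the action, fall back to the second request, repeat) can be sketched as below. The callable interface, the request labels, and the round limit are assumptions for illustration.

```python
def interact_until_intention(display, recognize, expected_actions, max_rounds=3):
    """Repeat S1201-S1204: display the first request information, recognize the
    pedestrian's action, and switch to the second request information whenever
    no expected action is observed; stop once an intention is obtained."""
    for _ in range(max_rounds):
        display("first_request")
        action = recognize()
        if action in expected_actions:
            return action              # road participation intention obtained
        display("second_request")      # directly instruct the pedestrian instead
    return None                        # intention still unknown after max_rounds
```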
  • FIG. 16 is a schematic flow chart of a control method provided by the embodiment of the present application.
  • the decision-making system can carry out path planning according to the intention of pedestrians.
  • the decision-making system determines the content and location of the interactive projection; further, the decision-making system controls the projection system to determine information such as the projection color and light intensity, and the projection system then instructs the projection device to adjust the light for projection; at the same time, the decision-making system instructs the perception system to track the pedestrian and recognize the pedestrian's action.
  • when the perception system recognizes that the action made by the pedestrian matches the expected action, the perception system can send the matching result to the decision-making system, so that the decision-making system can plan the path according to the matching result.
  • otherwise, the perception system can notify the decision-making system to switch the projection content, that is, the decision-making system re-determines the content and location of the interactive projection, the projection system re-determines information such as the projection color and light intensity, and the projection device re-adjusts the light to perform the projection; the perception system then re-tracks the pedestrian and recognizes the pedestrian's action, and when the perception system recognizes that the action made by the pedestrian matches the expected action, the perception system sends the matching result to the decision-making system, so that the decision-making system performs path planning according to the matching result.
  • FIG. 17 is a schematic structural diagram of a control device provided by an embodiment of the present application.
  • the device includes a processor 1700 , a memory 1701 and a transceiver 1702 .
  • the processor 1700 is responsible for managing the bus architecture and general processing, the memory 1701 can store data used by the processor 1700 when performing operations, and the transceiver 1702 is used to receive and send data under the control of the processor 1700 for data communication with the memory 1701 .
  • the bus architecture may include any number of interconnected buses and bridges; specifically, one or more processors represented by the processor 1700 and various memory circuits represented by the memory 1701 are linked together.
  • the bus architecture can also link together various other circuits such as peripherals, voltage regulators, and power management circuits, etc., which are well known in the art and therefore will not be further described herein.
  • the bus interface provides the interface.
  • the processor 1700 is responsible for managing the bus architecture and general processing, and the memory 1701 can store data used by the processor 1700 when performing operations.
  • each step of the control process may be implemented by an integrated logic circuit of hardware in the processor 1700 or instructions in the form of software.
  • the processor 1700 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
  • the software module can be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 1701, and the processor 1700 reads the information in the memory 1701, and completes the steps of the signal processing process in combination with its hardware.
  • the processor 1700 is configured to read the program in the memory 1701 and execute the method flow described in the above embodiment.
  • FIG. 18 is a schematic structural diagram of another control device provided in the embodiment of the present application.
  • the control device provided in the embodiment of the present application can be in a vehicle.
  • the control device 1800 can be used in communication equipment, circuits, hardware components, or chips. The control device 1800 may include: a control unit 1801, an identification unit 1802, and a processing unit 1803, where the control unit 1801 is used to support the control device in performing the information control step, the identification unit 1802 is used to support the control device in performing the information identification step, and the processing unit 1803 is used to support the control device in performing the information processing step.
  • the control unit 1801 is configured to control the target device in the vehicle to display first request information in the target area, where the first request information is used to request the pedestrian to perform a target action, the target action is used to express the pedestrian's road participation intention, and the target area is within the pedestrian's visible range; the identification unit 1802 is used to recognize the action made by the pedestrian; the processing unit 1803 is used to determine the driving strategy of the vehicle according to the recognition result.
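The division of the control device into a control unit, an identification unit, and a processing unit can be illustrated with a minimal sketch. This is an assumption-laden illustration only: all class, method, and strategy names below are hypothetical and are not part of the embodiment.

```python
class ControlDevice:
    """Illustrative sketch of a control device with the three units described above.

    All names are hypothetical; `target_device` and `recognizer` stand in for the
    projection/display system and the perception system, respectively.
    """

    def __init__(self, target_device, recognizer):
        self.target_device = target_device
        self.recognizer = recognizer

    def control(self, request_info, target_area):
        # Control unit: display the first request information in the target area.
        self.target_device.display(request_info, target_area)

    def identify(self):
        # Identification unit: recognize the action made by the pedestrian.
        return self.recognizer.recognize_action()

    def process(self, recognized_action, expected_actions):
        # Processing unit: map a recognized expected action to a driving strategy;
        # fall back to a conservative strategy when no expected action matched.
        if recognized_action in expected_actions:
            return expected_actions[recognized_action]
        return "stop_and_wait"
```

A usage round would call `control`, then `identify`, then `process` with a mapping from expected actions to strategies.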
  • the first request information includes indication information for indicating the expected action, and the expected action is associated with the road participation intention; the processing unit 1803 is specifically configured to determine the driving strategy of the vehicle when the action made by the pedestrian is the expected action.
  • the expected action includes a first expected action and a second expected action
  • the first expected action is associated with the pedestrian's first road participation intention
  • the second expected action is associated with the pedestrian's second road participation intention
  • the processing unit 1803 is specifically configured to: determine the driving strategy of the vehicle according to whether the pedestrian's action is the first expected action or the second expected action.
  • the first request information includes indication information for indicating multiple expected actions, and the multiple expected actions are associated with multiple road participation intentions; the control unit 1801 is further configured to: when the action made by the pedestrian is not any one of the multiple expected actions, control the target device in the vehicle to display the second request information in the target area, where the second request information is used to instruct the pedestrian to perform the first road participation behavior.
  • the processing unit 1803 is specifically configured to: determine the driving strategy of the vehicle according to the pedestrian's first road participation behavior.
  • the second request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
  • the first request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
  • the target device is a projection system
  • the target area is an area outside the vehicle.
  • the target area is the ground
  • the control unit 1801 is specifically configured to: control the projection system to project the first request information on the ground when the ground satisfies the projection condition.
  • the target device is a display device
  • the target area is a display screen
  • the processing unit 1803 is specifically configured to: control the display device to display the first request information on the display screen.
  • the control device may further include a storage unit 1804.
  • the control unit 1801 , the identification unit 1802 , the processing unit 1803 and the storage unit 1804 are connected through a communication bus.
  • the storage unit 1804 may include one or more memories, and the memories may be devices used to store programs or data in one or more devices and circuits.
  • the storage unit 1804 can exist independently and be connected to the processing unit 1803 of the control device through a communication bus; the storage unit 1804 can also be integrated with the control unit 1801, the identification unit 1802, and the processing unit 1803.
  • FIG. 19 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • the chip 190 includes at least one processor 1910 and a communication interface 1930 .
  • the communication interface 1930 is used to input data from the outside to the chip 190 or output data from the chip 190 to the outside.
  • the processor 1910 is configured to run computer programs or instructions, so as to implement the above method embodiments.
  • chip 190 includes memory 1940 .
  • the memory 1940 stores the following elements: executable modules or data structures, or subsets thereof, or extensions thereof.
  • the memory 1940 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1910 .
  • a part of the memory 1940 may also include a non-volatile random access memory (non-volatile random access memory, NVRAM).
  • the processor 1910 may control the decision-making system, the perception system, the projection system or the projection device to perform corresponding operations in the foregoing method embodiments by invoking the operation instructions stored in the memory 1940 .
  • the operation instructions stored in the memory 1940 can be instructions for controlling the decision-making system, so that the processor 1910 can control the decision-making system by retrieving the instructions from the memory 1940; the decision-making system can then instruct the perception system to perceive pedestrian information, or activate the projection system, and further, the projection system controls the projection device to project the first request information or the second request information.
  • the processor 1910, the communication interface 1930, and the memory 1940 are coupled together through the bus system 1919.
  • the bus system 1919 may include not only a data bus, but also a power bus, a control bus, and a status signal bus.
  • the various buses are labeled bus system 1919 in FIG. 19 .
  • the steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a mature storage medium in the field such as random access memory, read-only memory, programmable read-only memory, or electrically erasable programmable read only memory (EEPROM).
  • the storage medium is located in the memory 1940, and the processor 1910 reads the information in the memory 1940, and completes the steps of the above method in combination with its hardware.
  • the instructions stored in the memory for execution by the processor may be implemented in the form of computer program products.
  • the computer program product may be written in the memory in advance, or may be downloaded and installed in the memory in the form of software.
  • a computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, special purpose computer, computer network, or other programmable apparatus.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, or microwave).
  • The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • available media may include magnetic media (e.g., floppy disks, hard disks, or tapes), optical media (e.g., digital versatile discs (DVDs)), or semiconductor media (e.g., solid state disks (SSDs)).
  • Computer-readable media may include computer storage media and communication media, and may include any medium that can transfer a computer program from one place to another.
  • a storage medium may be any target medium that can be accessed by a computer.
  • the computer-readable medium may include compact disc read-only memory (CD-ROM), RAM, ROM, EEPROM, or other optical disc storage; the computer-readable medium may also include magnetic disk memory or other magnetic disk storage devices.
  • any connection is properly termed a computer-readable medium.
  • if the software is transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • As used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method applied to the field of autonomous driving technology, the method including: controlling a target device in a vehicle to display first request information in a target area, where the first request information is used to request a pedestrian to perform a target action, and the target action expresses the pedestrian's road participation intention; recognizing the action made by the pedestrian; and determining the driving strategy of the vehicle according to the recognition result. A control apparatus is also provided. Thus, even without a driver's involvement, the vehicle can use the first request information to ask for the pedestrian's road participation intention, so that in an autonomous driving scenario the vehicle can interact with the pedestrian's intention, obtain an accurate road participation intention of the pedestrian, and derive a suitable driving strategy, thereby improving driving safety.

Description

Control method and apparatus
This application claims priority to Chinese Patent Application No. 202110574297.2, entitled "Control method and apparatus", filed with the Chinese Patent Office on May 25, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of autonomous driving technology, and in particular to a control method and apparatus.
Background
During vehicle driving, the determination and prediction of road participants' intentions by an autonomous driving system is the basis of the system's path planning, and is also an important condition for road safety.
Typically, when road participants include pedestrians, the autonomous driving system can predict a pedestrian's intention based on, for example, the pedestrian's direction of movement. For example, the autonomous driving system can use a machine learning algorithm to estimate the pedestrian's motion trajectory based on the direction of movement, thereby predicting the pedestrian's intention.
However, the above prediction method based on pedestrian motion may produce biased predictions of the pedestrian's intention, reducing driving safety.
Summary
Embodiments of the present application provide a control method and apparatus, applied to the field of autonomous driving technology. The method includes: controlling a target device in a vehicle to display first request information in a target area. Since the first request information is used to request a pedestrian to perform a target action, and the target action expresses the pedestrian's road participation intention, the driving strategy of the vehicle can be determined by recognizing the action made by the pedestrian and using the recognition result. In this way, even without a driver's involvement, the vehicle can use the first request information to ask for the pedestrian's road participation intention, so that in an autonomous driving scenario the vehicle can interact with the pedestrian's intention, obtain an accurate road participation intention, and derive a suitable driving strategy, thereby improving driving safety.
In a first aspect, an embodiment of the present application provides a control method, including: controlling a target device in a vehicle to display first request information in a target area, where the first request information is used to request a pedestrian to perform a target action, the target action is used to express the pedestrian's road participation intention, and the target area is within the pedestrian's visible range; recognizing the action made by the pedestrian; and determining the driving strategy of the vehicle according to the recognition result. In this way, even without a driver's involvement, the vehicle can use the first request information to ask for the pedestrian's road participation intention, so that in an autonomous driving scenario the vehicle can interact with the pedestrian's intention, obtain an accurate road participation intention, and derive a suitable driving strategy, thereby improving driving safety.
In a possible implementation, the first request information being used to request the pedestrian to perform a target action includes: the first request information includes indication information for indicating an expected action, and the expected action is associated with the pedestrian's road participation intention; determining the driving strategy of the vehicle according to the recognition result includes: determining the driving strategy of the vehicle when the action made by the pedestrian is the expected action. In this way, the vehicle instructs the pedestrian to perform the expected action, and can then determine the driving strategy of the vehicle based on the action made by the pedestrian, thereby improving driving safety.
In a possible implementation, the expected action includes a first expected action and a second expected action, the first expected action is associated with the pedestrian's first road participation intention, and the second expected action is associated with the pedestrian's second road participation intention; determining the driving strategy of the vehicle according to the recognition result includes: determining the driving strategy of the vehicle according to whether the action made by the pedestrian is the first expected action or the second expected action. In this way, the driving strategy of the vehicle can be determined according to whether the pedestrian's action is the first or the second expected action, thereby improving driving safety.
In a possible implementation, the first request information being used to request the pedestrian to perform a target action includes: the first request information includes indication information for indicating multiple expected actions, and the multiple expected actions are associated with multiple road participation intentions of the pedestrian; the method further includes: when the action made by the pedestrian is not any one of the multiple expected actions, controlling the target device in the vehicle to display second request information in the target area, where the second request information is used to instruct the pedestrian to perform a first road participation behavior. By directly telling the pedestrian which road participation behavior to perform, ineffective waiting time is saved and road traffic efficiency is improved.
In a possible implementation, determining the driving strategy of the vehicle according to the recognition result includes: determining the driving strategy of the vehicle according to the pedestrian performing the first road participation behavior. In this way, the vehicle can interact with the pedestrian's intention, thereby improving driving safety.
In a possible implementation, the second request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
In a possible implementation, the first request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
In a possible implementation, the target device is a projection system, and the target area is an area outside the vehicle.
In a possible implementation, the target area is the ground, and controlling the target device in the vehicle to display the first request information in the target area includes: controlling the projection system to project the first request information on the ground when the ground satisfies a projection condition. In this way, the pedestrian can notice the first request information projected on the ground and express his or her road participation intention to the vehicle, so that the vehicle interacts with the pedestrian's intention, thereby improving driving safety.
In a possible implementation, the target device is a display device, the target area is a display screen, and controlling the target device in the vehicle to display the first request information in the target area includes: controlling the display device to display the first request information on the display screen. In this way, the pedestrian can notice the first request information displayed on the screen and express his or her road participation intention to the vehicle, so that the vehicle interacts with the pedestrian's intention, thereby improving driving safety.
In a second aspect, an embodiment of the present application provides a control apparatus, which can be used to perform the operations in the first aspect and any possible implementation of the first aspect. For example, the apparatus may include modules or units for performing each operation in the first aspect or any possible implementation of the first aspect, such as a control unit, an identification unit, and a processing unit.
Exemplarily, the control unit is configured to control a target device in the vehicle to display first request information in a target area, where the first request information is used to request a pedestrian to perform a target action, the target action is used to express the pedestrian's road participation intention, and the target area is within the pedestrian's visible range; the identification unit is configured to recognize the action made by the pedestrian; and the processing unit is configured to determine the driving strategy of the vehicle according to the recognition result.
In a possible implementation, the first request information includes indication information for indicating an expected action, and the expected action is associated with the road participation intention; the processing unit is specifically configured to determine the driving strategy of the vehicle when the action made by the pedestrian is the expected action.
In a possible implementation, the expected action includes a first expected action and a second expected action, the first expected action is associated with the pedestrian's first road participation intention, and the second expected action is associated with the pedestrian's second road participation intention; the processing unit is specifically configured to determine the driving strategy of the vehicle according to whether the action made by the pedestrian is the first expected action or the second expected action.
In a possible implementation, the first request information includes indication information for indicating multiple expected actions, and the multiple expected actions are associated with multiple road participation intentions; the control unit is further configured to: when the action made by the pedestrian is not any one of the multiple expected actions, control the target device in the vehicle to display second request information in the target area, where the second request information is used to instruct the pedestrian to perform a first road participation behavior.
In a possible implementation, the processing unit is specifically configured to determine the driving strategy of the vehicle according to the pedestrian performing the first road participation behavior.
In a possible implementation, the second request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
In a possible implementation, the first request information includes one or more of text information, static graphic information, video information, or dynamic graphic information.
In a possible implementation, the target device is a projection system, and the target area is an area outside the vehicle.
In a possible implementation, the target area is the ground, and the control unit is specifically configured to control the projection system to project the first request information on the ground when the ground satisfies a projection condition.
In a possible implementation, the target device is a display device, the target area is a display screen, and the processing unit is specifically configured to control the display device to display the first request information on the display screen.
In a third aspect, an embodiment of the present application provides a control apparatus, including a memory and a processor, where the memory stores computer program instructions and the processor runs the computer program instructions to implement the method described in the first aspect and its various possible implementations.
In a fourth aspect, an embodiment of the present application provides a vehicle, including the apparatus described in the second aspect and its various possible implementations.
In a possible implementation, the vehicle further includes a perception system and a target device, where the target device is a projection system or a display device.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program or instructions that, when run on a computer, cause the computer to perform the method described in the first aspect and its various possible implementations.
In a sixth aspect, an embodiment of the present application provides a computer program product that, when run on a processor, causes a control apparatus to perform the method described in the first aspect and its various possible implementations.
In a seventh aspect, an embodiment of the present application provides a control system, including the apparatus described in the second aspect and its various possible implementations.
In an eighth aspect, the present application provides a chip or chip system, including at least one processor and a communication interface interconnected by a line, where the at least one processor is configured to run a computer program or instructions to implement the method described in the first aspect and its various possible implementations. The communication interface in the chip may be an input/output interface, a pin, a circuit, or the like.
In a possible implementation, the chip or chip system described above further includes at least one memory storing instructions. The memory may be a storage unit inside the chip, such as a register or a cache, or a storage unit of the chip (e.g., a read-only memory or a random access memory).
It should be understood that the second to eighth aspects of the present application correspond to the technical solution of the first aspect, and the beneficial effects obtained by each aspect and its corresponding feasible implementations are similar and will not be repeated.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a pedestrian crossing a road according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a vehicle intention reminder in a possible design;
FIG. 3 is a schematic diagram of another vehicle intention reminder in a possible design;
FIG. 4 is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an in-vehicle projection interaction system according to an embodiment of the present application;
FIG. 6 is a functional block diagram of a possible vehicle according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a computer system according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of a control method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of first request information according to an embodiment of the present application;
FIG. 10 is a schematic flowchart of a control method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of intention interaction according to an embodiment of the present application;
FIG. 12 is a schematic flowchart of a control method according to an embodiment of the present application;
FIG. 13 is a schematic diagram of second request information according to an embodiment of the present application;
FIG. 14 is a schematic diagram of second request information according to an embodiment of the present application;
FIG. 15 is a schematic diagram of intention interaction according to an embodiment of the present application;
FIG. 16 is a schematic flowchart of a control method according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a control apparatus according to an embodiment of the present application;
FIG. 18 is a schematic structural diagram of another control apparatus according to an embodiment of the present application;
FIG. 19 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
To clearly describe the technical solutions of the embodiments of the present application, the words "first", "second", and so on are used in the embodiments of the present application to distinguish identical or similar items with substantially the same functions and effects. For example, the first value and the second value are only used to distinguish different values, and their order is not limited. Those skilled in the art can understand that the words "first", "second", etc. do not limit the quantity or execution order, and do not necessarily indicate a difference.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the present application should not be interpreted as more preferred or advantageous than other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete manner.
In the embodiments of the present application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
During vehicle driving, a scenario may occur in which road participants cross the road; road participants include pedestrians, cyclists, and the like. Taking a pedestrian as an example, FIG. 1 is a schematic diagram of a pedestrian crossing a road according to an embodiment of the present application. As shown in FIG. 1, when a pedestrian crosses the road, two situations may occur, from which the intention of the vehicle or the pedestrian can be determined.
Situation 1: The driver stops the vehicle to let the pedestrian pass; therefore, the vehicle's intention is to stop, and the pedestrian's intention is to pass.
Situation 2: The driver notices the pedestrian crossing the road and stops the vehicle; at the same time, the pedestrian notices the moving vehicle and also stops. When both the vehicle and the pedestrian have stopped, the driver or the pedestrian can interact through simple gestures or actions so that the yielded party can pass quickly. For example, if the pedestrian gestures that the vehicle may proceed, the vehicle's intention is to continue driving and the pedestrian's intention is to stop.
In an autonomous driving system, however, there may be no driver involved, so the autonomous driving system and the pedestrian cannot interact through gestures or actions as a driver and a pedestrian can. Therefore, in determining and predicting the intentions of road traffic participants, determining and predicting pedestrians' intentions is particularly difficult.
In a possible implementation, the autonomous driving system can predict a road participant's intention based on information such as the participant's direction of movement, speed, and road topology. For example, the autonomous driving system can use a machine learning algorithm to estimate the participant's motion trajectory based on such information, thereby predicting the pedestrian's intention.
However, the above motion-based prediction method is applicable when the road participant has a certain motion trajectory and motion trend; it cannot predict the intention of a stationary road participant. If a prediction algorithm is applied to a stationary road participant, the motion-based prediction algorithm may not obtain effective input and thus cannot accurately predict and determine the participant's intention. Moreover, the motion-based prediction method may produce biased predictions of the pedestrian's intention, reducing driving safety; a biased prediction may also lead to misjudged scenarios, causing danger and even traffic accidents.
In a possible implementation, in a reverse-parking scenario of an autonomous vehicle, the autonomous driving system can use a light signal projection device, based on vehicle lamp technology, to project warning information on the ground to remind road participants. The light signal projection device includes a light source, a transparent lens, and a reflector; the light beam emitted by the light source is reflected by the reflector and then passes through different shapes, symbols, patterns, or text designed on the transparent lens, so that information reminding road participants can be displayed on the ground.
Exemplarily, FIG. 2 is a schematic diagram of a vehicle intention reminder in a possible design. As shown in FIG. 2, in a reverse-parking scenario, the warning information is reversing indication information. The autonomous driving system uses the light signal projection device to project the reversing indication information onto the reversing area at the rear of the vehicle, reminding road participants behind or crossing behind the vehicle. The light signal projection device can be installed at the rear of the vehicle; the reversing area can be the rectangular area formed between the rear of the vehicle and the parking lines 1; and the reversing indication information can be text, shapes, or symbols indicating that the vehicle is reversing or about to reverse.
Exemplarily, FIG. 3 is a schematic diagram of another vehicle intention reminder in a possible design. As shown in FIG. 3, in a scenario where the vehicle is stationary, for example parked at the roadside, the warning information is door-opening indication information. The autonomous driving system uses the light signal projection device to project the door-opening indication information onto the ground to remind road participants beside the vehicle. The light signal projection device can be installed on the vehicle door, and the door-opening indication information can be text, symbols, or shapes indicating that the door is opening or about to open.
However, based on the light signal projection device, the autonomous driving system only achieves a one-way warning to road participants; it cannot ensure that road participants notice the warning information, understand it, or act according to it. Thus, the autonomous driving system cannot accurately determine the intentions of road participants.
On this basis, embodiments of the present application provide a control method and apparatus applied to the field of autonomous driving technology. The method includes: controlling a display device in the vehicle to display first request information in a target area. Since the first request information is used to request a pedestrian to perform a target action, and the target action expresses the pedestrian's road participation intention, the driving strategy of the vehicle can be determined by recognizing the action made by the pedestrian and using the recognition result. In this way, even without a driver's involvement, the vehicle can use the first request information to ask for the pedestrian's road participation intention, so that in an autonomous driving scenario the vehicle can interact with the pedestrian's intention and obtain a suitable driving strategy, thereby improving driving safety.
The method of the embodiments of the present application can be applied to scenarios where a pedestrian intends to cross the road or stands at the roadside; through this method, an autonomous vehicle can interact with the pedestrian's intention. Exemplarily, FIG. 4 is a schematic diagram of an application scenario according to an embodiment of the present application. As shown in FIG. 4, the vehicle recognizes that a pedestrian is standing at the roadside, but the vehicle cannot determine whether, at the next moment, the pedestrian will remain standing or cross the road. Therefore, the vehicle can display the first request information on the ground, and the pedestrian can act according to the first request information. In this way, even without a driver's involvement, the vehicle can understand the pedestrian's road participation intention by recognizing the action made by the pedestrian, thereby achieving intention interaction with the pedestrian.
On the basis of the application scenario shown in FIG. 4, FIG. 5 is a schematic diagram of an in-vehicle projection interaction system according to an embodiment of the present application. As shown in FIG. 5, the system includes a decision-making system, a perception system, and a projection control system (hereinafter referred to as the projection system), and the projection system includes a projection device. The system includes an interface between the decision-making system and the perception system, an interface between the decision-making system and the projection system, and an interface between the projection system and the projection device.
As shown in FIG. 5, based on information transferred through the interface with the perception system, the decision-making system can activate the projection device; the projection system, based on information transferred through its interface with the decision-making system, can instruct the projection device to project request information; and the projection device, based on information transferred through its interface with the projection system, can display the request information in the target area, where the request information may include the first request information or the second request information.
Regarding the information transferred through the interface between the decision-making system and the perception system: on the one hand, it may be the pedestrian information that the decision-making system instructs the perception system to perceive, including but not limited to tracking the pedestrian, recognizing the action made by the pedestrian, or recognizing the duration of the action; on the other hand, it may be the perceived action information that the perception system inputs to the decision-making system, including but not limited to whether the action made by the pedestrian matches an expected action. Here, the pedestrian's action matching an expected action can be understood as the action being the first expected action or the second expected action; the action not matching an expected action can be understood as the action not being any one of the multiple expected actions, or the pedestrian performing no action.
The information transferred through the interface between the decision-making system and the projection system, and through the interface between the projection system and the projection device, is described in the following two cases:
Case 1: When the decision-making system cannot recognize the pedestrian's intention, the information transferred through the interface between the decision-making system and the projection system can be the request information determined by the decision-making system, i.e., the first request information determined by the decision-making system; the projection system can then instruct the projection device based on this first request information. Since the information transferred through the interface between the projection device and the projection system can be the first request information displayed in the target area, the projection device can display the first request information in the target area based on that information. The first request information includes at least one of: the projected content, the display position of the projected content, the duration of the projected content, the display angle of the projected content, the display brightness of the projected content, or the display color of the projected content.
Case 2: When the action performed by the pedestrian does not match an expected action, i.e., the action made by the pedestrian is not any one of the multiple expected actions, the information transferred through the interface between the decision-making system and the projection system can be the switched request information determined by the decision-making system, i.e., the second request information determined by the decision-making system; the projection system can then instruct the projection device based on this second request information. Since the information transferred through the interface between the projection device and the projection system can be the second request information displayed in the target area, the projection device can display the second request information in the target area based on that information. The second request information includes but is not limited to at least one of: the display position of the projected content, the duration of the projected content, the display angle of the projected content, the display brightness of the projected content, or the display color of the projected content.
It can be understood that the information transferred through the interface between the decision-making system and the perception system may include an instruction for the perception system to perceive the pedestrian, so that the perception system can perceive pedestrian information based on the instruction; the information transferred through the interface between the decision-making system and the projection system may also include an instruction to activate the projection system, so that the projection system can be activated based on the instruction; and the information transferred through the interface between the projection device and the projection system may include an instruction for the projection device to project, so that the projection device can project the first request information or the second request information based on the instruction.
In summary, through the information transferred over the interfaces between the decision-making system and the perception system, between the decision-making system and the projection system, and between the projection system and the projection device, the vehicle can display the first request information or the second request information in the target area. The perception system can then recognize the action made by the pedestrian according to the first request information, or recognize the road participation behavior made by the pedestrian according to the second request information, so that the decision-making system can determine the pedestrian's intention and then determine the driving strategy of the vehicle.
According to the in-vehicle projection interaction system shown in FIG. 5, the vehicle can exchange two-way information with the pedestrian, so that the vehicle can clarify the pedestrian's intention and then carry out safe driving control and decision-making.
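The two cases of interface information above can be summarized in a minimal sketch of one interaction round. This is an assumption-laden illustration only: the `project` and `track_and_recognize` interfaces and the strategy names are hypothetical stand-ins for the projection and perception systems.

```python
def interaction_round(projection, perception, first_request, second_request, expected):
    """One round of vehicle-pedestrian intention interaction (illustrative sketch).

    `expected` is a hypothetical mapping from expected actions to driving
    strategies. Case 1: the first request is projected and a matching action
    yields a strategy directly. Case 2: no expected action matched, so the
    system switches to the second request, which tells the pedestrian which
    road participation behaviour to perform.
    """
    projection.project(first_request)
    action = perception.track_and_recognize()
    if action in expected:
        return expected[action]
    projection.project(second_request)
    behaviour = perception.track_and_recognize()
    return "stop_and_wait" if behaviour == "walk" else "drive_on"
```

The fallback branch mirrors Case 2: rather than waiting indefinitely, the decision-making system dictates the behaviour and derives the strategy from it.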
On the basis of the application scenario shown in FIG. 4, FIG. 6 is a functional block diagram of a possible vehicle 600 according to an embodiment of the present application. As shown in FIG. 6, the vehicle 600 can be configured in a fully or partially autonomous driving mode, where the vehicle 600 can be a car, truck, motorcycle, bus, lawn mower, recreational vehicle, amusement park vehicle, construction equipment, tram, golf cart, train, trolley, or the like, which is not specifically limited in the embodiments of the present application.
In a possible manner, when the vehicle 600 is in a partially autonomous driving mode, after the vehicle 600 determines the current state of the vehicle and its surrounding environment, the user operates the vehicle 600 based on the current state. For example, the vehicle 600 determines the possible behavior of a pedestrian in the surrounding environment and can, based on that possible behavior, control a target device in the vehicle to display first request information in a target area; the pedestrian can act according to the first request information. After recognizing the pedestrian's action, the vehicle can notify the user of the pedestrian's road participation intention by voice, so that the user can perform operations on the vehicle related to the pedestrian's road participation intention.
In a possible manner, when the vehicle 600 is in a fully autonomous driving mode, the vehicle 600 can automatically perform driving-related operations. For example, the vehicle 600 determines the possible behavior of a pedestrian in the surrounding environment and, based on that possible behavior, controls a target device in the vehicle to display first request information in a target area; the vehicle recognizes the action made by the pedestrian and determines the pedestrian's road participation intention according to the recognition result, so that the vehicle can automatically perform operations related to the pedestrian's road participation intention.
As shown in FIG. 6, the vehicle 600 includes a traveling system 202, a sensor system 204, a control system 206, one or more peripheral devices 208, a computer system 212, a power supply 210, and a user interface 216. Optionally, the vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple elements. Each subsystem and element of the vehicle 600 can be interconnected by wire or wirelessly.
In FIG. 6, the traveling system 202 includes an engine 218, a transmission 220, an energy source 219, and wheels 221.
In a possible manner, the sensor system 204 includes several sensors that sense information about the environment around the vehicle 600. For example, the sensor system 204 may include a positioning system 222, an inertial measurement unit (IMU) 224, a millimeter-wave radar 226, a lidar 228, and a camera 230. The positioning system 222 can be a global positioning system (GPS), a BeiDou system, or another positioning system.
In a possible manner, the positioning system 222 can be used to estimate the geographic location of the vehicle 600, and the IMU 224 is used to sense changes in the position and orientation of the vehicle 600 based on inertial acceleration. In some embodiments, the IMU 224 can be a combination of an accelerometer and a gyroscope.
Optionally, the sensor system 204 may also include sensors that monitor the internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, and/or an oil temperature gauge). Sensor data from one or more of these sensors can be used to detect and identify objects and their corresponding characteristics (e.g., position, shape, direction, and/or speed); such detection and identification is a key function for the safe autonomous operation of the vehicle 600.
In a possible manner, the millimeter-wave radar 226 can use radio signals to sense objects in the surrounding environment of the vehicle 600. For example, the vehicle can use the millimeter-wave radar 226 to track a pedestrian, recognize the action made by the pedestrian, or recognize the duration of the action. In some embodiments, in addition to sensing objects, the millimeter-wave radar 226 can also be used to sense the speed and/or heading of objects.
In a possible manner, the lidar 228 can use laser light to sense objects in the environment in which the vehicle 600 is located. In some embodiments, the lidar 228 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.
In a possible manner, the camera 230 can be used to capture multiple images of the surrounding environment of the vehicle 600. For example, the camera 230 can capture environmental data or image data around the vehicle, based on which the vehicle predicts the pedestrian's road participation intention and determines whether to control the target device in the vehicle to display the first request information in the target area. The camera 230 can be a still camera or a video camera.
In combination with the system shown in FIG. 5, in FIG. 6 the sensor system can serve as the perception system. In this way, the millimeter-wave radar 226 or lidar 228 in the sensor system can track the pedestrian or recognize the action made by the pedestrian, thereby obtaining the information transferred through the interface between the decision-making system and the perception system, i.e., the perceived action information.
In FIG. 6, the control system 206 controls the operation of the vehicle 600 and its components, and may include various elements. For example, the control system 206 may include at least one of a steering system 232, a throttle 234, a braking unit 236, a computer vision system 240, a route control system 242, an obstacle avoidance system 244, and a projection control system 254. It can be understood that in some examples, the control system 206 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
In the embodiments of the present application, the projection control system 254 can instruct the projection device, and the projection device then projects the first request information or the second request information.
In a possible manner, the steering system 232 is operable to adjust the heading of the vehicle 600; for example, in one embodiment it can be a steering wheel system. The throttle 234 is used to control the operating speed of the engine 218 and thus the speed of the vehicle 600. The braking unit 236 is used to decelerate the vehicle 600 and can use friction to slow the wheels 221. In other embodiments, the braking unit 236 can convert the kinetic energy of the wheels 221 into electric current, or take other forms to slow the rotation of the wheels 221 and thus control the speed of the vehicle 600.
In a possible manner, the computer vision system 240 can process and analyze the images captured by the camera 230 so as to identify objects and/or object features in the surrounding environment of the vehicle 600. The objects and/or features may include traffic signals, road boundaries, or obstacles. The computer vision system 240 can use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 240 can be used to map the environment, track objects, estimate object speeds, and so on.
In a possible manner, the route control system 242 can be used to determine the driving route of the vehicle 600, and the obstacle avoidance system 244 can be used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the vehicle 600.
The vehicle 600 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 208. For example, the peripheral devices 208 may include a wireless communication system 246, an on-board computer 248, a microphone 250, and a speaker 252. In a possible manner, the wireless communication system 246 can communicate wirelessly with one or more devices directly or via a communication network.
Some or all of the functions of the vehicle 600 are controlled by the computer system 212. The computer system 212 may include at least one processor 213, which executes instructions 215 stored in a data storage device 214. The computer system 212 may also be multiple computing devices that control individual components or subsystems of the vehicle 600 in a distributed manner.
In a possible manner, the processor 213 can be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor can be a dedicated device such as an application-specific integrated circuit (ASIC) or another hardware-based processor. In the various aspects described herein, the processor 213 can be located remotely from the vehicle and communicate with the vehicle wirelessly. In other aspects, some of the processes described herein can be executed on a processor arranged in the vehicle, while others can be executed by a remote processor, including the steps necessary to perform a single maneuver.
In a possible manner, the data storage device 214 may contain instructions 215 (e.g., program logic instructions) that can be processed by the processor 213 so that the processor 213 performs various functions of the vehicle 600, including those described above. The data storage device 214 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the propulsion system 202, the sensor system 204, the control system 206, and the peripheral devices 208.
In addition to the instructions 215, the data storage device 214 may also store data such as road maps, route information, the position, direction, and speed of the vehicle, other such vehicle data, and other information. Such information can be used by the vehicle 600 and the computer system 212 during operation in autonomous, semi-autonomous, and/or manual modes.
In a possible manner, the user interface 216 is used to provide information to or receive information from a user of the vehicle 600. Optionally, the user interface 216 may include one or more input/output devices in the set of peripheral devices 208, such as the wireless communication system 246, the on-board computer 248, the microphone 250, and the speaker 252.
The computer system 212 can control the functions of the vehicle 600 based on inputs received from various subsystems (e.g., the traveling system 202, the sensor system 204, and the control system 206) and from the user interface 216. For example, the computer system 212 can use input from the control system 206 to control the steering system 232 to avoid obstacles detected by the sensor system 204 and the obstacle avoidance system 244. In some embodiments, the computer system 212 can provide control over many aspects of the vehicle 600 and its subsystems.
Optionally, one or more of the above components can be installed separately from or associated with the vehicle 600. For example, the data storage device 214 can exist partially or completely separate from the vehicle 600. The above components can be coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in practical applications, components in the above modules may be added or deleted according to actual needs, and FIG. 6 should not be understood as limiting the embodiments of the present application.
In addition to providing instructions to adjust the speed or driving route of the autonomous vehicle, the computing device can also provide instructions to modify the steering angle of the vehicle 600 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from obstacles near the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
To better describe the computer system 212 shown in FIG. 6, FIG. 7 is a schematic structural diagram of a computer system 212 according to an embodiment of the present application.
As shown in FIG. 7, the computer system 212 includes at least one of a processor 213, a video adapter 107, a transceiver 123, a camera 155, and a universal serial bus (USB) port 125. The transceiver 123 can send and/or receive radio communication signals, and the camera 155 can capture still and moving digital video images.
In a possible manner, the processor 213 is coupled to a system bus 105, the system bus 105 is coupled to an input/output (I/O) bus through a bus bridge 111, the I/O bus is coupled to an I/O interface 115, and the I/O interface 115 can communicate with a variety of I/O devices. For example, an I/O device can be an input device 117 (such as a keyboard, mouse, or touch screen) or a media tray 121 (e.g., a compact disc read-only memory (CD-ROM), a multimedia interface, etc.). Optionally, the interface connected to the I/O interface 115 can be a universal serial bus (USB) interface.
In a possible manner, the processor 213 can be one or more processors, each of which may include one or more processor cores; the processor 213 can be any conventional processor, including a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, or a combination of the above.
Optionally, the processor can be a dedicated device such as an application-specific integrated circuit (ASIC); alternatively, the processor 213 can be a neural network processor, or a combination of a neural network processor and the above conventional processors.
In a possible manner, the computer system 212 can communicate with a software deploying server 149 through a network interface 129. For example, the network interface 129 can be a hardware network interface (e.g., a network card). The network 127 can be an external network (e.g., the Internet) or an internal network (e.g., Ethernet or a virtual private network (VPN)). Optionally, the network 127 can also be a wireless network (e.g., a wireless-fidelity (WiFi) network or a cellular network).
In a possible manner, the applications 143 include an autonomous-driving-related program 147 and a projection-related program 148. For example, the autonomous-driving-related program 147 may include programs that manage the interaction between the autonomous vehicle and obstacles on the road, control the route or speed of the autonomous vehicle, or control the interaction between the autonomous vehicle and other autonomous vehicles on the road; the projection-related program 148 may include a program for projecting the first request information or the second request information.
It can be understood that after the vehicle displays the first request information or the second request information in the target area through the projection-related program 148, the vehicle can further control its route or speed through the autonomous-driving-related program 147.
The applications 143 may reside on the system of a software deploying server 149. In some embodiments, when the applications 143 need to be executed, the computer system can download them from the software deploying server 149.
A sensor 153 is associated with the computer system 212 and is used to detect the environment around the computer system 212. For example, the sensor 153 can detect objects such as animals, cars, obstacles, or crosswalks; further, the sensor can also detect the environment around such objects, for example, the weather conditions around an animal, the brightness of the surrounding environment, or other animals appearing nearby. Optionally, if the computer system 212 is installed on an autonomous vehicle, the sensor can be a camera, an infrared sensor, a chemical detector, a microphone, or the like.
In combination with the system shown in FIG. 5, the processor 213 shown in FIG. 7 can be, or can include, the decision-making system, so that the vehicle can interact with the pedestrian's intention through the processor 213.
In combination with the system shown in FIG. 5, the sensor 153 shown in FIG. 7 can be the perception system. Through the sensor 153, the vehicle can track the pedestrian, recognize the action made by the pedestrian, or recognize the duration of the action, and thus obtain the information transferred through the interface between the processor 213 and the sensor 153, i.e., the information transferred through the interface between the decision-making system and the perception system.
The technical solutions of the embodiments of the present application, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments can be implemented independently or in combination with each other, and identical or similar concepts or processes may not be repeated in some embodiments.
It should be noted that the steps described below can be executed by a vehicle, a chip in the vehicle, or a module in the vehicle. For ease of description, the following specific embodiments are exemplified with a module in the vehicle (hereinafter referred to as the first module) as the executing entity, where the first module can be a multi domain controller (MDC) or the like. It can be understood that the specific content of the first module can be set according to the actual application scenario and is not limited in the embodiments of the present application.
Exemplarily, FIG. 8 is a schematic flowchart of a control method according to an embodiment of the present application. As shown in FIG. 8, the method may include the following steps:
S801: The first module controls the target device in the vehicle to display the first request information in the target area.
In the embodiments of the present application, the target device can be a device pre-installed in the vehicle by the manufacturer or installed in the vehicle by the user. The first module controls the target device in the vehicle to display the first request information in the target area, and this control process can be triggered automatically or manually, so that the vehicle can ask for the pedestrian's road participation intention based on the first request information. It can be understood that the installation position of the target device can be set according to the actual application scenario and is not limited in the embodiments of the present application.
In the embodiments of the present application, the target area is within the pedestrian's visible range; for example, the target area is the ground area between the vehicle and the pedestrian. In this way, the first module controls the target device in the vehicle to display the first request information in the target area so that the pedestrian can see the first request information displayed by the vehicle there, and the pedestrian can then perform an action related to his or her road participation intention, thereby expressing that intention. It can be understood that the specific content of the target area can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
In the embodiments of the present application, the first request information is used to request the pedestrian to perform a target action. Since the target action is used to express the pedestrian's road participation intention, the first module can learn, based on the first request information, the pedestrian's road participation intention at the next moment, i.e., whether the pedestrian will keep walking or stop. The target action can be a preset action known to both the vehicle and the pedestrian, which ensures that after the pedestrian acts, the vehicle can understand the road participation intention expressed by the action. For example, the pedestrian raising the left hand means the pedestrian will walk at the next moment, and raising the right hand means the pedestrian will stop at the next moment.
The pedestrian can be a pedestrian walking along the roadside, a pedestrian intending to cross the road, a cyclist, or the like. It can be understood that the specific content of the target action can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
In the embodiments of the present application, the first request information may include one or more of text information, static graphic information, video information, or dynamic graphic information. It can be understood that the specific content of the first request information can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
Text information refers to text stating that the pedestrian walks or does not walk, with the target action the pedestrian is requested to perform shown next to it; for example, "raise your left hand" is shown next to the text for walking, and "raise your right hand" next to the text for not walking.
Static graphic information refers to a static graphic indicating that the pedestrian walks or does not walk. Exemplarily, FIG. 9 is a schematic diagram of first request information according to an embodiment of the present application. As shown in FIG. 9, directly in front of the pedestrian, the static graphic for walking is a forward arrow symbol and the static graphic for not walking is a backward arrow symbol, with the requested target action shown next to each arrow symbol; for example, "raise your left hand" is shown next to the forward arrow and "raise your right hand" next to the backward arrow.
Dynamic graphic information refers to a dynamic graphic indicating that the pedestrian walks or does not walk; for example, directly in front of the pedestrian, the dynamic graphic for walking is an arrow symbol advancing forward step by step, and the dynamic graphic for not walking is an arrow symbol advancing backward step by step, with the requested target action shown next to each, e.g., "raise your left hand" next to the forward-advancing arrow and "raise your right hand" next to the backward-advancing arrow.
It can be understood that the static or dynamic graphic information can be passage indication information from traffic signs, such as a go-straight sign or a left-turn sign, through which the pedestrian can understand the target action requested in the first request information.
Video information refers to a video indicating that the pedestrian walks or does not walk; for example, directly in front of the pedestrian, the video for walking is a moving picture and the video for not walking is a still picture, with the requested target action shown next to each, e.g., "raise your left hand" next to the moving picture and "raise your right hand" next to the still picture.
It should be noted that the "raise your right hand" or "raise your left hand" displayed next to the text, static graphic, video, or dynamic graphic information can be text for raising the left or right hand, or a dynamic picture of raising the left or right hand, which is not limited in the embodiments of the present application.
It should be noted that requesting the pedestrian to raise the left or right hand is one example; the requested action could also be the pedestrian moving to the left or to the right, or could be set in other ways, which is not limited in the embodiments of the present application.
It should be noted that raising the left hand meaning the pedestrian walks and raising the right hand meaning the pedestrian does not walk is one example; it could also be the reverse, or set in other ways, which is not limited in the embodiments of the present application.
It should be noted that the walk/not-walk text included in the first request information can also be represented by YES or NO, where YES means walk and NO means not walk, or set in other ways, which is not limited in the embodiments of the present application.
In the embodiments of the present application, a possible implementation of the first module controlling the target device in the vehicle to display the first request information in the target area is: based on a certain trigger condition, the first module can control the target device in the vehicle to display the first request information in the target area.
For example, while the vehicle is driving, if the trigger condition is emergency braking, the first module can control the target device to display the first request information in the ground area between the vehicle and the pedestrian, so that the pedestrian can act based on the first request information and the first module can learn the pedestrian's road participation intention from the action made by the pedestrian. It can be understood that the specific content of the trigger condition can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
S802: The first module recognizes the action made by the pedestrian.
In the embodiments of the present application, a possible implementation of the first module recognizing the action made by the pedestrian is: the first module can recognize the action based on a sensor.
Exemplarily, a sensor on the vehicle can obtain a range-velocity map corresponding to the pedestrian through a two-dimensional range-velocity Fourier transform, detect the range-velocity map with a constant false alarm rate algorithm to obtain the point-cloud target detection result corresponding to the pedestrian, and then obtain the angle of the pedestrian's point-cloud target through an angle Fourier transform. The sensor clusters the point-cloud targets into targets using a clustering algorithm and, based on multi-frame data, tracks the pedestrian using a tracking algorithm; further, the sensor analyzes the pedestrian's multi-frame data using time-frequency analysis methods such as the short-time Fourier transform or wavelet transform, so that the action made by the pedestrian can be recognized.
It can be understood that the implementation of the first module recognizing the action made by the pedestrian can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
It should be noted that, based on the first request information, the pedestrian may keep making an action for a period of time; therefore, the sensor on the vehicle also needs to continuously track and recognize the pedestrian's action during that period.
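The first stage of the radar pipeline described above, the two-dimensional range-velocity Fourier transform, can be sketched with NumPy. This is a simplified illustration under assumed frame dimensions; CFAR detection, angle estimation, clustering, tracking, and the time-frequency analysis stages are omitted.

```python
import numpy as np

def range_doppler_map(frame):
    """Range-velocity map from one radar frame (illustrative sketch).

    `frame` is assumed to have shape (num_chirps, num_samples): an FFT along
    the fast-time (sample) axis yields range bins, and a second FFT along the
    slow-time (chirp) axis yields Doppler (velocity) bins, shifted so that
    zero velocity sits in the middle row.
    """
    range_fft = np.fft.fft(frame, axis=1)              # fast time -> range
    doppler_fft = np.fft.fft(range_fft, axis=0)        # slow time -> velocity
    return np.abs(np.fft.fftshift(doppler_fft, axes=0))
```

A point target appears as a single peak in the resulting map; the CFAR stage would then threshold this map to extract point-cloud detections.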
S803: The first module determines the driving strategy of the vehicle according to the recognition result.
In the embodiments of the present application, the recognition result may be that the pedestrian has expressed his or her road participation intention, or that the pedestrian has not expressed it. The first module can thus determine different driving strategies according to different recognition results.
When the recognition result is that the pedestrian has expressed a road participation intention, the first module can determine the driving strategy of the vehicle based on that intention.
For example, when the first module recognizes a pedestrian crossing the road, it can display the first request information in the target area, where the text requesting the target action can be "raise your left hand" or "raise your right hand", with "walk" shown next to the left-hand text and "don't walk" next to the right-hand text; "walk" means the pedestrian continues crossing the road, and "don't walk" means the pedestrian stops on the road. If the action made by the pedestrian is raising the right hand, the first module can determine that the pedestrian's intention is to stop on the road; in other words, the driving strategy determined by the first module is to continue driving. It can be understood that the implementation of the first module determining the driving strategy can be set according to the actual application scenario and is not limited in the embodiments of the present application.
When the recognition result is that the pedestrian has not expressed a road participation intention, since the first module does not know the pedestrian's intention, the first module can execute a preset strategy and determine the driving strategy of the vehicle based on that preset strategy.
For example, the first module recognizes a pedestrian crossing the road and displays the first request information in the target area as above, but the action performed by the pedestrian is pressing both palms together, so the first module cannot determine the pedestrian's road participation intention. The first module can therefore execute a preset strategy of stopping the vehicle on the road, and the vehicle does not continue driving until the pedestrian has crossed. It can be understood that the implementation of the first module determining the driving strategy can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
In summary, in the embodiments of the present application, the first module controls the target device in the vehicle to display the first request information in the target area, so that the pedestrian can act based on the first request information, and the first module can determine the driving strategy of the vehicle by recognizing the action made by the pedestrian. Thus, in an autonomous driving system, even without a driver's involvement, the vehicle can interact with the pedestrian's intention, avoiding traffic accidents and improving driving safety.
In an autonomous driving scenario, when the first module cannot recognize the pedestrian's intention, the first module can control the target device in the vehicle to display the first request information in the target area based on the display manner of the first request information. For example, the display manner of the first request information includes one or more of: the display position of the first request information, the display angle of the first request information, the display brightness of the first request information, the display color of the first request information, or the duration for which the first request information is displayed. It can be understood that the specific content of the display manner can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
The display position of the first request information refers to the specific position where the first request information is displayed, which can be an area outside the vehicle, e.g., the ground, a building, or a vehicle body; the vehicle body can include at least one of the front windshield, the rear windshield, or the windows, and the vehicle can be the host vehicle or another vehicle. It can be understood that the specific content of the display position can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
It should be noted that since the pedestrian is in front of the vehicle, when the vehicle displays the first request information on the rear windshield, it also needs to display the first request information on the front windshield: the first request information on the front windshield is used to request the pedestrian to perform the target action, while that on the rear windshield can be used to remind other vehicles behind this vehicle. It can be understood that the specific implementation of displaying the first request information on the front and rear windshields is not limited in the embodiments of the present application.
The display angle of the first request information refers to the angle at which it is displayed. From the vehicle's perspective, the display angle can be, for example, 60 degrees to the right of straight ahead; from the pedestrian's perspective, it can be directly in front of the pedestrian. It can be understood that the specific content of the display angle can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
The display brightness of the first request information refers to the brightness at which it is displayed; for example, the brightness value can be 50, so that the first request information stands out and the pedestrian can see it in time. It can be understood that the specific brightness value can be set according to the actual application scenario and is not limited in the embodiments of the present application.
The display color of the first request information refers to the color in which it is displayed, e.g., red, green, or yellow, so that the displayed first request information is distinguishable from the color of the target area and the pedestrian can see it in time. It can be understood that the specific color can be set according to the actual application scenario and is not limited in the embodiments of the present application.
The duration for displaying the first request information refers to how long the first request information stays displayed, for example, 10 seconds; in this way, even if the pedestrian does not notice the first request information during the first 5 seconds, he or she can still notice it during the remaining 5 seconds, allowing the vehicle to interact with the pedestrian's intention. It can be understood that the specific duration value can be set according to the actual application scenario and is not limited in the embodiments of the present application.
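The display attributes listed above (position, angle, brightness, color, duration) can be grouped into a single configuration structure. The sketch below is illustrative only: the class name, field names, and default values are hypothetical, with the brightness value 50 and the 10-second duration taken from the examples in the text.

```python
from dataclasses import dataclass

@dataclass
class RequestDisplayConfig:
    """Hypothetical display manner for the first request information."""
    position: str = "ground_between_vehicle_and_pedestrian"
    angle_deg: float = 0.0        # 0 = directly in front of the pedestrian
    brightness: int = 50          # example brightness value from the text
    color: str = "green"          # distinguishable from the target area
    duration_s: float = 10.0      # keep the request visible for 10 seconds

    def is_valid(self) -> bool:
        # Assumed sanity checks: brightness on a 0-100 scale, positive duration.
        return 0 <= self.brightness <= 100 and self.duration_s > 0
```

Such a structure would correspond to the content transferred over the interface between the decision-making system and the projection system in Case 1 above.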
In combination with the above description, on the basis of the embodiment shown in FIG. 8, FIG. 10 is a schematic flowchart of a control method according to an embodiment of the present application. In this embodiment, the first request information includes indication information for indicating an expected action, and the expected action is associated with the pedestrian's road participation intention. Therefore, the first module controlling the target device in the vehicle to display the first request information in the target area can be understood as: when the target device is a projection system and the target area is the ground, the first module controls the projection system to project the first request information on the ground; or, when the target device is a display device and the target area is a display screen, the first module controls the display device to display the first request information on the display screen. In this way, by recognizing the action made by the pedestrian, the first module can learn the pedestrian's road participation intention and thus determine the driving strategy of the vehicle.
As shown in FIG. 10, the method may include the following steps:
S1001: When the ground satisfies the projection condition, the first module controls the projection system to project the first request information on the ground.
In the embodiments of the present application, the projection condition can be understood as the absence of water or snow. This is because, if there were water or snow on the ground, the first request information projected on the ground would be blurred, the pedestrian would not be able to see it clearly, and thus could not express his or her road participation intention. It can be understood that the specific implementation of the first module determining that the ground satisfies the projection condition can be set according to the actual application scenario and is not limited in the embodiments of the present application.
In the embodiments of the present application, the projection system can be pre-installed in the vehicle by the manufacturer or installed in the vehicle by the user, and the first module can control the system to project the first request information on the ground; the implementation of this control is not limited in the embodiments of the present application.
In the embodiments of the present application, a possible implementation of S1001 is: based on the pedestrian's direction of movement, the first module controls the projection system to project the first request information on the ground when the ground satisfies the projection condition.
For example, while the vehicle is driving, the first module identifies from the pedestrian's direction of movement that the vehicle's heading is at a right angle to the pedestrian's direction of movement, and thus judges that the pedestrian is about to cross the road. Therefore, when there is no water on the ground, the first module can control the projection system to project the first request information on the ground according to the display manner of the first request information; in other words, the projection system instructs the projection device to project the first request information on the ground. The specific value of the angle between the vehicle's heading and the pedestrian's direction of movement can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
It can be understood that the implementation of the first module controlling the projection system to project the first request information on the ground can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
S1002: The first module controls the display device to display the first request information on the display screen.
In the embodiments of the present application, the display device can be pre-installed in the vehicle by the manufacturer or installed in the vehicle by the user; for example, the display device can be an in-vehicle display, and the display device can control the display screen. After the first request information is displayed on the display screen, the pedestrian can notice the first request information and can thus express his or her road participation intention. The display screen can be pre-installed outside the vehicle by the manufacturer or installed outside the vehicle by the user; for example, the display screen can be mounted on the roof of the vehicle.
It can be understood that the installation position and specific content of the display device can be set according to the actual application scenario and are not limited in the embodiments of the present application; the size and installation position of the display screen can likewise be set according to the actual application scenario and are not limited in the embodiments of the present application.
In the embodiments of the present application, the implementation of the first module controlling the display device to display the first request information on the display screen can refer to the corresponding description of S1001 and is not repeated here; it can be understood that this implementation can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
S1003: The first module recognizes the action made by the pedestrian.
In the embodiments of the present application, the content of S1003 can refer to the corresponding description of S802 and is not repeated here; it can be understood that the specific implementation of the first module recognizing the action made by the pedestrian can also be set according to the actual application scenario and is not limited in the embodiments of the present application.
S1004: The first module determines the driving strategy of the vehicle when the action made by the pedestrian is the expected action.
In the embodiments of the present application, since the expected action is associated with the pedestrian's road participation intention, when the expected action includes a first expected action and a second expected action, the first module can learn the pedestrian's road participation intention according to whether the action made by the pedestrian is the first expected action or the second expected action, and thus the first module can determine the driving strategy of the vehicle.
The first expected action is associated with the pedestrian's first road participation intention, and the second expected action is associated with the pedestrian's second road participation intention. For example, when the pedestrian's action is the first expected action, the first road participation intention is that the pedestrian continues walking at the next moment, and when the pedestrian's action is the second expected action, the second road participation intention is that the pedestrian stops and waits at the next moment; or, conversely, when the pedestrian's action is the first expected action, the first road participation intention is that the pedestrian stops and waits at the next moment, and when the pedestrian's action is the second expected action, the second road participation intention is that the pedestrian continues walking at the next moment.
For example, when the pedestrian raises the left hand, raising the left hand matches the first expected action; since the first road participation intention is that the pedestrian continues walking at the next moment, the driving strategy determined by the first module is to stop and wait at the next moment. When the pedestrian raises the right hand, raising the right hand matches the second expected action; since the second road participation intention is that the pedestrian stops and waits at the next moment, the driving strategy determined by the first module is to continue driving.
It can be understood that the content of the pedestrian's road participation intention indicated by the first or second expected action can also be set according to the actual application situation and is not limited in the embodiments of the present application.
In the embodiments of the present application, in combination with the system shown in FIG. 5, FIG. 11 is a schematic diagram of intention interaction according to an embodiment of the present application. As shown in FIG. 11, when the decision-making system cannot recognize the pedestrian's road participation intention, the decision-making system can activate the projection system according to the first request information and control the projection system to project the first request information on the ground. After the pedestrian acts based on the first request information, the decision-making system can instruct the perception system to directionally track the pedestrian and recognize the action made by the pedestrian. When the perception system judges that the pedestrian's action is the first expected action or the second expected action, the decision-making system can determine that the pedestrian's road participation intention is the first road participation intention or the second road participation intention. The decision-making system can then make a decision according to the pedestrian's first or second road participation intention and send the decision result to an actuator, which can be a braking system or a steering system. For example, if the decision result is to stop and wait, the braking system stops the vehicle.
In summary, in the embodiments of the present application, when the ground satisfies the projection condition, the first module can control the projection system to project the first request information on the ground, or the first module controls the display device to display the first request information on the display screen. By recognizing the action made by the pedestrian, and since the expected action is associated with the pedestrian's road participation intention, the first module can determine the driving strategy of the vehicle when the pedestrian's action is the expected action. In this way, even without a driver's involvement, the pedestrian and the autonomous vehicle can achieve effective intention interaction, so that the autonomous vehicle can effectively understand the pedestrian's road participation intention and make the correct driving decision, which is particularly important for the road safety of autonomous vehicles.
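The association between expected actions, road participation intentions, and driving strategies described in S1004 can be expressed as a simple lookup. This is an illustrative sketch only: all names are hypothetical, and the particular left-hand/right-hand assignment follows the example in the text (which the text itself notes can be reversed or replaced).

```python
# Hypothetical mapping following the text's example: raising the left hand is
# the first expected action (pedestrian keeps walking), raising the right hand
# is the second expected action (pedestrian stops and waits).
ACTION_TO_INTENT = {
    "raise_left_hand": "pedestrian_keeps_walking",
    "raise_right_hand": "pedestrian_stops_and_waits",
}

INTENT_TO_STRATEGY = {
    "pedestrian_keeps_walking": "vehicle_stops_and_waits",
    "pedestrian_stops_and_waits": "vehicle_continues_driving",
}

def driving_strategy(action):
    """Return the vehicle's driving strategy, or None if the action matched
    neither expected action (the case handled by the second request information)."""
    intent = ACTION_TO_INTENT.get(action)
    return INTENT_TO_STRATEGY.get(intent)
```

A `None` result corresponds to the unmatched case, where the method of FIG. 12 switches to the second request information.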
在图8所示的实施例的基础上,示例性的,图12为本申请实施例提供的一种控制方法的流程示意图,在本申请实施例中,第一请求信息中包括用于指示多个期望动作的指示信息,多个期望动作与行人的多个道路参与意图相关联,这样,第一模块通过识别行人做出的动作,使得第一模块可以知道行人的道路参与意图,进而,第一模块可以确定车辆的驾驶策略;如图12所示,可以包括以下步骤:
S1201:第一模块控制车辆内的目标设备在目标区域显示第一请求信息。
本申请实施例中，第一请求信息中包括用于指示多个期望动作的指示信息，其中，多个期望动作与行人的多个道路参与意图相关联，例如，在行人穿行马路时，期望动作为举左手，道路参与意图为下一时刻行人继续穿行马路；期望动作为举右手，道路参与意图为下一时刻行人停下来等待；期望动作为举起双手，道路参与意图为下一时刻行人跑着穿行马路；可以理解，期望动作与道路参与意图的具体对应关系，也可以根据实际应用场景设定，本申请实施例不作限定。
S1202:第一模块识别行人做出的动作。
本申请实施例中,由于第一请求信息中包括多个期望动作,行人做出动作后,第一模块可以通过识别行人做出的动作,在判断行人做出的动作与多个期望动作任意一个不相同时,第一模块可以执行S1203。
其中,第一模块识别行人做出的动作的具体实现方式,可以参考S802的内容适应描述,在此不再赘述。
S1203:根据行人做出的动作不为多个期望动作中任意一个,第一模块控制车辆内的目标设备在目标区域显示第二请求信息。
本申请实施例中，车辆在显示第一请求信息后，第一模块通过识别行人做出的动作，发现行人做出的动作不为多个期望动作中的任一个，或者，行人未执行动作，这使得第一模块无法获取行人的道路参与意图，为了避免无效等待，车辆上的决策***可以指示投影***切换投影内容，即第一模块可以控制车辆内的目标设备在目标区域显示第二请求信息，由于第二请求信息用于指示行人做出第一道路参与行为，因此，第一模块通过显示的第二请求信息，可以直接告诉行人下一时刻需要做出的动作，即行人下一时刻做出的第一道路参与行为，这样，可以有效节省无效等待时间，提高道路通行效率。
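上述“识别结果与期望动作不匹配时切换显示第二请求信息”的流程，可以用如下Python草图示意（display与recognize为假设的接口，仅作说明用途）：

```python
def interact(display, recognize, expected_actions, second_request):
    """示意S1201-S1203：先显示第一请求信息并识别行人动作；
    动作不属于期望动作集合时，切换显示第二请求信息并返回None。"""
    display("first_request")
    action = recognize()
    if action in expected_actions:
        return action            # 交由S1004按期望动作确定驾驶策略
    display(second_request)      # 切换投影内容，指示行人做出第一道路参与行为
    return None
```

返回None对应需要继续观察行人是否按第二请求信息做出第一道路参与行为的情形。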
其中,第一道路参与行为可以为停下来等待,继续行走,走着穿行马路,或者,跑着穿行马路等;可以理解,第一道路参与行为的具体内容,也可以根据实际应用场景设定,本申请实施例不作限定。
可以理解的是，在显示行人在下一时刻需要做出第一道路参与行为的同时，第一模块也可以显示行人需要执行的动作，第一模块通过识别该动作，从而可以进一步地确定行人在下一时刻的道路参与行为，这样，第一模块也就可以确定车辆的驾驶策略。
本申请实施例中,第二请求信息包括文字信息、静态的图形信息、视频信息或动态的图形信息;可以理解,第二请求信息的具体内容,也可以根据实际应用场景设定,本申请实施例不作限定。
本申请实施例中,第一模块可以基于第二请求信息的显示方式,控制车辆内的目标设备在目标区域显示第二请求信息。其中,第二请求信息的显示方式包括下述的一种或多种:第二请求信息的显示位置、第二请求信息的显示角度、第二请求信息的显示亮度、第二请求信息的显示颜色或显示第二请求信息的持续时间,第二请求信息的显示方式的内容可以参考第一请求信息的显示方式的内容适应描述,在此不再赘述;可以理解,第二请求信息的显示方式的具体内容,也可以根据实际应用场景设定,本申请实施例不作限定。
可能的实现方式中,第二请求信息的显示方式里还可以包括倒计时,在车辆的驾驶策略为停车等待时,该倒计时为用于提醒行人快速通过车辆的时间,在车辆的驾驶策略为继续行驶时,该倒计时为车辆通过行人的时间,也即行人需要等待的时间。
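倒计时在两种驾驶策略下的不同含义，可以用如下Python草图示意（策略名称与提示文案均为假设的示例）：

```python
def countdown_text(strategy: str, seconds: int) -> str:
    """示意：驾驶策略为停车等待时，倒计时提醒行人限时快速通过；
    为继续行驶时，倒计时提示行人需要等待车辆通过的时间。"""
    if strategy == "stop_and_wait":
        return f"请在{seconds}秒内通过"          # 行人快速通过车辆的时间
    if strategy == "keep_driving":
        return f"车辆{seconds}秒内通过，请等待"  # 行人需要等待的时间
    raise ValueError(f"unknown strategy: {strategy}")
```

例如，图13中箭头符号旁显示的5、图14中STOP符号旁显示的10，即分别对应这两种倒计时语义。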
可以理解的是，静态的图形信息或动态的图形信息也可以为警示标识，该警示标识用于表明车辆即将执行的驾驶策略，例如，警示标识为直行的交通标识，用于表明车辆即将行驶；其中，警示标识的具体内容，可以根据实际应用场景设定，本申请实施例不作限定。
示例性的,图13为本申请实施例提供的一种第二请求信息的示意图,在本申请实施例中,第一道路参与行为是走,如图13所示,在行人的正前方,车辆控制投影***投影的是向前方向的箭头符号,该符号旁边显示了行人走的文字,无论是向前方向的箭头符号或者行人走的文字,都表明车辆告知行人需要在下一时刻行走,这样,车辆将在下一时刻停止行驶;其中,向前方向的箭头符号旁边显示的5,可以理解为,行人通过车辆的时间为5秒,或者用于告知行人,在这个5秒的时间里,行人需要快速通过车辆,这样,在5秒后,车辆才可以继续行驶。
可以理解的是,在图13中,车辆除了投影向前方向的箭头符号或行人走的文字,还可以投影举左手的动作,举左手的动作表明行人需要在下一时刻行走,这样,车辆通过识别行人举左手的动作,可以进一步地确定行人在下一时刻行走的意图,从而车辆将在下一时刻停止行驶。
示例性的,图14为本申请实施例提供的一种第二请求信息的示意图,在本申请实施例中,第一道路参与行为是不走,如图14所示,在行人的正前方,车辆控制投影***投影的是STOP的英文符号,该符号旁边显示行人不走的文字,无论是STOP的英文符号或者行人不走的文字,都表明行人在下一时刻需要停下来,这样,车辆将在下一时刻继续行驶;其中,STOP的符号旁边显示的10,可以理解为,行人停下来的时间为10秒,或者用于告知行人,在这个10秒的时间里,车辆可以快速通过行人,这样,在10秒后,行人才可以继续行走。
可以理解的是,在图14中,车辆除了投影STOP的英文符号或行人不走的文字,还可以投影举右手的动作,举右手的动作表明行人需要在下一时刻停下来,这样,车辆通过识别行人举右手的动作,可以进一步地确定行人在下一时刻停下来的意图,从而车辆将在下一时刻继续行驶。
需要说明的是,在地面满足投影条件,目标设备为投影***,目标区域为地面时,第一模块可以控制投影***在地面上投影第二请求信息;或者,在目标设备为显示设备,目标区域为显示屏时,第一模块可以控制显示设备在显示屏上显示第二请求信息。
S1204:根据行人做出第一道路参与行为,第一模块确定车辆的驾驶策略。
本申请实施例中,第一模块显示第二请求信息后,行人注意到第二请求信息,因而,行人可以基于第二请求信息做出第一道路参与行为,这样,第一模块可以确定车辆的驾驶策略。
可以理解的是,在显示第二请求信息的同时,目标区域中也可以显示行人需要做出的动作,这样,第一模块通过持续识别行人做出的动作,从而可以进一步确定行人的道路参与行为,例如,在前一时刻,车辆的驾驶策略为停车等待,在目标区域里显示的行人需要执行的动作为举左手,举左手反映的是行人在下一时刻停下来等待,因此,当车辆识别行人举左手和/或行人停下来等待时,车辆可以从停车等待状态变为行驶状态。
本申请实施例中，结合图5所示的***，示例性的，图15为本申请实施例提供的一种意图交互的示意图，如图15可知，在行人做出的动作不为多个期望动作中任意一个的情况下，决策***可以指示投影***切换投影内容，该投影内容为第二请求信息，由于第二请求信息用于指示行人做出第一道路参与行为，感知***基于第二请求信息，可以重新追踪行人以及重新识别行人的动作，从而使得决策***可以确定行人的道路参与意图，因此，决策***可以根据行人的道路参与意图进行决策，并将决策结果发送给执行机构，执行机构可以为制动***或转向***等。例如，决策结果为停车等待时，制动***控制车辆停车。
需要说明的是,若车辆显示第二请求信息后,路边的多个行人做出了动作,在车辆无法确定行人的道路参与意图的情况下,车辆可以通过执行S1201-S1204,直到车辆确定行人的道路参与意图。
结合图8-图15所示的实施例,示例性的,图16为本申请实施例提供的一种控制方法的流程示意图,如图16所示,在决策***识别行人的意图的情况下,决策***可以根据行人的意图进行路径规划,在决策***没有识别行人的意图,且地面满足投影条件的情况下,决策***确定交互投影的内容、位置,进一步地,决策***控制投影***确定投影颜色、光照强度等信息,进一步地,投影***指示投影装置调节灯光进行投影;同时,决策***指示感知***追踪行人以及识别行人的动作,在感知***识别行人的动作与期望动作匹配的情况下,感知***可以将匹配结果发送给决策***,从而决策***根据匹配结果进行路径规划,在感知***识别行人的动作与期望动作不匹配的情况下,感知***可以通知决策***切换投影内容,即,决策***重新确定交互投影的内容、位置,投影***重新确定投影颜色、光照强度等信息,投影装置重新调节灯光,从而进行投影,这样,根据重新确定的投影内容,感知***重新追踪行人以及识别行人的动作,进而,在感知***识别行人做出的动作与期望动作匹配的情况下,感知***将匹配结果发送给决策***,从而决策***根据匹配结果进行路径规划。
需要说明的是,图16所示的调节灯光、确定交互投影的内容、位置以及确定投影颜色、光照强度等信息的内容,可以参考上述实施例所描述的第一请求信息的显示方式的内容,图16所示的重新调节灯光、重新确定交互投影的内容、位置以及重新确定投影颜色、光照强度等信息的内容,可以参考上述实施例所描述的第二请求信息的显示方式的内容,在此不再赘述。
上面结合图8-图16,对本申请实施例的方法进行了说明,下面对本申请实施例提供的执行上述方法的装置进行描述。本领域技术人员可以理解,方法和装置可以相互结合和引用,本申请实施例提供的一种控制装置可以执行上述控制方法。
下面以采用对应各个功能划分各个功能模块为例进行说明:
示例性的,如图17为本申请实施例提供的一种控制装置的结构示意图,如图17所示,该装置包括处理器1700、存储器1701和收发机1702。
处理器1700负责管理总线架构和通常的处理，存储器1701可以存储处理器1700在执行操作时所使用的数据，收发机1702用于在处理器1700的控制下接收和发送数据，并与存储器1701进行数据通信。
总线架构可以包括任意数量的互联的总线和桥，具体由处理器1700代表的一个或多个处理器和存储器1701代表的存储器的各种电路链接在一起。总线架构还可以将诸如***设备、稳压器和功率管理电路等之类的各种其他电路链接在一起，这些都是本领域所公知的，因此，本文不再对其进行进一步描述，总线接口提供接口。
本申请实施例揭示的流程，可以应用于处理器1700中，或者由处理器1700实现。在实现过程中，控制的流程的各步骤可以通过处理器1700中的硬件的集成逻辑电路或者软件形式的指令完成。处理器1700可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件，可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成，或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1701，处理器1700读取存储器1701中的信息，结合其硬件完成信号处理流程的步骤。
本申请实施例中，处理器1700用于读取存储器1701中的程序并执行上述实施例所描述的方法流程。
示例性的,图18为本申请实施例提供的另一种控制装置的结构示意图,本申请实施例提供的控制装置可以在车辆中,如图18所示,控制装置1800可以用于通信设备、电路、硬件组件或者芯片中,控制装置1800可以包括:控制单元1801、识别单元1802以及处理单元1803,其中,控制单元1801用于支持控制装置执行信息控制的步骤,识别单元1802用于支持控制装置执行信息识别的步骤,处理单元1803用于支持控制装置执行信息处理的步骤。
示例性的,控制单元1801,用于控制车辆内的目标设备在目标区域显示第一请求信息;其中,第一请求信息用于请求行人执行目标动作,目标动作用于表达行人的道路参与意图,目标区域在行人的可视范围内;识别单元1802,用于识别行人做出的动作;处理单元1803,用于根据识别的结果,确定车辆的驾驶策略。
一种可能的实现方式中,第一请求信息中包括用于指示期望动作的指示信息,期望动作与道路参与意图相关联;处理单元1803,具体用于:根据行人做出的动作为期望动作,确定车辆的驾驶策略。
一种可能的实现方式中,期望动作包括第一期望动作和第二期望动作,第一期望动作与行人的第一道路参与意图相关联,第二期望动作与行人的第二道路参与意图相关联;处理单元1803,具体用于:根据行人做出的动作为第一期望动作或第二期望动作,确定车辆的驾驶策略。
一种可能的实现方式中,第一请求信息中包括用于指示多个期望动作的指示信息,多个期望动作与多个道路参与意图相关联;控制单元1801,还用于:根据行人做出的动作不为多个期望动作中任意一个,控制车辆内的目标设备在目标区域显示第二请求信息,第二请求信息用于指示行人做出第一道路参与行为。
一种可能的实现方式中,处理单元1803,具体用于:根据行人做出第一道路参与行为,确定车辆的驾驶策略。
一种可能的实现方式中,第二请求信息包括文字信息、静态的图形信息、视频信息或动态的图形信息中的一种或者多种。
一种可能的实现方式中,第一请求信息包括文字信息、静态的图形信息、视频信息或动态的图形信息中的一种或者多种。
一种可能的实现方式中,目标设备为投影***,目标区域为车辆外部的区域。
一种可能的实现方式中,目标区域为地面,处理单元1803,具体用于:在地面满足投影条件的情况下,控制投影***在地面上投影第一请求信息。
一种可能的实现方式中,目标设备为显示设备,目标区域为显示屏,处理单元1803,具体用于:控制显示设备在显示屏上显示第一请求信息。
在一种可能的实施例中,控制装置还可以包括:存储单元1804。控制单元1801、识别单元1802、处理单元1803以及存储单元1804通过通信总线相连。
存储单元1804可以包括一个或者多个存储器,存储器可以是一个或者多个设备、电路中用于存储程序或者数据的器件。
存储单元1804可以独立存在，通过通信总线与控制装置具有的处理单元1803相连；存储单元1804也可以和控制单元1801、识别单元1802以及处理单元1803集成在一起。
示例性的,图19为本申请实施例提供的一种芯片的结构示意图。芯片190包括至少一个处理器1910和通信接口1930。通信接口1930用于从外部向芯片190输入数据,或者从芯片190向外部输出数据。处理器1910用于运行计算机程序或指令,以实现上述各方法实施例。
可选地,芯片190包括存储器1940。在一些实施方式中,存储器1940存储了如下的元素:可执行模块或者数据结构,或者他们的子集,或者他们的扩展集。
本申请实施例中,存储器1940可以包括只读存储器和随机存取存储器,并向处理器1910提供指令和数据。存储器1940的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。
本申请实施例中,处理器1910可以通过调用存储器1940存储的操作指令,控制决策***、感知***、投影***或投影装置执行上述方法实施例中相应的操作。
例如,结合图5,存储器1940存储的操作指令可以为控制决策***的指令,这样,处理器1910通过从存储器1940调取该指令,从而处理器1910可以控制决策***,进而,决策***可以指示感知***感知行人的信息,或者,决策***可以激活投影***,进一步地,投影***控制投影装置投影第一请求信息或第二请求信息。
本申请实施例中，处理器1910、通信接口1930以及存储器1940通过总线***1919耦合在一起。其中，总线***1919除包括数据总线之外，还可以包括电源总线、控制总线和状态信号总线等。为了便于描述，在图19中将各种总线都标为总线***1919。
结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。其中,软件模块可以位于随机存储器、只读存储器、可编程只读存储器或带电可擦写可编程存储器(electrically erasable programmable read only memory,EEPROM)等本领域成熟的存储介质中。该存储介质位于存储器1940,处理器1910读取存储器1940中的信息,结合其硬件完成上述方法的步骤。
在上述实施例中,存储器存储的供处理器执行的指令可以以计算机程序产品的形式实现。其中,计算机程序产品可以是事先写入在存储器中,也可以是以软件形式下载并安装在存储器中。
计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时，全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一计算机可读存储介质传输，例如，计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线（例如同轴电缆、光纤、数字用户线（digital subscriber line，DSL））或无线（例如红外、无线、微波等）方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包括一个或多个可用介质集成的服务器、数据中心等数据存储设备。例如，可用介质可以包括磁性介质（例如，软盘、硬盘或磁带）、光介质（例如，数字通用光盘（digital versatile disc，DVD））、或者半导体介质（例如，固态硬盘（solid state disk，SSD））等。
本申请实施例还提供了一种计算机可读存储介质。上述实施例中描述的方法可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。计算机可读介质可以包括计算机存储介质和通信介质,还可以包括任何可以将计算机程序从一个地方传送到另一个地方的介质。存储介质可以是可由计算机访问的任何目标介质。
作为一种可能的设计，计算机可读介质可以包括紧凑型光盘只读储存器（compact disc read-only memory，CD-ROM）、RAM、ROM、EEPROM或其它光盘存储器；计算机可读介质可以包括磁盘存储器或其它磁盘存储设备。而且，任何连接线也可以被适当地称为计算机可读介质。例如，如果使用同轴电缆、光纤电缆、双绞线、DSL或无线技术（如红外、无线电和微波）从网站、服务器或其它远程源传输软件，则同轴电缆、光纤电缆、双绞线、DSL或诸如红外、无线电和微波之类的无线技术包括在介质的定义中。如本文所使用的磁盘和光盘包括光盘（CD）、激光盘、光盘、数字通用光盘（digital versatile disc，DVD）、软盘和蓝光盘，其中磁盘通常以磁性方式再现数据，而光盘利用激光光学地再现数据。上述的组合也应包括在计算机可读介质的范围内。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以权利要求的保护范围为准。

Claims (25)

  1. 一种控制方法,其特征在于,所述方法包括:
    控制车辆内的目标设备在目标区域显示第一请求信息;其中,所述第一请求信息用于请求行人执行目标动作,所述目标动作用于表达所述行人的道路参与意图,所述目标区域在所述行人的可视范围内;
    识别所述行人做出的动作;
    根据所述识别的结果,确定所述车辆的驾驶策略。
  2. 根据权利要求1所述的方法,其特征在于,所述第一请求信息用于请求行人执行目标动作,包括:
    所述第一请求信息中包括用于指示期望动作的指示信息,所述期望动作与道路参与意图相关联;
    所述根据所述识别的结果,确定所述车辆的驾驶策略,包括:
    根据所述行人做出的动作为所述期望动作,确定所述车辆的驾驶策略。
  3. 根据权利要求2所述的方法,其特征在于,所述期望动作包括第一期望动作和第二期望动作,所述第一期望动作与所述行人的第一道路参与意图相关联,所述第二期望动作与所述行人的第二道路参与意图相关联;
    所述根据所述识别的结果,确定所述车辆的驾驶策略,包括:
    根据所述行人做出的动作为所述第一期望动作或所述第二期望动作,确定所述车辆的驾驶策略。
  4. 根据权利要求1所述的方法,其特征在于,所述第一请求信息用于请求行人执行目标动作,包括:
    所述第一请求信息中包括用于指示多个期望动作的指示信息,所述多个期望动作与多个道路参与意图相关联;
    所述方法还包括:
    根据所述行人做出的动作不为所述多个期望动作中任意一个,控制车辆内的目标设备在目标区域显示第二请求信息,所述第二请求信息用于指示所述行人做出第一道路参与行为。
  5. 根据权利要求4所述的方法,其特征在于,根据所述识别的结果,确定所述车辆的驾驶策略,包括:
    根据所述行人做出第一道路参与行为,确定所述车辆的驾驶策略。
  6. 根据权利要求4或5所述的方法,其特征在于,所述第二请求信息包括文字信息、静态的图形信息、视频信息或动态的图形信息中的一种或者多种。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述第一请求信息包括文字信息、静态的图形信息、视频信息或动态的图形信息中的一种或者多种。
  8. 根据权利要求7所述的方法,其特征在于,所述目标设备为投影***,所述目标区域为所述车辆外部的区域。
  9. 根据权利要求8所述的方法,其特征在于,所述目标区域为地面,所述控制车辆内的目标设备在目标区域显示第一请求信息,包括:
    在所述地面满足投影条件的情况下,控制所述投影***在所述地面上投影所述第一请求信息。
  10. 根据权利要求7所述的方法,其特征在于,所述目标设备为显示设备,所述目标区域为显示屏,所述控制车辆内的目标设备在目标区域显示第一请求信息,包括:
    控制所述显示设备在所述显示屏上显示所述第一请求信息。
  11. 一种控制装置,其特征在于,所述装置包括控制单元、识别单元和处理单元;
    所述控制单元,用于控制车辆内的目标设备在目标区域显示第一请求信息;其中,所述第一请求信息用于请求行人执行目标动作,所述目标动作用于表达所述行人的道路参与意图,所述目标区域在所述行人的可视范围内;
    所述识别单元,用于识别所述行人做出的动作;
    所述处理单元,用于根据所述识别的结果,确定所述车辆的驾驶策略。
  12. 根据权利要求11所述的装置,其特征在于,所述第一请求信息中包括用于指示期望动作的指示信息,所述期望动作与所述道路参与意图相关联;所述处理单元,具体用于:根据所述行人做出的动作为所述期望动作,确定所述车辆的驾驶策略。
  13. 根据权利要求12所述的装置,其特征在于,所述期望动作包括第一期望动作和第二期望动作,所述第一期望动作与所述行人的第一道路参与意图相关联,所述第二期望动作与所述行人的第二道路参与意图相关联;所述处理单元,具体用于:根据所述行人做出的动作为所述第一期望动作或所述第二期望动作,确定所述车辆的驾驶策略。
  14. 根据权利要求11所述的装置,其特征在于,所述第一请求信息中包括用于指示多个期望动作的指示信息,所述多个期望动作与多个道路参与意图相关联;所述控制单元,还用于:根据所述行人做出的动作不为所述多个期望动作中任意一个,控制车辆内的目标设备在目标区域显示第二请求信息,所述第二请求信息用于指示所述行人做出第一道路参与行为。
  15. 根据权利要求14所述的装置,其特征在于,所述处理单元,具体用于:根据所述行人做出第一道路参与行为,确定所述车辆的驾驶策略。
  16. 根据权利要求14或15所述的装置,其特征在于,所述第二请求信息包括文字信息、静态的图形信息、视频信息或动态的图形信息中的一种或者多种。
  17. 根据权利要求11-16任一项所述的装置,其特征在于,所述第一请求信息包括文字信息、静态的图形信息、视频信息或动态的图形信息中的一种或者多种。
  18. 根据权利要求17所述的装置,其特征在于,所述目标设备为投影***,所述目标区域为所述车辆外部的区域。
  19. 根据权利要求18所述的装置,其特征在于,所述目标区域为地面,所述控制单元,具体用于:在所述地面满足投影条件的情况下,控制所述投影***在所述地面上显示所述第一请求信息。
  20. 根据权利要求17所述的装置,其特征在于,所述目标设备为显示设备,所述目标区域为显示屏,所述控制单元,具体用于:控制所述显示设备在所述显示屏上显示所述第一请求信息。
  21. 一种控制装置,其特征在于,包括存储器和处理器,所述存储器存储计算机程序指令,所述处理器运行所述计算机程序指令以执行权利要求1-10中任一项所述的方法。
  22. 一种车辆,其特征在于,包括如权利要求11-20中任一项所述的装置。
  23. 根据权利要求22所述的车辆,其特征在于,还包括感知***、目标设备,所述目标设备为投影***或者显示设备。
  24. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有指令,当所述指令被运行时,实现如权利要求1-10中任一项所述的方法。
  25. 一种计算机程序产品,其特征在于,当所述计算机程序产品在处理器上运行时,使得处理器执行权利要求1-10中任一项所述的方法。
PCT/CN2022/093988 2021-05-25 2022-05-19 控制方法和装置 WO2022247733A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22810460.0A EP4331938A1 (en) 2021-05-25 2022-05-19 Control method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110574297.2A CN115384545A (zh) 2021-05-25 2021-05-25 控制方法和装置
CN202110574297.2 2021-05-25

Publications (1)

Publication Number Publication Date
WO2022247733A1 true WO2022247733A1 (zh) 2022-12-01

Family

ID=84113846

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093988 WO2022247733A1 (zh) 2021-05-25 2022-05-19 控制方法和装置

Country Status (3)

Country Link
EP (1) EP4331938A1 (zh)
CN (1) CN115384545A (zh)
WO (1) WO2022247733A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117755323A (zh) * 2023-10-24 2024-03-26 清华大学 基于行人车辆动态交互的安全策略确定方法和装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106428000A (zh) * 2016-09-07 2017-02-22 清华大学 一种车辆速度控制装置和方法
US20170240098A1 (en) * 2016-02-22 2017-08-24 Uber Technologies, Inc. Lighting device for a vehicle
US20180261081A1 (en) * 2017-03-10 2018-09-13 Subaru Corporation Image display device
CN109455180A (zh) * 2018-11-09 2019-03-12 百度在线网络技术(北京)有限公司 用于控制无人车的方法和装置
CN110077314A (zh) * 2019-04-03 2019-08-02 浙江吉利控股集团有限公司 一种无人驾驶车辆的信息交互方法、***及电子设备
DE102019134048A1 (de) * 2019-12-11 2020-03-26 FEV Group GmbH Verfahren zur Vorhersage eines Verhaltens von Fußgängern
CN111540222A (zh) * 2020-04-17 2020-08-14 新石器慧通(北京)科技有限公司 基于无人车的智能交互方法、装置及无人车
US20200342757A1 (en) * 2018-01-29 2020-10-29 Kyocera Corporation Image processing apparatus, imaging apparatus, moveable body, and image processing method


Also Published As

Publication number Publication date
CN115384545A (zh) 2022-11-25
EP4331938A1 (en) 2024-03-06

Similar Documents

Publication Publication Date Title
JP7395529B2 (ja) 自律走行車の予測のテスト
KR102446698B1 (ko) 폐색들이 있는 도로 사용자 반응 모델링에 따른 자율주행 차량의 동작
KR102060070B1 (ko) 자동주차장치 및 이의 제어방법
WO2022007655A1 (zh) 一种自动换道方法、装置、设备及存储介质
KR102120108B1 (ko) 자율주행 차량 및 그 제어 방법
EP4030377A1 (en) Responder oversight system for an autonomous vehicle
US10522041B2 (en) Display device control method and display device
US10768618B2 (en) Vehicle driving control apparatus and vehicle driving method
EP3995379B1 (en) Behavior prediction for railway agents for autonomous driving system
US11537128B2 (en) Detecting and responding to processions for autonomous vehicles
US20230031375A1 (en) Pedestrian intent yielding
WO2022247733A1 (zh) 控制方法和装置
WO2024122303A1 (ja) 車両制御装置、車両制御方法
JP2023022944A (ja) 走行制御装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810460

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022810460

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022810460

Country of ref document: EP

Effective date: 20231128

NENP Non-entry into the national phase

Ref country code: DE