WO2024088337A1 - Control method, device, and vehicle - Google Patents

Control method, device, and vehicle - Download PDF

Info

Publication number
WO2024088337A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
prompt
vehicle
image drift
controlled
Prior art date
Application number
PCT/CN2023/126760
Other languages
English (en)
French (fr)
Inventor
赵阳 (Zhao Yang)
邓家钰 (Deng Jiayu)
马瑞 (Ma Rui)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024088337A1

Links

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Q ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q5/00 Arrangement or adaptation of acoustic signal devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Q ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q9/00 Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control

Definitions

  • Embodiments of the present application relate to the field of human-computer interaction, and more specifically, to a control method, device, and vehicle.
  • At present, after a driver performs certain operations in a vehicle, the driver still needs to confirm that the interaction between the person and the vehicle took effect through image or text information on the instrument display or the central control screen. For example, after the driver switches the vehicle's gear from the parking gear (P gear) to the forward gear (D gear), the driver still needs to confirm that the vehicle has switched to the D gear through the text "D" displayed on the instrument display. This increases the driver's cognitive load regarding human-computer interaction results and degrades the user's human-computer interaction experience.
  • the embodiments of the present application provide a control method, device and vehicle, which help reduce the user's cognitive load on human-computer interaction results, thereby helping to improve the user's human-computer interaction experience.
  • the vehicles in this application may include road vehicles, water vehicles, air vehicles, industrial equipment, agricultural equipment, or entertainment equipment, etc.
  • For example, the vehicle may be a vehicle in the broad sense, such as an automobile (e.g., a commercial vehicle, a passenger car, a motorcycle, a flying car, or a train), an industrial vehicle (e.g., a forklift, a trailer, or a tractor), an engineering vehicle (e.g., an excavator, a bulldozer, or a crane), agricultural equipment (e.g., a mower or a harvester), amusement equipment, or a toy vehicle.
  • the embodiments of this application do not specifically limit the type of vehicle.
  • the vehicle may be a vehicle such as an airplane or a ship.
  • In one aspect, a control method is provided, comprising: acquiring sensor information; and, according to the sensor information, controlling sound-emitting devices at at least two different positions in the vehicle to emit a prompt sound whose sound image (or soundstage) drifts, wherein the prompt sound is used to prompt a state change of the vehicle.
  • The vehicle can prompt the user about a vehicle state change through the sound-image-drift prompt sound.
  • The user can intuitively understand the human-computer interaction result and the vehicle state change through the sound-image-drift prompt sound, which helps reduce the user's cognitive load regarding the interaction result and helps improve the user's human-computer interaction experience.
  • The sound image can be used to express one or more of the depth, height, and width of the sound emitted by the sound-emitting devices.
  • Sound image drift can be, for example, a change or movement of the position of the sound image in a certain direction within a certain time interval. In this way, a sound-image-drift prompt sound gives the user the experience of a sound whose spatial position changes.
  • When the user performs a human-computer interaction input, the vehicle can obtain the corresponding sensor information and provide the user with a prompt sound whose spatial position changes as feedback, so that the user clearly understands that the state of the vehicle is changing, or that the vehicle has responded to the input. This helps improve the user's human-computer interaction experience and also the perceived intelligence and technological feel of the vehicle.
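The sound-image-drift concept above can be illustrated with a minimal panning sketch. This is not from the patent; it is a common equal-power crossfade between two speakers, where moving the gain balance over time makes the perceived sound position drift from one speaker to the other. All function names and constants are illustrative.

```python
import math

def pan_gains(position: float) -> tuple[float, float]:
    """Equal-power panning gains for two speakers.

    position: 0.0 = sound image fully at speaker A,
              1.0 = fully at speaker B.
    Returns (gain_a, gain_b) with gain_a**2 + gain_b**2 == 1,
    so perceived loudness stays constant while the image moves.
    """
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)

def drift_schedule(duration_s: float, steps: int):
    """(time, gain_a, gain_b) triples sweeping the image from A to B."""
    return [
        (i * duration_s / (steps - 1), *pan_gains(i / (steps - 1)))
        for i in range(steps)
    ]
```

Feeding such a gain schedule to two of the vehicle's speakers is one plausible way to produce the drifting prompt sound the text describes; a real system would also handle more than two speakers and 3D positions.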
  • the acquiring of sensor information includes: acquiring sensor information collected by a sensor.
  • The sensor includes a physical button (or a sensor corresponding to a physical button), a touch sensor, a voice sensor, a visual sensor, or the like.
  • Controlling the sound-emitting devices at at least two different positions in the vehicle to emit a sound-image-drift prompt sound can also be understood as controlling those sound-emitting devices so that the sound image of the prompt sound is able to drift.
  • The method further includes: controlling the sound image drift direction of the prompt sound according to the sensor information.
  • The drift direction of the sound image of the prompt sound can thus be controlled by the sensor information.
  • When the drift direction of the sound image corresponds to the user input, the user can further understand the human-computer interaction result and the state change of the vehicle, which helps improve the user's human-computer interaction experience.
  • The sensor information is used to indicate an adjustment from a first gear to a second gear.
  • Controlling the sound image drift direction of the prompt sound based on the sensor information includes: controlling the sound image drift direction of the prompt sound to be a first direction, the first direction being from the first gear toward the second gear.
  • In this way, the sound image drift direction of the prompt sound can be controlled to be the first direction.
  • The user then does not need other equipment or devices to confirm the gear change, which helps reduce the cognitive load of the gear shift operation and thereby helps improve the user's human-computer interaction experience when shifting gears.
  • The gear mechanism of the vehicle is a rotary knob.
  • The sensor information is used to indicate the rotation direction of the adjustment from the first gear to the second gear.
  • Controlling the sound image drift direction of the prompt sound based on the sensor information includes: controlling the sound image drift direction of the prompt sound to be the rotation direction.
  • Alternatively, the gear mechanism of the vehicle is a gear lever (stalk).
  • The sensor information is used to indicate the operating direction of a first gear lever.
  • Controlling the sound image drift direction of the prompt sound based on the sensor information includes: controlling the sound image drift direction of the prompt sound to be the operating direction of the first gear lever.
  • The sensor information is used to indicate whether a first function is activated or deactivated, and controlling the sound image drift direction of the prompt sound based on the sensor information includes: controlling the sound image drift direction of the prompt sound to be a second direction, the second direction being the normal to the plane in which the vehicle is located.
  • In this way, the user can clearly know whether the first function is turned on or off without using other devices, which helps reduce the user's cognitive load regarding the interaction result of toggling the first function and thereby helps improve the user's human-computer interaction experience.
  • Controlling the sound image drift direction of the prompt sound to be the second direction includes: when the sensor information indicates that the first function is turned on, controlling the drift direction to be the positive second direction; or, when the sensor information indicates that the first function is turned off, controlling the drift direction to be the negative second direction.
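The direction mapping described above (gear shift direction, on/off along the vehicle-plane normal) can be sketched as a small lookup. This is an illustrative reading of the text, not the patent's implementation: the gear order, coordinate convention (x forward, y left, z up), and the choice that shifting down the P-R-N-D column drifts rearward are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical gear layout along the shifter's travel axis; real layouts vary.
GEAR_ORDER = ["P", "R", "N", "D"]

@dataclass
class DriftCommand:
    direction: tuple  # (x: forward, y: left, z: up), unit vector

def gear_shift_drift(first_gear: str, second_gear: str) -> DriftCommand:
    """Drift direction follows the direction from first gear to second gear."""
    delta = GEAR_ORDER.index(second_gear) - GEAR_ORDER.index(first_gear)
    # Moving "down" the P-R-N-D column maps to a rearward drift here (assumed).
    return DriftCommand((-1.0 if delta > 0 else 1.0, 0.0, 0.0))

def function_toggle_drift(turned_on: bool) -> DriftCommand:
    """On -> positive normal of the vehicle plane (up); off -> negative (down)."""
    return DriftCommand((0.0, 0.0, 1.0 if turned_on else -1.0))
```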
  • The sensor information is used to indicate a turn toward a third direction.
  • Controlling the sound image drift direction of the prompt sound based on the sensor information includes: controlling the sound image drift direction of the prompt sound to be the third direction.
  • In this way, the user can clearly know that the vehicle has responded to the steering operation without confirming the result through prompt information on the instrument display, which helps reduce the user's cognitive load regarding the steering interaction result and thereby helps improve the user's human-computer interaction experience when steering the vehicle.
  • The method further includes: controlling the direction in which the ambient light is lit, the lighting direction corresponding to the sound image drift direction, wherein the vehicle includes the ambient light.
  • By controlling the lighting direction of the ambient light to correspond to the sound image drift direction, the user can further understand the human-computer interaction result and the state changes of the vehicle, which helps enhance the user's human-computer interaction experience.
  • The sensor information is used to indicate whether a mobile terminal has charged successfully or failed to charge in a wireless charging area in the vehicle, and controlling the sound image drift direction of the prompt sound based on the sensor information includes: controlling the sound image drift direction of the prompt sound to be the second direction.
  • Specifically: when the sensor information indicates that the mobile terminal charged successfully in the wireless charging area, the drift direction is controlled to be the positive second direction; or, when the sensor information indicates that charging failed, the drift direction is controlled to be the negative second direction.
  • The sensor information is used to indicate success or failure of a wireless connection between the mobile terminal and the vehicle, and controlling the sound image drift direction of the prompt sound based on the sensor information includes: controlling the sound image drift direction of the prompt sound to be the second direction.
  • The wireless connection between the mobile terminal and the vehicle includes a Bluetooth connection.
  • The method further includes: controlling the sound image drift speed of the prompt sound according to the sensor information.
  • The sound image drift speed of the prompt sound can thus be controlled by the sensor information.
  • When the sound image drift speed of the prompt sound reflects the speed of the user input indicated by the sensor information, the user's human-computer interaction experience is further improved.
  • The sensor information is used to indicate a change in the opening degree of the accelerator pedal or the brake pedal, and controlling the sound image drift speed of the prompt sound based on the sensor information includes: controlling the sound image drift speed of the prompt sound according to the change in the opening degree of the accelerator pedal or the brake pedal.
  • In this way, the sound image drift speed of the prompt sound is controlled by the change in pedal opening.
  • Different acceleration states of the vehicle are simulated by different sound image drift speeds, which helps improve the user's human-computer interaction experience when accelerating or decelerating the vehicle.
  • The sensor information is used to indicate a user's sliding input on a display screen, and controlling the sound image drift speed of the prompt sound based on the sensor information includes: controlling the sound image drift speed of the prompt sound based on the speed of the sliding input.
  • In this way, the sound image drift speed of the prompt sound is controlled by the speed of the user's sliding input on the display screen.
  • Because the speed of the sliding input corresponds to the sound image drift speed, the human-computer interaction experience of sliding input on the display screen is improved.
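The speed mappings above can be sketched as two small transfer functions, one for pedal-opening change and one for swipe speed. The gains, base speed, and clamps below are illustrative placeholders, not values from the patent.

```python
def drift_speed_from_pedal(opening_change_per_s: float,
                           base_speed: float = 0.5,
                           gain: float = 2.0,
                           max_speed: float = 5.0) -> float:
    """Map the rate of change of pedal opening (fraction/s) to a drift speed.

    A faster pedal press yields a faster sound image drift, clamped so the
    prompt stays perceptible. All constants are illustrative assumptions.
    """
    return min(max_speed, base_speed + gain * abs(opening_change_per_s))

def drift_speed_from_swipe(swipe_px_per_s: float,
                           px_per_unit: float = 400.0,
                           max_speed: float = 5.0) -> float:
    """Map a touch-screen swipe speed (pixels/s) to a drift speed."""
    return min(max_speed, swipe_px_per_s / px_per_unit)
```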
  • the method further includes: determining the at least two sound emitting devices according to the area where the user is located.
  • the at least two sound-emitting devices can be determined by the area where the user is located.
  • the sound-emitting devices at at least two different positions in the area where the user is located can be controlled to emit a prompt sound of sound image drift, thereby further allowing the user to clearly understand the human-computer interaction results and the state changes of the vehicle in the area where he is located, which helps to improve the user's human-computer interaction experience.
  • the vehicle includes a mapping relationship between user input and a sound-emitting device
  • the method further includes: determining the at least two sound-emitting devices based on the mapping relationship and the sensor information.
  • In this way, the at least two sound-emitting devices can be determined from the mapping relationship between user input and sound-emitting devices together with the sensor information, which reduces the computation required to determine them and helps save power in the vehicle.
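The two speaker-selection strategies described (by the user's area, or by a precomputed input-to-speaker mapping) can be sketched as lookups. The zone names, speaker names, and event keys below are hypothetical; a real vehicle would populate them from its audio-system configuration.

```python
# Hypothetical speaker layout keyed by cabin zone.
SPEAKERS_BY_ZONE = {
    "driver": ["front_left_door", "front_left_a_pillar"],
    "front_passenger": ["front_right_door", "front_right_a_pillar"],
    "rear_left": ["rear_left_door", "rear_shelf_left"],
}

# Static mapping from an input event to the speakers used for its prompt,
# so no per-event computation is needed (the power saving noted above).
SPEAKERS_BY_INPUT = {
    "gear_shift": ["front_left_door", "front_right_door"],
    "turn_signal_left": ["front_left_door", "rear_left_door"],
}

def select_speakers(user_zone, input_event):
    """Prefer the precomputed input mapping; fall back to the user's zone."""
    if input_event in SPEAKERS_BY_INPUT:
        return SPEAKERS_BY_INPUT[input_event]
    if user_zone in SPEAKERS_BY_ZONE:
        return SPEAKERS_BY_ZONE[user_zone]
    return []
```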
  • An embodiment of the present application also provides a control method, which includes: when detecting a user's input to a vehicle, controlling sound-emitting devices at at least two different positions in the vehicle to emit a sound-image-drift prompt sound, wherein the prompt sound is used to prompt that the vehicle is performing an operation corresponding to the input.
  • In another aspect, a control method is provided, comprising: acquiring sensor information; and, according to the sensor information, controlling sound-emitting devices at at least two different positions in the vehicle to emit a sound-image-drift prompt sound, the prompt sound being used to prompt a state change of the environment in which the vehicle is located, or to prompt the relative position of an object to be prompted, or to prompt a result of biometric recognition of the user, or to prompt the connection status between the vehicle and a mobile terminal, or to prompt the success or failure of wireless charging of the mobile terminal by the vehicle.
  • The sensor information is used to indicate a change in a traffic light in the environment of the vehicle, and the prompt sound is used to prompt the change in the traffic light, or to prompt the user to drive the vehicle through the intersection.
  • When the traffic light at the intersection where the vehicle is located indicates that vehicles in a certain lane can pass through the intersection, the method also includes: controlling the sound image drift direction of the prompt sound to be the direction of the vehicle's travel.
  • Controlling the sound-emitting devices at at least two different positions in the vehicle to emit the sound-image-drift prompt sound includes: when no vehicle movement is detected within a preset time from the moment the traffic light indicates that vehicles in the lane can pass, controlling the sound-emitting devices to emit the sound-image-drift prompt sound.
  • In this way, the drifting prompt sound can prompt the user to pass through the intersection promptly, which helps avoid traffic congestion caused by long waiting times and also helps save the user's travel time.
  • The sensor information is used to indicate that the vehicle is on a congested road section and that another vehicle in front of it is moving forward, and the prompt sound is used to prompt that the other vehicle is moving forward.
  • The method further includes: controlling the sound image drift direction of the prompt sound to be the direction of the vehicle's travel.
  • Controlling the sound-emitting devices at at least two different positions in the vehicle to emit the sound-image-drift prompt sound includes: when it is detected that the other vehicle is moving forward and the distance between the vehicle and the other vehicle is greater than or equal to a preset distance, controlling the sound-emitting devices to emit the sound-image-drift prompt sound.
  • When the vehicle is on a congested section and the vehicle in front starts moving, if the user's attention is not on the traffic, the drifting prompt sound can prompt the user to follow the vehicle in front, which helps avoid holding up traffic on the congested section.
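The two trigger conditions just described (no movement within a preset time after the light turns green; the lead vehicle pulling away beyond a preset gap in congestion) can be sketched as simple predicates. Thresholds are illustrative, not from the patent.

```python
def should_prompt_green_light(green_elapsed_s: float, vehicle_moved: bool,
                              preset_wait_s: float = 3.0) -> bool:
    """Prompt the driver to proceed if the light has been green for at least
    the preset time and the vehicle has not started moving."""
    return green_elapsed_s >= preset_wait_s and not vehicle_moved

def should_prompt_follow(lead_vehicle_moving: bool, gap_m: float,
                         preset_gap_m: float = 8.0) -> bool:
    """In congestion, prompt when the vehicle ahead is moving and the gap
    has grown to at least a preset distance."""
    return lead_vehicle_moving and gap_m >= preset_gap_m
```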
  • The sensor information is used to indicate that a user in a certain area of the vehicle is not wearing a seat belt.
  • The method also includes: controlling the sound image drift direction of the prompt sound to be from the area where that user is located to the area where the driver is located, or from the area where the driver is located to the area where that user is located.
  • In this way, when a user in the vehicle is not wearing a seat belt, the driver can be notified by the drifting prompt sound.
  • Because the drift indicates the area of the user who is not wearing a seat belt, the driver can remind that user to fasten the seat belt in time, which helps improve the riding experience.
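The seat-to-driver drift direction can be derived from per-zone cabin coordinates, as in this sketch. The coordinate values (x forward, y left) and zone names are hypothetical.

```python
# Hypothetical cabin coordinates (x: forward, y: left) for each seat zone.
ZONE_POS = {
    "driver": (0.0, 0.5),
    "front_passenger": (0.0, -0.5),
    "rear_left": (-1.0, 0.5),
    "rear_right": (-1.0, -0.5),
}

def seatbelt_drift_vector(unbelted_zone: str) -> tuple:
    """Unit drift direction from the unbelted passenger's seat toward the
    driver's seat, so the driver can localize who needs to buckle up."""
    sx, sy = ZONE_POS[unbelted_zone]
    dx, dy = ZONE_POS["driver"]
    vx, vy = dx - sx, dy - sy
    norm = (vx * vx + vy * vy) ** 0.5 or 1.0  # avoid dividing by zero
    return (vx / norm, vy / norm)
```

Reversing the vector gives the opposite embodiment (driver's area toward the unbelted user's area).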
  • In another aspect, a control device is provided, which includes: an acquisition unit for acquiring sensor information; and a control unit for controlling, according to the sensor information, sound-emitting devices at at least two different positions in the vehicle to emit a sound-image-drift prompt sound, wherein the prompt sound is used to prompt a change in the state of the vehicle.
  • The control unit is used to: control the sound image drift direction of the prompt sound according to the sensor information.
  • The sensor information is used to indicate an adjustment from a first gear to a second gear.
  • The control unit is used to: control the sound image drift direction of the prompt sound to be a first direction, the first direction being from the first gear toward the second gear.
  • The sensor information is used to indicate whether a first function is activated or deactivated.
  • The control unit is used to: control the sound image drift direction of the prompt sound to be a second direction, the second direction being the normal to the plane in which the vehicle is located.
  • The sensor information is used to indicate turning in a third direction.
  • The control unit is used to: control the sound image drift direction of the prompt sound to be the third direction.
  • The control unit is further used to: control the direction in which the ambient light is lit, the lighting direction corresponding to the sound image drift direction, wherein the vehicle includes the ambient light.
  • The control unit is used to: control the sound image drift speed of the prompt sound according to the sensor information.
  • The sensor information is used to indicate a change in the opening degree of the accelerator pedal or the brake pedal.
  • The control unit is used to: control the sound image drift speed of the prompt sound according to the change in the opening degree of the accelerator pedal or the brake pedal.
  • The sensor information is used to indicate a sliding input by a user on a display screen.
  • The control unit is used to: control the sound image drift speed of the prompt sound according to the speed of the sliding input.
  • The device further includes: a first determination unit, configured to determine the at least two sound-emitting devices according to the area where the user is located.
  • The vehicle includes a mapping relationship between user input and sound-emitting devices, and the device also includes: a second determination unit for determining the at least two sound-emitting devices based on the mapping relationship and the sensor information.
  • In another aspect, a control device is provided, which includes: an acquisition unit for acquiring sensor information; and a control unit for controlling, according to the sensor information, sound-emitting devices at at least two different positions in the vehicle to emit a sound-image-drift prompt sound, wherein the prompt sound is used to prompt a state change of the environment in which the vehicle is located, or to prompt the relative position of an object to be prompted, or to prompt a biometric recognition result of the user.
  • a control device which includes a processing unit and a storage unit, wherein the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit to enable the control device to perform any possible method in the first aspect or the second aspect.
  • a control system comprising at least two sound-emitting devices and a computing platform, wherein the computing platform comprises any possible device in the third aspect or the fourth aspect, or the computing platform comprises the device described in the fifth aspect.
  • control system further includes one or more sensors.
  • a vehicle which includes any possible device in the third aspect, or includes the device described in the fourth aspect, or includes the device described in the fifth aspect, or includes the control system described in the sixth aspect.
  • In some embodiments, the vehicle is a road vehicle (automobile).
  • In another aspect, a computer program product is provided, comprising computer program code which, when run on a computer, causes the computer to execute any possible method in the first aspect or the second aspect.
  • The above computer program code can be stored in whole or in part on a first storage medium, wherein the first storage medium can be packaged together with the processor or packaged separately from the processor; the embodiments of the present application do not specifically limit this.
  • In another aspect, a computer-readable medium is provided, which stores program code; when the program code runs on a computer, the computer executes any possible method in the first aspect or the second aspect.
  • an embodiment of the present application provides a chip system, which includes a processor for calling a computer program or computer instructions stored in a memory so that the processor executes any possible method in the first aspect or the second aspect above.
  • the processor is coupled to the memory through an interface.
  • the chip system also includes a memory, in which a computer program or computer instructions are stored.
  • the state change of the vehicle can be prompted to the user through the prompt sound of the sound image drift.
  • the user can intuitively understand the human-computer interaction result and the state change of the vehicle through the prompt sound of the sound image drift, without the user needing to use other equipment or devices to determine the human-computer interaction result, which helps to reduce the user's cognitive load on the human-computer interaction result, thereby helping to improve the user's human-computer interaction experience.
  • the sensor information is used to control the drift direction of the sound image of the prompt sound, so that the drift direction of the sound image of the prompt sound corresponds to the user input.
  • the user can further understand the human-computer interaction results and the state changes of the vehicle, which helps to improve the user's human-computer interaction experience.
  • The sound image drift speed of the prompt sound is controlled by the sensor information, so that the drift speed reflects the speed of the user input indicated by the sensor information, which helps further improve the user's human-computer interaction experience.
  • The at least two sound-emitting devices can be determined by the area where the user is located.
  • The sound-emitting devices at at least two different positions in the area where the user is located can then be controlled to emit the sound-image-drift prompt sound, so that the user can further understand, within that area, the human-computer interaction results and the state changes of the vehicle, which helps improve the user's human-computer interaction experience.
  • Alternatively, the at least two sound-emitting devices are determined from the mapping relationship between user input and sound-emitting devices together with the sensor information; this reduces the computation required to determine them and helps save power in the vehicle.
  • FIG. 1 is a functional block diagram of a vehicle provided in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 3 is another schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 4 is another schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 5 is another schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 6 is another schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 7 is another schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 8 is another schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 9 is another schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 10 is another schematic diagram of an application scenario provided in an embodiment of the present application.
  • FIG. 11 is another schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 12 is another schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of speaker distribution in a vehicle provided in an embodiment of the present application.
  • FIG. 14 is a schematic flow chart of a control method provided in an embodiment of the present application.
  • FIG. 15 is another schematic flow chart of the control method provided in an embodiment of the present application.
  • FIG. 16 is a schematic block diagram of a control device provided in an embodiment of the present application.
  • FIG. 17 is a schematic block diagram of a control system provided in an embodiment of the present application.
  • prefixes such as “first” and “second” are used only to distinguish different description objects, and have no limiting effect on the position, order, priority, quantity or content of the described objects.
  • the use of prefixes such as ordinal numbers to distinguish description objects in the embodiments of the present application does not constitute a limitation on the described objects.
  • the meaning of "multiple" is two or more.
  • At present, after the driver performs certain operations in the vehicle, the driver still needs to confirm the validity of the interaction between the person and the vehicle through the image or text information on the instrument display or the central control screen. For example, after the driver switches the vehicle's gear from P to D, the driver still needs to confirm the switch through the text "D" displayed on the instrument display. This increases the driver's cognitive load regarding the human-machine interaction results and affects the user's human-computer interaction experience.
  • the embodiments of the present application provide a control method, device and vehicle, which can prompt the user of the state change of the vehicle through the prompt sound of the sound image drift according to the acquired sensor information.
  • the user can intuitively understand the human-computer interaction result and the state change of the vehicle through the prompt sound, which helps to improve the user's human-computer interaction experience.
  • FIG. 1 is a functional block diagram of a vehicle 100 provided in an embodiment of the present application.
  • the vehicle 100 may include a perception system 120, a display device 130, a sound device 140, and a computing platform 150, wherein the perception system 120 may include one or more sensors for sensing information about the environment surrounding the vehicle 100.
  • the perception system 120 may include a positioning system, and the positioning system may be a global positioning system (GPS), or a Beidou system or other positioning systems.
  • the perception system 120 may also include one or more of an inertial measurement unit (IMU), a laser radar, a millimeter wave radar, an ultrasonic radar, and a camera device.
  • the computing platform 150 may include one or more processors, such as processors 151 to 15n (n is a positive integer).
  • the processor is a circuit with signal processing capability.
  • the processor may be a circuit with instruction reading and execution capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU) (which can be understood as a microprocessor), or a digital signal processor (DSP); in another implementation, the processor may implement certain functions through the logical relationship of a hardware circuit, and the logical relationship of the hardware circuit is fixed or reconfigurable, such as a processor that is a hardware circuit implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as a field programmable gate array (FPGA).
  • the process of the processor loading a configuration document to implement the hardware circuit configuration can be understood as the process of the processor loading instructions to implement the functions of some or all of the above units.
  • the processor can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), a deep learning processing unit (DPU), etc.
  • the computing platform 150 can also include a memory, the memory is used to store instructions, and some or all of the processors 151 to 15n can call the instructions in the memory and execute the instructions to implement the corresponding functions.
  • the display device 130 in the cockpit is mainly divided into two categories.
  • the first category is the vehicle display screen;
  • the second category is the projection display screen, such as a head-up display (HUD).
  • the vehicle display screen is a physical display screen and an important part of the vehicle infotainment system.
  • There can be multiple display screens in the cockpit such as a digital instrument display screen, a central control screen, a display screen in front of the passenger in the co-pilot seat (also called the front passenger), a display screen in front of the left rear passenger, and a display screen in front of the right rear passenger. Even the window can be used as a display screen for display.
  • A head-up display is also known as a head-up display system.
  • HUD includes, for example, a combined head-up display (C-HUD) system, a windshield head-up display (W-HUD) system, and an augmented reality head-up display system (AR-HUD).
  • the sound-generating device 140 may be a speaker, a sound box, or a horn.
  • FIG2 shows a schematic diagram of an application scenario provided by an embodiment of the present application.
  • When it is determined from the sensor information collected by the gear sensor that the user has performed an operation of switching from D gear to P gear, the vehicle can control the speakers 201 and 202 to emit a sound image drift prompt sound, which is used to prompt that the gear state of the vehicle has changed.
  • the computing platform 150 may obtain the gear information collected by the gear sensor, which is used to indicate the gear shift from the D gear to the P gear.
  • the computing platform 150 may, according to the gear information, control the vehicle to shift to P gear and control the speakers 201 and 202 to emit a sound image drift prompt sound.
  • When it is determined from the sensor information collected by the gear position sensor that the user has performed an operation of switching from D gear to P gear, the vehicle controlling the speaker 201 and the speaker 202 to emit a sound image drift prompt sound may include: when it is determined from the sensor information that the user has switched from D gear to P gear, and no further gear-switching operation by the user is detected within a preset duration from the switch to P gear, the vehicle controls the speaker 201 and the speaker 202 to emit the sound image drift prompt sound.
  • For example, the preset duration is 0.5 seconds (s).
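The "no further shift within the preset duration" condition above can be sketched as a simple debounce check. The function and parameter names are illustrative; only the 0.5 s preset duration comes from the embodiment.

```python
def should_emit_prompt(shift_time, now, next_shift_time=None, preset=0.5):
    """Decide whether to emit the sound image drift prompt sound after a
    gear shift: emit only once the preset duration has elapsed since the
    shift and no further shift occurred within that window (seconds)."""
    if now - shift_time < preset:
        return False  # still inside the debounce window, keep waiting
    if next_shift_time is not None and next_shift_time - shift_time < preset:
        return False  # the user shifted again within the window
    return True
```

With this sketch, a shift at t=10.0 s followed by another shift at t=10.2 s suppresses the prompt, while an undisturbed shift emits it once 0.5 s have passed.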
  • the vehicle can also control the sound image drift direction of the prompt sound according to the sensor information.
  • the sound image drift direction of the prompt sound can be controlled to point from the position of the speaker 202 to the position of the speaker 201.
  • the vehicle controlling the speaker 201 and the speaker 202 to emit a sound image drift prompt sound may include: the vehicle controls the playback intensity of the speaker 201 and the speaker 202.
  • For example, at time T1, the playing intensity of the prompt sound emitted by the speaker 201 can be controlled to be 20 dB and the playing intensity of the prompt sound emitted by the speaker 202 can be controlled to be 40 dB; at time T2 after time T1, the playing intensity of the prompt sound emitted by the speaker 201 can be controlled to be 40 dB and the playing intensity of the prompt sound emitted by the speaker 202 can be controlled to be 20 dB.
  • the speaker 201 and the speaker 202 can be controlled to emit a prompt sound whose sound image drift direction is from the position of the speaker 202 to the position of the speaker 201.
  • the time interval between the time T1 and the time T2 may be 100 milliseconds (ms).
  • the time T1 may be the time when the computing platform 150 obtains the sensor information from the gear position sensor.
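The intensity-based drift described above can be sketched as a small schedule generator. The function name and tuple layout are illustrative assumptions; the 20 dB / 40 dB levels and the 100 ms interval between T1 and T2 come from the embodiment.

```python
def intensity_schedule(t1_ms, interval_ms=100, low_db=20, high_db=40):
    """Two-step playback-intensity schedule that drifts the sound image
    from speaker 202 toward speaker 201. Each entry is
    (time_ms, speaker_201_db, speaker_202_db)."""
    t2_ms = t1_ms + interval_ms
    return [
        (t1_ms, low_db, high_db),   # image starts near speaker 202
        (t2_ms, high_db, low_db),   # image ends near speaker 201
    ]
```

Reversing the two tuples would produce the opposite drift direction (from speaker 201 to speaker 202).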
  • the vehicle controlling the speaker 201 and the speaker 202 to emit a sound image drift prompt sound may include: the vehicle controls the time delay between the speaker 201 and the speaker 202.
  • For example, at time T1, the playing intensity of the prompt sound emitted by the speaker 202 can be controlled to be 20 dB while the speaker 201 does not emit the prompt sound; at time T1+ΔT, the playing intensity of the prompt sound emitted by the speaker 201 can be controlled to be 20 dB while the speaker 202 does not emit the prompt sound.
  • speaker 201 and speaker 202 can be controlled to emit the prompt sound with the sound image drifting direction from the position of speaker 202 to the position of speaker 201.
  • For example, ΔT may be 20 ms.
  • the vehicle controlling the speaker 201 and the speaker 202 to emit a sound image drift prompt sound may include: the vehicle controls both the time delay and the playback intensity of the speaker 201 and the speaker 202.
  • For example, at time T1, the playing intensity of the prompt sound emitted by the speaker 202 can be controlled to be 40 dB while the speaker 201 does not emit the prompt sound; at time T1+ΔT, the playing intensity of the prompt sound emitted by the speaker 201 can be controlled to be 20 dB while the speaker 202 does not emit the prompt sound.
  • At time T2, the playing intensity of the prompt sound emitted by the speaker 202 can be controlled to be 20 dB while the speaker 201 does not emit the prompt sound; at time T2+ΔT, the speaker 202 does not emit the prompt sound and the playing intensity of the prompt sound emitted by the speaker 201 can be controlled to be 40 dB. In this way, the speaker 201 and the speaker 202 can be controlled to emit a prompt sound whose sound image drift direction is from the position of the speaker 202 to the position of the speaker 201.
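The combined delay-and-intensity variant can be sketched in the same style. The timings and levels follow the embodiment (ΔT-delayed bursts, the level at speaker 202 falling from 40 dB to 20 dB and the level at speaker 201 rising from 20 dB to 40 dB); the function name and the None-for-silent convention are illustrative.

```python
def delay_intensity_schedule(t1_ms, t2_ms, delta_ms=20):
    """Four-step schedule combining time delay and playback intensity.
    Each entry is (time_ms, speaker_201_db, speaker_202_db), where None
    means the speaker does not emit the prompt sound at that instant."""
    return [
        (t1_ms,            None, 40),    # loud burst at speaker 202
        (t1_ms + delta_ms, 20,   None),  # delayed, quieter burst at 201
        (t2_ms,            None, 20),    # quieter burst at speaker 202
        (t2_ms + delta_ms, 40,   None),  # delayed, louder burst at 201
    ]
```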
  • the speaker 201 and the speaker 202 are used as examples for explanation.
  • the embodiment of the present application does not limit the number of sound-generating devices that emit the sound image drift prompt sound.
  • the sound image drift prompt sound can also be emitted by 3 or more speakers.
  • the speaker 201 and the speaker 202 can be controlled to emit a sound image drift prompt sound.
  • the sound image drift direction of the prompt sound is from the position of the speaker 202 to the position of the speaker 201.
  • When it is determined from the sensing information collected by the gear sensor that the user has performed an operation of switching from P gear to D gear, the vehicle can control the speakers 201 and 202 to emit a sound image drift prompt sound, which is used to prompt that the gear state of the vehicle has changed.
  • the vehicle can also control the sound image drift direction of the prompt sound according to the sensor information.
  • the sound image drift direction of the prompt sound is from the position of the speaker 201 to the position of the speaker 202.
  • The above is described by taking a straight-shift gear lever as an example, but the present application is not limited thereto.
  • it can also be a serpentine gear, a knob gear, or a hand-shift gear.
  • the vehicle display screen includes a display area 203.
  • the vehicle can control the speakers 204 and 205 to emit a sound image drift prompt sound, which is used to prompt that the gear state of the vehicle has changed.
  • the vehicle display screen includes a speaker 204 and a speaker 205.
  • When the touch data collected by the touch sensor indicates that the user's finger slides from bottom to top in the display area 203, the vehicle can be switched to D gear and the speaker 204 and the speaker 205 can be controlled to emit a sound image drift prompt sound, which is used to prompt that the gear state of the vehicle has changed.
  • the sound image drift direction of the prompt sound can be controlled to point from the position of the speaker 205 to the position of the speaker 204 .
  • the sound image drift direction of the prompt sound can be controlled to point from the position of the speaker 204 to the position of the speaker 205 .
  • the gear shift operation can also be executed by detecting a preset gesture.
  • For example, when it is detected that the vehicle is currently in P gear and it is determined from the image obtained by the camera in the vehicle cabin that the driver has made a first preset gesture (for example, a "yeah" gesture, that is, a gesture with two fingers raised), the gear can be controlled to switch from P gear to D gear and at least two sound-emitting devices in the cabin can be controlled to emit a sound image drift prompt sound, which is used to prompt that the gear state of the vehicle has changed.
  • the gear can be controlled to switch from P gear to D gear and at least two sound-emitting devices in the cockpit can be controlled to emit a sound image drift prompt sound, which is used to prompt that the gear state of the vehicle has changed.
  • the first opening threshold is 50%.
  • the first preset duration is 5 seconds (s).
  • When it is detected that the vehicle is currently in P gear and it is determined from data collected by sensors outside the cockpit (for example, one or more of a camera, a lidar, or a millimeter-wave radar) that there is an obstacle at the rear of the vehicle, the gear can be controlled to switch from P gear to D gear and at least two sound-emitting devices in the cockpit can be controlled to emit a sound image drift prompt sound, which is used to prompt that the gear state of the vehicle has changed.
  • When it is detected that the vehicle is currently in P gear and it is determined from the voice information collected by the voice sensor that the user has issued a voice command to switch to D gear, the gear can be controlled to switch from P gear to D gear and at least two sound-emitting devices in the cockpit can be controlled to emit a sound image drift prompt sound, which is used to prompt that the vehicle's gear state has changed.
  • At least two sound-emitting devices can be controlled to periodically emit a sound image drift prompt sound.
  • the vehicle can control the speaker 201 and the speaker 202 to emit a sound image drift prompt sound and the broadcast cycle of the prompt sound is 200ms.
  • For example, the vehicle can control the speaker 201 and the speaker 202 to broadcast for 1 s (that is, 5 broadcast cycles).
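The periodic broadcast above (a 200 ms cycle repeated for 1 s, i.e. 5 cycles) can be sketched as follows; the function name is illustrative.

```python
def broadcast_times(period_ms=200, total_ms=1000):
    """Start times (in ms) of each cycle when the sound image drift
    prompt sound is broadcast periodically: a 200 ms period over a
    1 s total duration yields 5 broadcast cycles."""
    return list(range(0, total_ms, period_ms))
```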
  • the position of the above speaker 201, speaker 202, speaker 204 or speaker 205 may include one or more speakers.
  • the multiple speakers may form a speaker group.
  • FIG3 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • the sound-emitting devices in at least two different positions can be controlled to emit a prompt sound with a drifting sound image, which is used to prompt that function 1 of the vehicle is turned on.
  • the sound image drift direction of the prompt sound is the normal direction of the plane where the vehicle is located.
  • the sound image drift direction of the prompt sound is from the position of the speaker 301 to the position of the speaker 302.
  • In this way, the speaker 301 and the speaker 302 can be controlled to emit a sound image drift prompt sound, thereby prompting the user that function 1 of the vehicle is activated.
  • the sound-emitting devices in at least two different positions can be controlled to emit a prompt sound with a drifting sound image, which is used to prompt that function 1 of the vehicle is turned off.
  • the sound image drift direction of the prompt sound is the normal direction of the plane where the vehicle is located.
  • the sound image drift direction of the prompt sound is from the position of the speaker 302 to the position of the speaker 301.
  • the process of controlling the speakers 301 and 302 to emit a prompt sound whose sound image drift direction is from the position of the speaker 301 to the position of the speaker 302, or a prompt sound whose sound image drift direction is from the position of the speaker 302 to the position of the speaker 301, can refer to the description in the above embodiments and will not be repeated here.
  • Function 1 includes, but is not limited to, turning on or off a fully automatic driving function, an advanced driver assistance system (ADAS) function, an intermediate driver-assistance function, a low-level driver-assistance function, a heating, ventilation and air conditioning (HVAC) function, an adaptive cruise control (ACC) function, or the like.
  • the sound devices in at least two different positions are controlled to emit a prompt sound with a drifting sound image.
  • the sound devices in at least two different positions can be controlled to emit a prompt sound with a drifting sound image, and the prompt sound is used to prompt the activation of the child lock function.
  • the function corresponding to a virtual button on the vehicle display screen is the lane departure warning (LDW) function.
  • the position of the above speaker 301 or speaker 302 may include one or more speakers.
  • the multiple speakers may form a speaker group.
  • FIG4 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • the left turn signal of the vehicle can be controlled to flash and sound-emitting devices at at least two different positions can be controlled to emit a sound image drift prompt sound.
  • For example, the sound image drift direction of the prompt sound can be controlled to point from the front passenger seat of the vehicle to the driver's seat of the vehicle.
  • the sound image drift direction is from the position of speaker 401 to the position of speaker 403.
  • the position of the above speaker 401, speaker 402 or speaker 403 may include one or more speakers.
  • the multiple speakers may form a speaker group.
  • the position of the speaker 401 includes 3 speakers, 2 of which are located on the passenger door and 1 speaker is located on the A-pillar on the passenger side.
  • When the user's operation of pulling the steering lever upward is detected, the right turn signal of the vehicle can be controlled to flash and sound-emitting devices at at least two different positions can be controlled to emit a sound image drift prompt sound.
  • For example, the sound image drift direction of the prompt sound can be controlled to point from the driver's seat of the vehicle to the front passenger seat of the vehicle.
  • the sound image drift direction is from the position of speaker 403 to the position of speaker 401.
  • FIG5 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • the operation corresponding to the sliding input can be executed and sound-emitting devices at at least two different positions can be controlled to emit a sound image drift prompt sound.
  • the prompt sound is used to prompt that the vehicle has executed the operation corresponding to the sliding input.
  • the first tab includes icons of application 1 to application 8
  • the second tab includes icons of application 9 to application 16
  • control the speakers 501 and 502 to emit a prompt sound whose sound image drift direction is from the position of the speaker 501 to the position of the speaker 502.
  • the speed of the sound image drifting may also be controlled according to the speed of the sliding input.
  • When the speed of the sliding input is lower, the speaker 501 and the speaker 502 can be controlled to emit a prompt sound with a lower sound image drift speed.
  • For example, the playing intensity of the prompt sound emitted by the speaker 501 can be controlled to be 20 dB while the speaker 502 does not emit the prompt sound; after a longer interval, the playing intensity of the prompt sound emitted by the speaker 502 can be controlled to be 20 dB while the speaker 501 does not emit the prompt sound.
  • When the speed of the sliding input is higher, the speaker 501 and the speaker 502 can be controlled to emit a prompt sound with a higher sound image drift speed in the same manner, but with a shorter interval between the two bursts.
  • the time T3 is the time when the touch data is collected by the touch sensor.
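The swipe-speed-dependent drift can be sketched as a mapping from sliding speed to the gap between the two speaker bursts. All numeric values here are illustrative assumptions, since this excerpt only states that a faster swipe yields a faster sound image drift.

```python
def drift_interval_ms(swipe_speed, preset_speed=500.0,
                      slow_interval_ms=200, fast_interval_ms=50):
    """Map the sliding-input speed to the interval between the burst at
    speaker 501 and the burst at speaker 502: a faster swipe gives a
    shorter interval, i.e. a faster sound image drift."""
    if swipe_speed > preset_speed:
        return fast_interval_ms  # higher drift speed
    return slow_interval_ms      # lower drift speed
```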
  • FIG6 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • the speaker 601 and the speaker 602 may be controlled to emit a sound image drift prompt sound.
  • the sound image drift direction of the prompt sound can be controlled to point from the position of the speaker 601 to the position of the speaker 602.
  • the vehicle can also control the sound image drift speed of the prompt sound according to the change in the opening degree of the accelerator pedal.
  • the speaker 601 and the speaker 602 can be controlled to emit a prompt sound with a lower sound image drift speed.
  • the speaker 601 and the speaker 602 may be controlled to emit a prompt sound with a higher sound image drift speed.
  • the speaker 601 and the speaker 602 can be controlled to emit a sound image drift prompt sound.
  • the sound image drift direction of the prompt sound can be controlled to point from the position of the speaker 602 to the position of the speaker 601.
  • the vehicle can also control the sound image drift speed of the prompt sound according to the change in the opening degree of the brake pedal.
  • the speaker 601 and the speaker 602 can be controlled to emit a prompt sound with a lower sound image drift speed.
  • the speaker 601 and the speaker 602 can be controlled to emit a prompt sound with a higher sound image drift speed.
  • the implementation process of controlling the speaker 601 and the speaker 602 to emit a prompt sound with a lower or higher sound image drift speed can refer to the description in the above embodiment, and will not be repeated here.
  • FIG. 7 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • the vehicle can display navigation information, current battery level, current vehicle speed and other information through the HUD during driving.
  • a speed limit sign (for example, a speed limit of 40 kilometers per hour (km/h))
  • the speaker 601 and the speaker 602 can be controlled to emit a prompt sound with the sound image drifting in the direction from the position of the speaker 602 to the position of the speaker 601.
  • the prompt sound is used to remind the user that there is a speed limit on the road that the vehicle is about to pass, or the prompt sound is used to remind the user to control the vehicle to slow down.
  • the sound image drift speed of the prompt sound can be controlled according to the difference between the current vehicle speed and the speed limit.
  • when the difference is smaller, the speaker 601 and the speaker 602 may be controlled to emit a prompt sound with a lower sound image drift speed.
  • when the difference is larger, the speaker 601 and the speaker 602 may be controlled to emit a prompt sound with a higher sound image drift speed.
  • In this way, the user can know from the sound image drift speed whether the difference between the current vehicle speed and the speed limit is too large, so that the user can quickly adjust the vehicle speed, which helps to avoid safety hazards caused by speeding and also helps to avoid fines for speeding.
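The speed-limit comparison above can be sketched as below. The 10 km/h margin is an illustrative threshold, as this excerpt only states that a larger difference between the current speed and the limit selects a faster sound image drift.

```python
def drift_speed_level(current_kmh, limit_kmh, margin_kmh=10.0):
    """Choose the sound image drift speed from the gap between the
    current vehicle speed and the speed limit: exceeding the limit by
    more than the margin selects the faster drift."""
    return "high" if current_kmh - limit_kmh > margin_kmh else "low"
```

For example, driving at 60 km/h past a 40 km/h sign selects the faster drift, while 45 km/h selects the slower one.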
  • FIG8 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • a user is driving a vehicle and waiting at a traffic light at an intersection.
  • the speaker 601 and the speaker 602 can be controlled to emit a sound image drift prompt sound, which can be used to prompt that the traffic light in the environment where the vehicle is located has changed, or can be used to instruct the user to drive the vehicle through the intersection.
  • the speaker 601 and the speaker 602 can be controlled to emit a sound image drift prompt sound.
  • the sound image drift prompt sound can prompt the user to pass the intersection promptly, which helps to avoid traffic congestion caused by a long waiting time and also helps to save the user's travel time.
  • the sound image drift direction of the prompt sound is from the position of the speaker 601 to the position of the speaker 602 .
  • FIG. 9 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • the speaker 601 and the speaker 602 can be controlled to emit a sound image drift prompt sound.
  • the prompt sound can be used to prompt the vehicle ahead to move forward, or can be used to instruct the user to drive the vehicle forward.
  • the speaker 601 and the speaker 602 can be controlled to emit a sound image drift prompt sound.
  • The above describes, with reference to FIG8 and FIG9, the scenarios of prompting a traffic light change or the vehicle ahead moving forward by using a prompt sound whose sound image drifts from the position of the speaker 601 to the position of the speaker 602, but the embodiment of the present application is not limited thereto.
  • the user can also be notified of other changes in the environment outside the cabin by using a prompt sound of sound image drift.
  • the user drives the vehicle out of the parking lot (the exit of the parking lot is a slope).
  • When it is determined from the sensor information collected by the sensors outside the cabin that the vehicle ahead is sliding down the slope, a sound image drift prompt sound can be used to prompt that the vehicle ahead is sliding down the slope.
  • the sound image drift direction of the prompt sound is from the position of the speaker 602 to the position of the speaker 601.
  • the sound image drift speed of the prompt sound can also be controlled according to the distance between the vehicle and the vehicle in the sliding state. For example, when the distance between the vehicle and the vehicle in the sliding state is greater than a preset distance (for example, 5 meters), the speaker 601 and the speaker 602 can be controlled to emit a prompt sound with a lower sound image drift speed; when the distance between the vehicle and the vehicle in the sliding state is less than or equal to the preset distance, the speaker 601 and the speaker 602 can be controlled to emit a prompt sound with a higher sound image drift speed.
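The distance-based selection above can be sketched directly from the stated 5 m preset distance; the function name and the "low"/"high" labels are illustrative.

```python
def drift_speed_for_distance(distance_m, preset_distance_m=5.0):
    """Drift-speed selection for the roll-back warning: when the
    sliding vehicle ahead is within the preset distance (5 m in the
    embodiment), the faster sound image drift is used."""
    return "low" if distance_m > preset_distance_m else "high"
```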
  • FIG. 10 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • At least two sound-emitting devices can be controlled to emit a sound image drift prompt sound, and the prompt sound is used to prompt that wireless charging of the mobile phone has succeeded.
  • the speaker 301 and the speaker 302 can be controlled to emit a sound image drift prompt sound, and the sound image drift direction of the prompt sound is from the position of the speaker 301 to the position of the speaker 302.
  • At least two sound-emitting devices can be controlled to emit a sound image drift prompt sound, and the prompt sound is used to prompt that wireless charging of the mobile phone has failed.
  • the speaker 301 and the speaker 302 can be controlled to emit a sound image drift prompt sound, and the sound image drift direction of the prompt sound is from the position of the speaker 302 to the position of the speaker 301.
  • FIG. 11 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • the vehicle when a user is detected in the main driving area, the vehicle can activate the camera in the cabin to perform face recognition on the user in the main driving area.
  • the speaker 1110 and the speaker 1120 can be controlled to emit a sound image drift prompt sound, which is used to prompt that the face recognition is successful.
  • When face recognition is successful, the sound-emitting device can also be controlled to play the voice prompt "Face recognition successful".
  • the sound image drift direction of the prompt sound can be controlled to point from the position of the speaker 1110 to the position of the speaker 1120.
  • the speaker 1110 and the speaker 1120 may be controlled to emit a sound image drift prompt sound, and the prompt sound is used to prompt the user that the face recognition has failed.
  • the voice device may be controlled to issue a voice message “face recognition failed”.
  • the sound image drift direction of the prompt sound can be controlled to point from the position of the speaker 1120 to the position of the speaker 1110.
  • the above is explained by taking the face recognition of the user as an example, and the embodiments of the present application are not limited thereto.
  • sound-emitting devices at at least two different positions can be controlled to emit a sound image drift prompt sound.
  • FIG12 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • the speaker 1210 and the speaker 1220 can be controlled to emit a sound image drift prompt sound, which is used to prompt that the passenger on the right side of the second row is not wearing a seat belt.
  • the sound image drift direction of the prompt sound may be directed from the position of the speaker 1210 to the position of the speaker 1220 , or may be directed from the position of the speaker 1220 to the position of the speaker 1210 .
  • the position of the above speaker 1210 or speaker 1220 may include one or more speakers.
  • the multiple speakers may form a speaker group.
  • the position of the speaker 1210 includes three speakers, two of which are located on the right door of the second row and one speaker is located on the C-pillar on the right side of the second row.
  • In this way, the sound image drift prompt sound can indicate the position in the cabin of the user who is not wearing a seat belt, so that the driver can promptly remind that user to fasten the seat belt, which helps to improve the user's driving experience.
  • FIG13 shows a schematic diagram of the speaker distribution in a vehicle provided by an embodiment of the present application.
  • the vehicle may include speakers 1 to 10.
  • speakers 1 and 5 may be located in the main driving area of the vehicle
  • speakers 3 and 7 may be located in the co-driving area of the vehicle
  • speakers 2 and 6 may be located in the second row left area of the vehicle
  • speakers 4 and 8 may be located in the second row right area of the vehicle
  • speakers 9 and 10 may be located near the gear position of the vehicle.
  • the vehicle may determine the at least two sound emitting devices according to the area where the user is located.
  • the speaker 1 and the speaker 5 can be controlled to emit a sound image drift prompt sound.
  • the speaker 2 and the speaker 6 can be controlled to emit a sound image drift prompt sound.
  • a mapping relationship between user input and sound-emitting devices is stored in the vehicle, and the vehicle can determine the at least two sound-emitting devices based on the mapping relationship and the sensor information.
  • Table 1 shows a mapping relationship between user input, sound generating device and sound image drift direction.
  • When it is determined from the sensing information collected by the gear position sensor that the user switches the gear from P gear to D gear, it can be determined, according to the mapping relationship in Table 1 and the sensing information, to control the speaker 9 and the speaker 10 to emit a sound image drift prompt sound.
  • the sound image drift direction of the prompt sound can also be controlled from the position of the speaker 9 to the position of the speaker 10 according to the mapping relationship and the sensor information.
  • According to the mapping relationship in Table 1 above and the sensor information, it can be determined to control the speaker 5 and the speaker 6 to emit the sound image drift prompt sound.
  • the sound image drift direction of the prompt sound can also be controlled from the position of the speaker 6 to the position of the speaker 5 according to the mapping relationship and the sensor information.
  • Table 1 above is merely illustrative, and the embodiments of the present application are not limited thereto.
  • the speaker 1 and the speaker 3 can be controlled to emit a sound image drift prompt sound.
  • a mapping relationship between user input and sound emitting device is stored in the vehicle, and the vehicle determines the at least two sound emitting devices based on the mapping relationship and the sensor information, including: the vehicle determines the at least two sound emitting devices based on the mapping relationship, the sensor information and the user's location.
  • Table 2 shows a mapping relationship between the area where the user is located, the user input, the sound-generating device, and the sound image drift direction.
  • when the sensing information includes a voice command (e.g., "turn on the seat ventilation function"), the speaker 3 and the speaker 7 can be controlled to emit a sound image drift prompt sound.
  • the sound image drift direction of the prompt sound can be controlled to point from the position of the speaker 3 to the position of the speaker 7.
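The table-driven selection described above amounts to a dictionary lookup keyed on the user's zone and input. The following is only a minimal sketch with hypothetical event and zone names (the speaker numbers follow the examples in the surrounding text; the real contents of Tables 1 and 2 are not reproduced here):

```python
# Hypothetical sketch of the Table 1 / Table 2 lookup described above.
# The event and zone names are illustrative assumptions; the speaker
# numbers follow the examples in the surrounding text.

# (user zone, user input) -> (speakers to drive, (drift source, drift target))
PROMPT_MAP = {
    ("driver", "gear_P_to_D"): ((9, 10), (9, 10)),
    ("co_driver", "voice_seat_ventilation_on"): ((3, 7), (3, 7)),
}

def select_prompt(user_zone, user_input):
    """Return (speaker_ids, drift_source, drift_target) for a sensed user
    input, or None when no prompt is configured for it."""
    entry = PROMPT_MAP.get((user_zone, user_input))
    if entry is None:
        return None
    speakers, (src, dst) = entry
    return speakers, src, dst

# Shifting from P to D selects speakers 9 and 10 and drifts the sound
# image from the position of speaker 9 toward the position of speaker 10.
print(select_prompt("driver", "gear_P_to_D"))
```

Extending the prompt behaviour then only requires adding entries to the table, which matches the low computational overhead credited to the mapping-based approach.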
  • FIG. 14 shows a schematic flow chart of a control method 1400 provided in an embodiment of the present application.
  • the method 1400 may be performed by a means of transport (e.g., a vehicle), or the method 1400 may be performed by the above-mentioned computing platform, or the method 1400 may be performed by a system consisting of a computing platform and at least two sound-generating devices, or the method 1400 may be performed by a system-on-a-chip (SoC) in the above-mentioned computing platform, or the method 1400 may be performed by a processor in the computing platform.
  • the method 1400 includes:
  • the acquiring of sensor information includes: the computing platform acquiring sensor information collected by a sensor in the vehicle.
  • the sensor may be a gear position sensor, a physical button or a sensor corresponding to a physical button, a steering wheel lever sensor, a voice sensor, an accelerator pedal sensor, a brake pedal sensor, or a touch sensor.
  • the sensing information collected by the gear position sensor can be used to indicate a change in gear position.
  • the sensing information collected by the sensor corresponding to the physical button can be used to indicate whether to turn on or off the corresponding function.
  • the sensing information collected by the steering lever sensor can be used to indicate turning in a certain direction.
  • the sensing information collected by the voice sensor includes a user's voice command, which is used to instruct the vehicle to perform corresponding operations.
  • the sensing information collected by the accelerator pedal sensor is used to indicate the opening change information of the accelerator pedal.
  • the sensing information collected by the brake pedal sensor is used to indicate the opening change information of the brake pedal.
  • the sensing information collected by the touch sensor is used to indicate that the user has clicked a virtual button (the virtual button may correspond to turning a function on or off), or the sensing information is used to indicate the direction and speed of the user's sliding input on the vehicle display screen.
  • the method 1400 further includes: controlling the sound image drift direction of the prompt sound according to the sensing information.
  • the sensor information is used to indicate adjustment from a first gear to a second gear
  • the sound image drift direction of the prompt sound is controlled according to the sensor information, including: controlling the sound image drift direction of the prompt sound to be a first direction, and the first direction points from the first gear to the second gear.
  • the speaker 201 and the speaker 202 can be controlled to emit a sound image drift prompt sound, and the sound image drift direction of the prompt sound is from the position of the speaker 202 to the position of the speaker 201.
  • the above direction from the position of the speaker 202 to the position of the speaker 201 corresponds to the direction from the D position to the P position.
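One common way to realize such a drifting sound image between two speaker positions (e.g., from the position of speaker 202 to the position of speaker 201) is amplitude panning: cross-fading the two speakers' gains over the duration of the prompt. The patent does not prescribe a particular panning law, so the equal-power curve below is only an illustrative sketch:

```python
import math

def drift_gains(t, duration):
    """Equal-power cross-fade gains at time t in [0, duration]. At t = 0 the
    sound image sits at the source speaker; at t = duration it has drifted
    fully to the destination speaker."""
    progress = min(max(t / duration, 0.0), 1.0)  # normalised drift progress
    theta = progress * math.pi / 2
    return math.cos(theta), math.sin(theta)  # (source gain, destination gain)

# Sampling a 0.5 s prompt: the source speaker fades out while the
# destination fades in, moving the perceived image between their positions.
for t in (0.0, 0.25, 0.5):
    src, dst = drift_gains(t, 0.5)
    print(f"t={t:.2f}s  src={src:.3f}  dst={dst:.3f}")
```

The equal-power law keeps the summed acoustic power roughly constant during the drift, so the prompt's loudness does not dip mid-way.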
  • the sensor information is used to indicate whether the first function is activated or deactivated, and the sound image drift direction of the prompt sound is controlled based on the sensor information, including: controlling the sound image drift direction of the prompt sound to be a second direction, where the second direction is the normal direction of the plane where the vehicle is located.
  • the speaker 301 and the speaker 302 can be controlled to emit a sound image drift prompt sound, and the sound image drift direction of the prompt sound can be directed from the position of the speaker 301 to the position of the speaker 302.
  • the direction from the position of the speaker 301 to the position of the speaker 302 corresponds to the normal direction of the plane where the vehicle is located.
  • the sensor information includes voice instructions, and the voice instructions are used to indicate that the second function is turned on.
  • the sound image drift direction of the prompt sound is controlled, including: controlling the sound image drift direction of the prompt sound to be a second direction, where the second direction is the normal direction of the plane where the vehicle is located.
  • the sensor information is used to indicate turning to a third direction
  • the controlling the sound image drift direction of the prompt sound according to the sensor information includes: controlling the sound image drift direction of the prompt sound to be the third direction.
  • the speakers 401, 402 and 403 can be controlled to emit a sound image drift prompt sound, and the sound image drift direction of the prompt sound is from the position of the speaker 401 to the position of the speaker 403.
  • the method 1400 further includes: controlling the direction in which the ambient light is turned on, where the direction in which the ambient light is turned on corresponds to the sound image drift direction, and the vehicle includes the ambient light.
  • the lighting direction of the ambient light can be controlled to correspond to the direction of the sound image drift.
  • the cabin of the vehicle includes ambient lights that run through the main driver area and the co-driver area, and the ambient lights can be controlled to light up in sequence from the co-driver area to the main driver area.
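The sequential lighting can be sketched as ordering the ambient-light segments along the drift direction. The zone names below are illustrative assumptions; a real cabin lighting controller would use its own segment identifiers:

```python
def ambient_light_sequence(zones, start_zone, end_zone):
    """Return the ambient-light zones in the order they should light up so
    that the lighting direction matches the sound image drift direction."""
    i, j = zones.index(start_zone), zones.index(end_zone)
    if i <= j:
        return zones[i:j + 1]
    return zones[j:i + 1][::-1]  # drift runs against the stored zone order

# Drift from the co-driver area to the main driver area: the segments in
# between light up in that same order.
cabin = ["co_driver", "center_console", "main_driver"]
print(ambient_light_sequence(cabin, "co_driver", "main_driver"))
```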
  • the method 1400 further includes: controlling the sound image drift speed of the prompt sound according to the sensing information.
  • the sensor information is used to indicate the change in the opening of an accelerator pedal or a brake pedal, and the sound image drift speed of the prompt sound is controlled based on the sensor information, including: controlling the sound image drift speed of the prompt sound based on the change in the opening of the accelerator pedal or the brake pedal.
  • the sound image drift speed of the prompt sound can be controlled to gradually increase.
  • the sound image drift direction of the prompt sound is from the rear of the vehicle to the head of the vehicle.
  • the sound image drift speed of the prompt sound can be controlled to gradually increase.
  • the sound image drift direction of the prompt sound is from the head of the vehicle to the tail of the vehicle.
  • the sensor information is used to indicate a sliding input of a user on a display screen
  • the controlling of the sound image drift speed of the prompt sound based on the sensor information includes: controlling the sound image drift speed of the prompt sound based on the speed of the sliding input.
  • the sound image drift speed of the prompt sound emitted by the speaker 501 and the speaker 502 can be controlled according to the speed of the sliding input.
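Whether the input is a pedal-opening change or a swipe on the display, the mapping to drift speed can be sketched as a simple capped proportional relation. The base speed, gain, and cap below are illustrative tuning constants, not values taken from the patent:

```python
def drift_speed(input_rate, base_speed=1.0, gain=2.0, max_speed=5.0):
    """Map a normalised input rate (e.g. accelerator-pedal opening change
    per second, or swipe speed on the display) to a sound image drift
    speed: a larger, faster input yields a faster drift, up to a cap."""
    speed = base_speed + gain * max(input_rate, 0.0)
    return min(speed, max_speed)
```

A harder press on the accelerator (a larger opening change per unit time) then makes the prompt's sound image sweep from the rear of the vehicle toward its head more quickly, as described above.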
  • the method 1400 further includes: determining the at least two sound emitting devices according to the area where the user is located.
  • the sound source position can be determined by the voice information collected by the microphone array.
  • the at least two sound-emitting devices can be determined from a plurality of sound-emitting devices in the main driving area.
  • the sensing information is sensing information collected by a touch sensor and is used to indicate that the user has clicked a virtual button on the display screen (to turn a certain function on or off). It can be determined, through images collected by a camera in the cockpit, that the user in the passenger seat area has clicked the virtual button, so that the at least two sound-emitting devices can be determined from the multiple sound-emitting devices in the passenger seat area.
  • the vehicle includes a mapping relationship between user input and a sound-emitting device
  • the method 1400 further includes: determining the at least two sound-emitting devices based on the mapping relationship and the sensor information.
  • the mapping relationship may be as shown in Table 1 or Table 2.
  • the at least two sound-emitting devices may be determined according to the mapping relationship and the sensing information.
  • the above determination of the at least two sound emitting devices according to the mapping relationship and the sensor information may also be understood as determining the at least two sound emitting devices according to the mapping relationship and the user input indicated by the sensor information.
  • FIG. 15 shows a schematic flow chart of a control method 1500 provided in an embodiment of the present application.
  • the method 1500 may be executed by a means of transport (e.g., a vehicle), or the method 1500 may be executed by the above-mentioned computing platform, or the method 1500 may be executed by a system consisting of a computing platform and a head-up display device, or the method 1500 may be executed by a SoC in the above-mentioned computing platform, or the method 1500 may be executed by a processor in the computing platform.
  • the method 1500 includes:
  • the acquiring of sensor information includes: the computing platform acquiring sensor information collected by sensors in the vehicle.
  • the sensor may be a sensor outside the cabin (e.g., one or more of a camera, a lidar, a millimeter-wave radar, or a centimeter-wave radar), a seat belt status sensor in the cabin, a sensor for collecting user biometric features, etc.
  • S1520: control the sound-emitting devices at at least two different positions in the vehicle to emit a sound image drift prompt sound.
  • the prompt sound is used to prompt the change of the environment in which the vehicle is located, or the prompt sound is used to prompt the position of the object to be prompted, or the prompt sound is used to prompt the success or failure of the user's biometric recognition, or the prompt sound is used to prompt the connection status between the vehicle and the mobile terminal, or the prompt sound is used to prompt the success or failure of the vehicle's wireless charging of the mobile terminal.
  • the sensing information is used to indicate changes in traffic lights in the environment in which the vehicle is located, and the prompt sound is used to prompt the changes in the traffic lights.
  • the method also includes: controlling the sound image drift direction of the prompt sound to be the direction in which the vehicle is moving.
  • the speaker 601 and the speaker 602 can be controlled to emit a sound image drift prompt sound, and the sound image drift direction of the prompt sound points from the position of the speaker 601 to the position of the speaker 602 .
  • the sound-emitting devices at at least two different positions in the vehicle are controlled to emit a sound image drift prompt sound, including: when the traffic light indicates that vehicles in a certain lane can pass through the intersection and no vehicle movement is detected within a preset time period from the time the traffic light gives that indication, controlling the sound-emitting devices at at least two different positions in the vehicle to emit a sound image drift prompt sound.
  • the sound image drift prompt sound can prompt the user to pass through the intersection promptly, which helps to avoid traffic congestion caused by a long waiting time and also helps to save the user's travel time.
  • the sensor information is used to indicate that the vehicle is currently in a congested road section and another vehicle in front of the vehicle is moving forward, and the prompt sound is used to prompt the other vehicle to move forward.
  • the speaker 601 and the speaker 602 can be controlled to emit a sound image drift prompt sound, which can be used to prompt the vehicle in front to move forward, or the prompt sound can be used to prompt the user to drive the vehicle forward.
  • the method 1500 further includes: controlling the sound image drift direction of the prompt sound to be the direction in which the vehicle is moving.
  • the sound image drift direction of the prompt sound is directed from the position of the speaker 601 to the position of the speaker 602 .
  • the sound-emitting devices at at least two different positions in the vehicle are controlled to emit a sound image drift prompt sound, including: when it is detected that the vehicle in front is moving forward and the distance between the vehicle and the vehicle in front is greater than or equal to a preset distance, controlling the sound-emitting devices at at least two different positions in the vehicle to emit a sound image drift prompt sound.
  • the prompt sound reminds the user to closely follow the vehicle in front, which helps to avoid holding up traffic on congested road sections.
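The trigger condition described above can be sketched as a small predicate: the drifting prompt fires only when the vehicle ahead is moving and the gap has opened to at least the preset distance. The 8 m default is an illustrative assumption, not a value from the patent:

```python
def should_prompt_follow(front_vehicle_moving, gap_m, preset_gap_m=8.0):
    """Decide whether to emit the drifting follow-up prompt on a congested
    road: only when the vehicle in front is moving forward and the gap to
    it is at least the preset distance (the 8 m default is an assumption)."""
    return front_vehicle_moving and gap_m >= preset_gap_m

# The prompt fires once the gap has opened up, nudging the user to close it.
print(should_prompt_follow(True, 12.0))  # gap opened while the front car moved
print(should_prompt_follow(True, 3.0))   # still close behind: no prompt
```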
  • the sensing information is collected by a sensor for collecting user biometric features (e.g., a camera, a fingerprint sensor, or an iris sensor), and when the user's biometric features are successfully recognized based on the sensing information, at least two sound-emitting devices can be controlled to emit a sound image drift prompt sound.
  • the sound image drift direction of the prompt sound is the normal direction of the plane where the vehicle is located.
  • the speaker 1110 and the speaker 1120 can be controlled to emit a sound image drift prompt sound.
  • the sound image drift direction of the prompt sound is from the position of the speaker 1110 to the position of the speaker 1120.
  • the central control screen can be controlled to display information related to the user in the main driving area.
  • the information related to the user in the main driving area includes applications, cards, wallpapers or animations related to the user in the main driving area.
  • the speaker 3 and the speaker 7 can be controlled to emit a sound image drift prompt sound.
  • the sound image drift direction of the prompt sound is from the position of the speaker 3 to the position of the speaker 7.
  • the passenger entertainment screen can be controlled to display information related to the user in the passenger area.
  • the information related to the user in the passenger area includes applications, cards, wallpapers or animations related to the user in the passenger area.
  • the sensor information is used to indicate that a user in a certain area of the vehicle is not wearing a seat belt
  • the method 1500 also includes: controlling the sound image drift direction of the prompt sound to be from the area where the user is located to the area where the driver is located, or from the area where the driver is located to the area where the user is located.
  • the speaker 1210 and the speaker 1220 can be controlled to emit a sound image drift prompt sound, which is used to remind the passenger on the right side of the second row that the seat belt is not fastened.
  • Embodiments of the present application also provide an apparatus for implementing any of the above methods.
  • an apparatus includes units (or means) for implementing each step executed by a means of transport (e.g., a vehicle), or a computing platform in a means of transport, or an SoC in a computing platform, or a processor in a computing platform in any of the above methods.
  • FIG. 16 shows a schematic block diagram of a control device 1600 provided in an embodiment of the present application.
  • the control device 1600 includes: an acquisition unit 1610, which is used to acquire sensor information; a control unit 1620, which is used to control the sound-generating devices at at least two different positions in the vehicle to emit a sound image drift prompt sound according to the sensor information, and the prompt sound is used to prompt the state change of the vehicle, or the prompt sound is used to prompt the change of the environment in which the vehicle is located, or the prompt sound is used to prompt the position of the object to be prompted, or the prompt sound is used to prompt the success or failure of the biometric recognition of the user, or the prompt sound is used to prompt the connection status of the vehicle and the mobile terminal, or the prompt sound is used to prompt the success or failure of the wireless charging of the mobile terminal by the vehicle.
  • control unit 1620 is used to control the sound image drift direction of the prompt sound according to the sensing information.
  • the sensor information is used to indicate adjustment from a first gear to a second gear
  • the control unit 1620 is used to: control the sound image drift direction of the prompt sound to be a first direction, and the first direction points from the first gear to the second gear.
  • the sensor information is used to indicate whether the first function is activated or deactivated, and the control unit 1620 is used to control the sound image drift direction of the prompt sound to be a second direction, and the second direction is the normal direction of the plane where the vehicle is located.
  • the sensing information is used to indicate turning to a third direction
  • the control unit 1620 is used to control the sound image drift direction of the prompt sound to be the third direction.
  • the control unit 1620 is further used to control the direction in which the ambient light is turned on, where the direction in which the ambient light is turned on corresponds to the sound image drift direction, and the vehicle includes the ambient light.
  • the control unit 1620 is used to control the sound image drift speed of the prompt sound according to the sensing information.
  • the sensor information is used to indicate the change in the opening of an accelerator pedal or a brake pedal
  • the control unit 1620 is used to control the sound image drift speed of the prompt sound according to the change in the opening of the accelerator pedal or the brake pedal.
  • the sensing information is used to indicate a sliding input by a user on a display screen
  • the control unit 1620 is used to control the sound image drift speed of the prompt sound according to the speed of the sliding input.
  • the device 1600 further includes: a first determining unit, configured to determine the at least two sound emitting devices according to an area where the user is located.
  • the vehicle includes a mapping relationship between the user input and the sound-generating device
  • the device 1600 further includes: a second determining unit, used to determine the at least two sound-emitting devices according to the mapping relationship and the sensing information.
  • the acquisition unit 1610 may be the computing platform in FIG. 1 or a processing circuit, a processor, or a controller in the computing platform.
  • the acquisition unit 1610 is the processor 151 in the computing platform, and the processor 151 may acquire sensor information.
  • the processor 151 may acquire gear information sent by a gear sensor, and the gear information is used to indicate switching from the P gear to the D gear.
  • the control unit 1620 may be the computing platform in FIG. 1 or a processing circuit, processor or controller in the computing platform. Taking the control unit 1620 as the processor 152 in the computing platform as an example, the processor 152 may control the at least two sound-generating devices to emit a sound image drift prompt sound according to the gear information obtained by the processor 151, and the prompt sound is used to prompt the state change of the vehicle.
  • the processor 152 may also control the sound image drift direction of the prompt sound. For example, when it is determined that the gear information indicates switching from the P gear to the D gear, the processor 152 may control the sound image drift direction of the prompt sound to point from the position of the speaker 201 to the position of the speaker 202.
  • the processor 152 may also control the sound image drift speed of the prompt sound.
  • the functions implemented by the above acquisition unit 1610 and the functions implemented by the control unit 1620 can be implemented by different processors, or can also be implemented by the same processor, which is not limited in the embodiment of the present application.
  • the division of the units in the above device is only a division of logical functions. In actual implementation, they can be fully or partially integrated into one physical entity, or they can be physically separated.
  • the units in the device can be implemented in the form of a processor calling software; for example, the device includes a processor, the processor is connected to a memory, and instructions are stored in the memory.
  • the processor calls the instructions stored in the memory to implement any of the above methods or realize the functions of the units of the device, wherein the processor is, for example, a general-purpose processor, such as a CPU or a microprocessor, and the memory is a memory in the device or a memory outside the device.
  • the units in the device can be implemented in the form of hardware circuits, and the functions of some or all of the units can be realized by designing the hardware circuits.
  • the hardware circuit can be understood as one or more processors; for example, in one implementation, the hardware circuit is an ASIC, and the functions of some or all of the above units are realized by designing the logical relationship of the components in the circuit; for example, in another implementation, the hardware circuit can be implemented by PLD.
  • Taking an FPGA as an example, it can include a large number of logic gate circuits, and the connection relationships between the logic gate circuits are configured through a configuration file so as to realize the functions of some or all of the above units. All units of the above device may be implemented entirely in the form of a processor calling software, or entirely in the form of a hardware circuit, or partially in the form of a processor calling software and the rest in the form of a hardware circuit.
  • a processor is a circuit with the ability to process signals.
  • the processor may be a circuit with the ability to read and run instructions, such as a CPU, a microprocessor, a GPU, or a DSP; in another implementation, the processor may implement certain functions through the logical relationship of a hardware circuit, and the logical relationship of the hardware circuit is fixed or reconfigurable, such as a hardware circuit implemented by an ASIC or PLD, such as an FPGA.
  • the process of the processor loading a configuration document to implement the hardware circuit configuration can be understood as the process of the processor loading instructions to implement the functions of some or all of the above units.
  • it can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as an NPU, TPU, DPU, etc.
  • each unit in the above device can be one or more processors (or processing circuits) configured to implement the above method, such as: CPU, GPU, NPU, TPU, DPU, microprocessor, DSP, ASIC, FPGA, or a combination of at least two of these processor forms.
  • processors or processing circuits
  • the SoC may include at least one processor for implementing any of the above methods or implementing the functions of each unit of the device.
  • the type of the at least one processor may be different, for example, including CPU and FPGA, CPU and artificial intelligence processor, CPU and GPU, etc.
  • An embodiment of the present application also provides a device, which includes a processing unit and a storage unit, wherein the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit so that the device executes the method or steps executed by the above embodiment.
  • the processing unit may be the processor 151 - 15n shown in FIG. 1 .
  • FIG. 17 shows a schematic block diagram of a control system 1700 provided in an embodiment of the present application.
  • the control system 1700 includes at least two sound generating devices and a computing platform, wherein the computing platform may include the control device 1600 described above.
  • control system 1700 also includes one or more sensors.
  • An embodiment of the present application also provides a means of transport, which may include the above-mentioned control device 1600 or control system 1700.
  • the means of transport may be a vehicle.
  • the embodiment of the present application further provides a computer program product, which includes: a computer program code, and when the computer program code is executed on a computer, the computer executes the above method.
  • the present application also provides a computer-readable medium storing program code.
  • when the program code runs on a computer, it enables the computer to execute the above method.
  • each step of the above method can be completed by an integrated logic circuit of hardware in a processor or an instruction in the form of software.
  • the method disclosed in conjunction with the embodiment of the present application can be directly embodied as a hardware processor for execution, or a combination of hardware and software modules in a processor for execution.
  • the software module can be located in a mature storage medium in the art such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, or a power-on erasable programmable memory, a register, etc.
  • the storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the above method in conjunction with its hardware. To avoid repetition, it is not described in detail here.
  • the memory may include a read-only memory and a random access memory, and provide instructions and data to the processor.
  • the sequence numbers of the above-mentioned processes do not imply an order of execution.
  • the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in each embodiment of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Signal Processing (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

A control method, an apparatus, and a means of transport. The method includes: acquiring sensing information; and, according to the sensing information, controlling sound-emitting devices at at least two different positions in the means of transport to emit a prompt sound with sound image drift, where the prompt sound is used to prompt a state change of the means of transport. The control method can be applied to an intelligent vehicle or an electric vehicle. Through the sound image drift prompt sound, the user can intuitively understand the human-machine interaction result and the state change of the means of transport, which helps improve the user's human-machine interaction experience.

Description

A control method, apparatus, and means of transport
This application claims priority to the Chinese patent application No. 202211337615.4, entitled "A control method, apparatus, and means of transport", filed with the China National Intellectual Property Administration on October 28, 2022, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of human-machine interaction, and more specifically, to a control method, an apparatus, and a means of transport.
Background
Currently, after performing certain operations in a vehicle, a driver still needs to confirm, through image or text information on the instrument display or the central control screen, that the interaction between the person and the vehicle has taken effect. For example, after switching the vehicle's gear from the park gear (P gear) to the drive gear (D gear), the driver still needs to confirm through the text "D" displayed on the instrument display that the vehicle has switched to the D gear. This increases the driver's cognitive load regarding human-machine interaction results and affects the user's human-machine interaction experience.
Summary
Embodiments of this application provide a control method, an apparatus, and a means of transport, which help reduce the user's cognitive load regarding human-machine interaction results and thereby help improve the user's human-machine interaction experience.
The means of transport in this application may include a road vehicle, a watercraft, an aircraft, industrial equipment, agricultural equipment, recreational equipment, or the like. For example, the means of transport may be a vehicle in the broad sense, such as a transport vehicle (e.g., a commercial vehicle, a passenger vehicle, a motorcycle, a flying car, or a train), an industrial vehicle (e.g., a forklift, a trailer, or a tractor), an engineering vehicle (e.g., an excavator, a bulldozer, or a crane), agricultural equipment (e.g., a lawn mower or a harvester), amusement equipment, or a toy vehicle; the embodiments of this application do not specifically limit the type of vehicle. As another example, the means of transport may be an aircraft, a ship, or the like.
According to a first aspect, a control method is provided. The method includes: acquiring sensing information; and, according to the sensing information, controlling sound-emitting devices at at least two different positions in the means of transport to emit a prompt sound with sound image (soundstage) drift, where the prompt sound is used to prompt a state change of the means of transport.
In the embodiments of this application, according to the acquired sensing information, the means of transport can prompt the user about its state change through a prompt sound with sound image drift. In this way, the user can intuitively understand the human-machine interaction result and the state change of the means of transport through the sound image drift prompt sound, which helps reduce the user's cognitive load regarding human-machine interaction results and helps improve the user's human-machine interaction experience.
The sound image can express one or more of the depth, height, and width of the sound emitted through the sound-emitting devices. Sound image shift (drift) may be, for example, that within a certain time interval, the sound image position of the sound changes or moves in a certain direction. In this way, a prompt sound with sound image drift can give the user the experience of a change in the spatial position of the sound.
In the embodiments of this application, when the user performs a human-machine interaction input, the means of transport can acquire the corresponding sensing information, and can then feed back to the user a prompt sound whose spatial position changes, making it clear to the user that the state of the means of transport is changing or that the means of transport has responded to the user's input. This helps improve the user's human-machine interaction experience and also helps enhance the intelligence and technological feel of the means of transport.
In some possible implementations, the acquiring of sensing information includes: acquiring sensing information collected by a sensor.
In some possible implementations, the sensor includes a physical button or a sensor corresponding to a physical button, a touch sensor, a voice sensor, a visual sensor, or the like.
Controlling the sound-emitting devices at at least two different positions in the means of transport to emit a prompt sound with sound image drift can also be understood as controlling the sound-emitting devices at at least two different positions in the means of transport to emit a prompt sound whose sound image can drift.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: controlling the sound image drift direction of the prompt sound according to the sensing information.
In the embodiments of this application, the sound image drift direction of the prompt sound can be controlled through the sensing information. In this way, the sound image drift direction of the prompt sound corresponds to the user input, which enables the user to further understand the human-machine interaction result and the state change of the means of transport, helping improve the user's human-machine interaction experience.
With reference to the first aspect, in some implementations of the first aspect, the sensing information is used to indicate an adjustment from a first gear to a second gear, and the controlling of the sound image drift direction of the prompt sound according to the sensing information includes: controlling the sound image drift direction of the prompt sound to be a first direction, where the first direction points from the first gear to the second gear.
In the embodiments of this application, when it is detected that the user performs an operation of adjusting from the first gear to the second gear, the sound image drift direction of the prompt sound can be controlled to be the first direction. In this way, the user does not need to confirm the gear change of the means of transport with the help of other devices or apparatuses, which helps reduce the user's cognitive load when performing a gear-shift operation and thereby helps improve the user's human-machine interaction experience when shifting gears.
In some possible implementations, the gear selector of the means of transport is a rotary knob, the sensing information is used to indicate a rotation direction from the first gear to the second gear, and the controlling of the sound image drift direction of the prompt sound according to the sensing information includes: controlling the sound image drift direction of the prompt sound to be the rotation direction.
In some possible implementations, the gear selector of the means of transport is a steering-column lever, the sensing information is used to indicate a first lever direction, and the controlling of the sound image drift direction of the prompt sound according to the sensing information includes: controlling the sound image drift direction of the prompt sound to be the first lever direction.
With reference to the first aspect, in some implementations of the first aspect, the sensing information is used to indicate that a first function is activated or deactivated, and the controlling of the sound image drift direction of the prompt sound according to the sensing information includes: controlling the sound image drift direction of the prompt sound to be a second direction, where the second direction is the normal direction of the plane in which the means of transport is located.
In the embodiments of this application, by controlling the sound image drift direction of the prompt sound to be the second direction, the user can be made aware that the first function has been turned on or off. In this way, the user does not need to confirm, with the help of other devices or apparatuses, whether the first function has been turned on or off, which helps reduce the user's cognitive load regarding the interaction result of turning the first function on or off, thereby helping improve the user's human-machine interaction experience.
In some possible implementations, the controlling of the sound image drift direction of the prompt sound to be the second direction includes: when the sensing information indicates that the first function is turned on, controlling the sound image drift direction of the prompt sound to be the positive direction of the second direction; or, when the sensing information indicates that the first function is turned off, controlling the sound image drift direction of the prompt sound to be the negative direction of the second direction.
With reference to the first aspect, in some implementations of the first aspect, the sensing information is used to indicate turning toward a third direction, and the controlling of the sound image drift direction of the prompt sound according to the sensing information includes: controlling the sound image drift direction of the prompt sound to be the third direction.
In the embodiments of this application, by controlling the sound image drift direction of the prompt sound to be the third direction, the user can be made aware that the means of transport has responded to the user's steering operation. In this way, the user does not need to confirm the interaction result of the steering operation through prompt information on the instrument display, which helps reduce the user's cognitive load regarding the interaction result of the steering operation and thereby helps improve the user's human-machine interaction experience when steering the means of transport.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: controlling the direction in which an ambient light is turned on, where the direction in which the ambient light is turned on corresponds to the sound image drift direction, and the means of transport includes the ambient light.
In the embodiments of this application, by controlling the lighting direction of the ambient light to correspond to the sound image drift direction, the user can be made further aware of the human-machine interaction result and the state change of the means of transport, helping improve the user's human-machine interaction experience.
In some possible implementations, the sensing information is used to indicate that wireless charging of a mobile terminal in a wireless charging area of the means of transport has succeeded or failed, and the controlling of the sound image drift direction of the prompt sound according to the sensing information includes: controlling the sound image drift direction of the prompt sound to be the second direction.
In some possible implementations, the controlling of the sound image drift direction of the prompt sound according to the sensing information includes: when the sensing information indicates that wireless charging of the mobile terminal in the wireless charging area of the means of transport has succeeded, controlling the sound image drift direction of the prompt sound to be the positive direction of the second direction; or, when the sensing information indicates that wireless charging of the mobile terminal in the wireless charging area of the means of transport has failed, controlling the sound image drift direction of the prompt sound to be the negative direction of the second direction.
In some possible implementations, the sensing information is used to indicate that a wireless connection between a mobile terminal and the means of transport has succeeded or failed, and the controlling of the sound image drift direction of the prompt sound according to the sensing information includes: controlling the sound image drift direction of the prompt sound to be the second direction.
For example, the wireless connection between the mobile terminal and the means of transport includes a Bluetooth connection.
结合第一方面,在第一方面的某些实现方式中,该方法还包括:根据该传感信息,控制该提示音的声像漂移速度。
本申请实施例中,可以通过传感信息控制提示音的声像漂移速度。这样,通过提示音的声像漂移速度模拟传感信息指示的用户输入的速度,有助于进一步提升用户的人机交互体验。
结合第一方面,在第一方面的某些实现方式中,该传感信息用于指示加速踏板或者制动踏板的开度变化,该根据该传感信息,控制该提示音的声像漂移速度,包括:根据该加速踏板或者该制动踏板的开度变化,控制该提示音的声像漂移速度。
本申请实施例中,通过加速踏板或者制动踏板的开度变化,控制该提示音的声像漂移速度。这样,通过不同的声像漂移速度模拟运载工具的不同加速状态,有助于提升用户在控制车辆加速或者减速时的人机交互体验。
结合第一方面,在第一方面的某些实现方式中,该传感信息用于指示用户在显示屏上的滑动输入,该根据该传感信息,控制该提示音的声像漂移速度,包括:根据该滑动输入的速度,控制该提示音的声像漂移速度。
本申请实施例中,通过用户在显示屏上滑动输入的速度控制提示音的声像漂移速度。这样,可以使得用户在显示屏上的滑动输入的速度与声像漂移速度相对应,有助于提升用户在显示屏上执行滑动输入时的人机交互体验。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:根据用户所在的区域,确定该至少两个发声装置。
本申请实施例中,通过用户所在的区域可以确定该至少两个发声装置。这样,可以控制用户所在的区域的至少两个不同位置的发声装置发出声像漂移的提示音,从而进一步让用户在其所处区域中明确人机交互结果与运载工具的状态变化,有助于提升用户的人机交互体验。
结合第一方面,在第一方面的某些实现方式中,该运载工具中包括用户输入与发声装置的映射关系,该方法还包括:根据该映射关系和该传感信息,确定该至少两个发声装置。
本申请实施例中,可以通过用户输入与发声装置的映射关系以及该传感信息确定该至少两个发声装置。这样,可以降低确定该至少两个发声装置时的计算开销,有助于节省运载工具的功耗。
本申请实施例还提供了一种控制方法,该控制方法包括:在检测到用户针对运载工具的输入时,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示该运载工具在执行该输入对应的操作。
第二方面,提供了一种控制方法,该方法包括:获取传感信息;根据该传感信息,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示该运载工具所处环境的状态变化,或者,该提示音用于提示待提示对象的相对位置,或者,该提示音用于提示对用户的生物特征识别结果,或者,该提示音用于提示该运载工具与移动终端的连接状态,或者,该提示音用于提示该运载工具对移动终端无线充电成功或者失败。
结合第二方面,在第二方面的某些实现方式中,该传感信息用于指示运载工具所处环境的交通指示灯变化,该提示音用于提示该交通指示灯变化,或者,该提示音用于提示用户驾驶车辆通过路口。
在一些可能的实现方式中,以该运载工具是车辆为例,在车辆所处路口的交通指示灯指示处于某个车道的车辆可以通过该路口时,该方法还包括:控制该提示音的声像漂移方向为该车辆前进的方向。
在一些可能的实现方式中,该控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,包括:在该交通指示灯指示处于某个车道的车辆可以通过该路口起的预设时长内未检测到车辆前进时,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音。
本申请实施例中,在交通指示灯指示可以通行时,若用户的注意力不在交通指示灯上,可以通过声像漂移的提示音提示用户快速通过该路口,有助于避免由于等待时间过长而导致的压车现象,也有助于节省用户的通行时间。
结合第二方面,在第二方面的某些实现方式中,以该运载工具是车辆为例,该传感信息用于指示车辆正处于拥堵路段且车辆前方的另一车辆向前移动,该提示音用于提示该另一车辆向前移动。
在一些可能的实现方式中,该方法还包括:控制该提示音的声像漂移方向为该车辆前进的方向。
在一些可能的实现方式中,该控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,包括:在检测到该另一车辆向前移动且车辆与该另一车辆之间的距离大于或者等于预设距离时,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音。
本申请实施例中,在车辆处于拥堵路段且前方车辆移动时,如果用户的注意力不在交通指示灯上,那么可以通过声像漂移的提示音提示用户紧跟前方车辆,有助于避免拥堵路段上的压车现象。
结合第二方面,在第二方面的某些实现方式中,该传感信息用于指示运载工具中某个区域的用户未系安全带,该方法还包括:控制该提示音的声像漂移方向为由该用户所在的区域指向驾驶员所在的区域,或者,由驾驶员所在的区域指向该用户所在的区域。
本申请实施例中,在运载工具中某个用户未系安全带时,可以通过声像漂移的提示音向驾驶员提示未系安全带的用户所在的区域,从而帮助驾驶员及时提醒该用户系好安全带,有助于提升用户的驾乘体验。
第三方面,提供了一种控制装置,该控制装置包括:获取单元,用于获取传感信息;控制单元,用于根据该传感信息,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示该运载工具的状态变化。
结合第三方面,在第三方面的某些实现方式中,该控制单元,用于:根据该传感信息,控制该提示音的声像漂移方向。
结合第三方面,在第三方面的某些实现方式中,该传感信息用于指示从第一档位调整至第二档位,该控制单元,用于:控制该提示音的声像漂移方向为第一方向,该第一方向由该第一档位指向该第二档位。
结合第三方面,在第三方面的某些实现方式中,该传感信息用于指示第一功能启动或者关闭,该控制单元,用于:控制该提示音的声像漂移方向为第二方向,该第二方向为该运载工具所在平面的法向。
结合第三方面,在第三方面的某些实现方式中,该传感信息用于指示向第三方向转向,该控制单元,用于:控制该提示音的声像漂移方向为该第三方向。
结合第三方面,在第三方面的某些实现方式中,该控制单元,还用于:控制氛围灯点亮的方向,该氛围灯点亮的方向与该声像漂移方向相对应,该运载工具包括该氛围灯。
结合第三方面,在第三方面的某些实现方式中,该控制单元,用于:根据该传感信息,控制该提示音的声像漂移速度。
结合第三方面,在第三方面的某些实现方式中,该传感信息用于指示加速踏板或者制动踏板的开度变化,该控制单元,用于:根据该加速踏板或者该制动踏板的开度变化,控制该提示音的声像漂移速度。
结合第三方面,在第三方面的某些实现方式中,该传感信息用于指示用户在显示屏上的滑动输入,该控制单元,用于:根据该滑动输入的速度,控制该提示音的声像漂移速度。
结合第三方面,在第三方面的某些实现方式中,该装置还包括:第一确定单元,用于根据用户所在的区域,确定该至少两个发声装置。
结合第三方面,在第三方面的某些实现方式中,该运载工具中包括用户输入与发声装置的映射关系,该装置还包括:第二确定单元,用于根据该映射关系和该传感信息,确定该至少两个发声装置。
第四方面,提供了一种控制装置,该控制装置包括:获取单元,用于获取传感信息;控制单元,用于根据该传感信息,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示该运载工具所处环境的状态变化,或者,该提示音用于提示待提示对象的相对位置,或者,该提示音用于提示对用户的生物特征识别结果。
第五方面,提供了一种控制装置,该控制装置包括处理单元和存储单元,其中存储单元用于存储指令,处理单元执行存储单元所存储的指令,以使该控制装置执行第一方面或者第二方面中任一种可能的方法。
第六方面,提供了一种控制系统,该系统包括至少两个发声装置和计算平台,其中,该计算平台包括第三方面或者第四方面中任一种可能的装置,或者,该计算平台包括第五方面所述的装置。
在一些可能的实现方式中,该控制系统还包括一个或者多个传感器。
第七方面,提供了一种运载工具,该运载工具包括第三方面中任一种可能的装置,或者,包括第四方面所述的装置,或者,包括第五方面所述的装置,或者,包括第六方面所述的控制系统。
在一些可能的实现方式中,该运载工具为车辆。
第八方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述第一方面或者第二方面中任一种可能的方法。
需要说明的是,上述计算机程序代码可以全部或者部分存储在第一存储介质上,其中第一存储介质可以与处理器封装在一起的,也可以与处理器单独封装,本申请实施例对此不作具体限定。
第九方面,提供了一种计算机可读介质,所述计算机可读介质存储有程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述第一方面或者第二方面中任一种可能的方法。
第十方面,本申请实施例提供了一种芯片系统,该芯片系统包括处理器,用于调用存储器中存储的计算机程序或计算机指令,以使得该处理器执行上述第一方面或者第二方面中任一种可能的方法。
结合第十方面,在一种可能的实现方式中,该处理器通过接口与存储器耦合。
结合第十方面,在一种可能的实现方式中,该芯片系统还包括存储器,该存储器中存储有计算机程序或计算机指令。
本申请实施例中,根据获取的传感信息,可以通过声像漂移的提示音向用户提示运载工具的状态变化。这样,用户可以通过声像漂移的提示音直观地明确人机交互结果与运载工具的状态变化,无需用户借助其他设备或者装置确定人机交互结果,有助于降低用户对人机交互结果的认知负荷,从而有助于提升用户的人机交互体验。
通过传感信息控制提示音的声像漂移方向,使得提示音的声像漂移方向与用户输入相对应。这样,可以进一步让用户明确人机交互结果与运载工具的状态变化,有助于提升用户的人机交互体验。
通过传感信息控制提示音的声像漂移速度,使得提示音的声像漂移速度模拟传感信息指示的用户输入的速度,有助于进一步提升用户的人机交互体验。
通过用户所在的区域可以确定该至少两个发声装置。这样,可以控制用户所在的区域的至少两个不同位置的发声装置发出的声像漂移的提示音,从而进一步让用户在其所处区域中明确人机交互结果与运载工具的状态变化,有助于提升用户的人机交互体验。
通过用户输入与发声装置的映射关系以及该传感信息确定该至少两个发声装置。这样,可以降低确定该至少两个发声装置时的计算开销,有助于节省运载工具的功耗。
附图说明
图1是本申请实施例提供的运载工具的功能框图示意。
图2是本申请实施例提供的应用场景的示意图。
图3是本申请实施例提供的应用场景的另一示意图。
图4是本申请实施例提供的应用场景的另一示意图。
图5是本申请实施例提供的应用场景的另一示意图。
图6是本申请实施例提供的应用场景的另一示意图。
图7是本申请实施例提供的应用场景的另一示意图。
图8是本申请实施例提供的应用场景的另一示意图。
图9是本申请实施例提供的应用场景的另一示意图。
图10是本申请实施例提供的应用场景的另一示意图。
图11是本申请实施例提供的应用场景的另一示意图。
图12是本申请实施例提供的应用场景的另一示意图。
图13是本申请实施例提供的车辆中扬声器分布的示意图。
图14是本申请实施例提供的控制方法的示意性流程图。
图15是本申请实施例提供的控制方法的另一示意性流程图。
图16是本申请实施例提供的一种控制装置的示意性框图。
图17是本申请实施例提供的控制系统的示意性框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
本申请实施例中采用诸如“第一”、“第二”的前缀词,仅仅为了区分不同的描述对象,对被描述对象的位置、顺序、优先级、数量或内容等没有限定作用。本申请实施例中对序数词等用于区分描述对象的前缀词的使用不对所描述对象构成限制,对所描述对象的陈述参见权利要求或实施例中上下文的描述,不应因为使用这种前缀词而构成多余的限制。此外,在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
如前所述,当前驾驶员在车辆中执行某些操作后,仍须通过仪表显示屏或者中控屏上的图像或者文本信息,确认人和车辆的交互结果有效。例如,驾驶员在将车辆的档位从P档切换至D档后,还需要通过仪表显示屏上显示的文本信息“D”来确认车辆已经切换至D档。这样增加了驾驶员对人机交互结果的认知负荷,对用户的人机交互体验造成影响。
本申请实施例提供了一种控制方法、装置和运载工具,根据获取的传感信息,可以通过声像漂移的提示音向用户提示运载工具的状态变化。这样,用户可以通过提示音直观地明确人机交互结果与运载工具的状态变化,有助于提升用户的人机交互体验。
图1是本申请实施例提供的运载工具100的一个功能框图示意。运载工具100可以包括感知系统120、显示装置130、发声装置140和计算平台150,其中,感知系统120可以包括感测关于运载工具100周边的环境的信息的一种或多种传感器。例如,感知系统120可以包括定位系统,定位系统可以是全球定位系统(global positioning system,GPS),也可以是北斗系统或者其他定位系统。感知系统120还可以包括惯性测量单元(inertial measurement unit,IMU)、激光雷达、毫米波雷达、超声雷达以及摄像装置中的一种或者多种。
运载工具100的部分或所有功能可以由计算平台150控制。计算平台150可包括一个或多个处理器,例如处理器151至15n(n为正整数),处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如中央处理单元(central processing unit,CPU)、微处理器、图形处理器(graphics processing unit,GPU)(可以理解为一种微处理器)、或数字信号处理器(digital signal processor,DSP)等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为专用集成电路(application-specific integrated circuit,ASIC)或可编程逻辑器件(programmable logic device,PLD)实现的硬件电路,例如现场可编程门阵列(field programmable gate array,FPGA)。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,处理器还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如神经网络处理单元(neural network processing unit,NPU)、张量处理单元(tensor processing unit,TPU)、深度学习处理单元(deep learning processing unit,DPU)等。此外,计算平台150还可以包括存储器,存储器用于存储指令,处理器151至15n中的部分或全部处理器可以调用存储器中的指令,执行指令,以实现相应的功能。
座舱内的显示装置130主要分为两类,第一类是车载显示屏;第二类是投影显示屏,例如抬头显示装置(head up display,HUD)。车载显示屏是一种物理显示屏,是车载信息娱乐系统的重要组成部分,座舱内可以设置有多块显示屏,如数字仪表显示屏、中控屏、副驾驶位上的乘客(也称为前排乘客)面前的显示屏、左侧后排乘客面前的显示屏以及右侧后排乘客面前的显示屏,甚至是车窗也可以作为显示屏进行显示。抬头显示,也称平视显示系统,主要用于在驾驶员前方的显示设备(例如挡风玻璃)上显示时速、导航等驾驶信息,以降低驾驶员视线转移时间,避免因驾驶员视线转移而导致的瞳孔变化,提升行驶安全性和舒适性。HUD例如包括组合型抬头显示(combiner-HUD,C-HUD)系统、风挡型抬头显示(windshield-HUD,W-HUD)系统、增强现实型抬头显示系统(augmented reality HUD,AR-HUD)。应理解,HUD也可以随着技术演进出现其他类型的系统,本申请对此不作限定。
发声装置140可以为扬声器、音箱或者喇叭等。
图2示出了本申请实施例提供的一种应用场景的示意图。
如图2中的(a)所示,在通过档位传感器采集的传感信息确定用户执行从D档切换至P档的操作时,车辆可以控制扬声器201和扬声器202发出声像漂移的提示音,该提示音用于提示车辆的档位状态发生变化。
例如,计算平台150可以获取档位传感器采集的档位信息,该档位信息用于指示从D档切换至P档。计算平台150可以根据该档位信息,控制车辆的档位切换至P档且控制扬声器201和扬声器202发出声像漂移的提示音。
一个实施例中,在通过档位传感器采集的传感信息确定用户执行从D档切换至P档的操作时,车辆可以控制扬声器201和扬声器202发出声像漂移的提示音,包括:在通过档位传感器采集的传感信息确定用户执行从D档切换至P档的操作且从切换至P档起的预设时长内未检测到用户切换档位的操作时,车辆可以控制扬声器201和扬声器202发出声像漂移的提示音。
示例性的,该预设时长为0.5秒(second,s)。
一个实施例中,车辆还可以根据该传感信息,控制该提示音的声像漂移方向。例如,可以控制该提示音的声像漂移方向为从扬声器202的位置指向扬声器201的位置。
一个实施例中,车辆可以控制扬声器201和扬声器202发出声像漂移的提示音,包括:车辆控制扬声器201和扬声器202的播放强度。
例如,在T1时刻,可以控制扬声器201发出的提示音的播放强度为20dB且控制扬声器202发出的提示音的播放强度为40dB;在T1时刻之后的T2时刻,可以控制扬声器201发出的提示音的播放强度为40dB且控制扬声器202发出的提示音的播放强度为20dB。从而可以控制扬声器201和扬声器202发出声像漂移方向为由扬声器202的位置指向扬声器201的位置的提示音。
示例性的,该T1时刻和T2时刻之间的时间间隔可以为100毫秒(millisecond,ms)。
一个实施例中,该T1时刻可以为计算平台150从档位传感器获取到该传感信息的时刻。
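上述通过控制两个扬声器的播放强度实现声像漂移的过程,可以用下面的示意性代码勾勒(此代码并非专利原文内容,时刻与强度数值沿用上文示例,函数名与线性插值方式均为便于说明而作的假设):

```python
def intensity_pan(t, t1=0.0, t2=0.1, near_db=40.0, far_db=20.0):
    """按时间在两只扬声器之间做播放强度的交叉渐变(示意模型)。

    返回(扬声器202的强度, 扬声器201的强度),单位dB。
    T1时刻声像靠近扬声器202(202为40dB、201为20dB),
    T2时刻声像移到扬声器201(202为20dB、201为40dB),
    两时刻之间按线性插值过渡,从而形成由扬声器202的位置
    指向扬声器201的位置的声像漂移。
    """
    if t <= t1:
        frac = 0.0
    elif t >= t2:
        frac = 1.0
    else:
        frac = (t - t1) / (t2 - t1)
    db_202 = near_db + (far_db - near_db) * frac  # 40dB逐渐降为20dB
    db_201 = far_db + (near_db - far_db) * frac   # 20dB逐渐升为40dB
    return db_202, db_201
```

例如,在T1=0、T2=0.1s(即间隔100ms)时,t=0时返回(40,20),t=0.1时返回(20,40),与上文示例一致。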
一个实施例中,车辆可以控制扬声器201和扬声器202发出声像漂移的提示音,包括:车辆控制扬声器201和扬声器202的时延。
例如,在T1时刻,可以控制扬声器202发出的提示音的播放强度为20dB且控制扬声器201不发出提示音;在T1+△T时刻控制扬声器201发出的提示音的播放强度为20dB且控制扬声器202不发出提示音。从而可以控制扬声器201和扬声器202发出声像漂移方向为由扬声器202的位置指向扬声器201的位置的提示音。
示例性的,△T可以为20ms。
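通过时延实现声像漂移,可以理解为错开两只扬声器的播放起始时刻。下面的示意性代码(非专利原文,数据结构为便于说明而假设)按上文示例生成播放计划:

```python
def delay_pan_schedule(t1=0.0, delta_t=0.020, level_db=20.0):
    """生成时延式声像漂移的播放计划(示意模型)。

    返回[(时刻, 扬声器编号, 播放强度dB)]列表:T1时刻由扬声器202发声
    (扬声器201静音),T1+ΔT时刻改由扬声器201发声(扬声器202静音),
    从而形成由扬声器202的位置指向扬声器201的位置的声像漂移。
    """
    return [
        (t1, 202, level_db),            # T1:扬声器202发出20dB提示音
        (t1 + delta_t, 201, level_db),  # T1+ΔT:扬声器201发出20dB提示音
    ]
```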
一个实施例中,车辆可以控制扬声器201和扬声器202发出声像漂移的提示音,包括:车辆控制扬声器201和扬声器202的时延和播放强度。
例如,在T1时刻,可以控制扬声器202发出的提示音的播放强度为40dB且控制扬声器201不发出提示音;在T1+△T时刻控制扬声器201发出的提示音的播放强度为20dB且控制扬声器202不发出提示音。在T1时刻之后的T2时刻,控制扬声器202发出的提示音的播放强度为20dB且控制扬声器201不发出提示音;在T2+△T时刻控制扬声器202不发出提示音且控制扬声器201发出的提示音的播放强度为40dB。从而可以控制扬声器201和扬声器202发出声像漂移方向为由扬声器202的位置指向扬声器201的位置的提示音。
以上示例中是以扬声器201和扬声器202为例进行说明的,本申请实施例对发出声像漂移的提示音的发声装置的个数并不限定。例如,还可以是通过3个或者3个以上的扬声器发出声像漂移的提示音。
一个实施例中,在通过档位传感器采集的传感信息确定用户执行从D档切换至倒车档(R档)的操作时,可以控制扬声器201和扬声器202发出声像漂移的提示音。示例性的,该提示音的声像漂移方向为从扬声器202的位置指向扬声器201的位置。
一个实施例中,在通过档位传感器采集的传感信息确定用户执行从D档切换至空档(N档)的操作时,可以控制扬声器201和扬声器202发出声像漂移的提示音。示例性的,该提示音的声像漂移方向为从扬声器202的位置指向扬声器201的位置。
如图2中的(b)所示,在通过档位传感器采集的传感信息确定用户执行从P档切换至D档的操作,车辆可以控制扬声器201和扬声器202发出声像漂移的提示音,该提示音用于提示车辆的档位状态发生变化。
一个实施例中,车辆还可以根据该传感信息,控制该提示音的声像漂移方向。例如,该提示音的声像漂移方向为从扬声器201的位置指向扬声器202的位置。
应理解,控制扬声器201和扬声器202发出声像漂移方向为从扬声器201的位置指向扬声器202的位置的提示音的过程可以参考上述实施例中的描述,此处不再赘述。
以上图2中的(a)和(b)是以直排式档位为例进行说明的,本申请实施例并不限于此。例如,还可以是蛇形档位、旋钮档位或者怀档等。
如图2中的(c)所示,车载显示屏上包括显示区域203。在通过车载显示屏上的触摸传感器采集的触摸数据确定用户在显示区域203中执行从下往上的滑动输入时,车辆可以控制扬声器204和扬声器205发出声像漂移的提示音,该提示音用于提示车辆的档位状态发生变化。
示例性的,车载显示屏中包括扬声器204和扬声器205。在触摸传感器采集的触摸数据指示用户手指在显示区域203中从下往上的滑动操作时,车辆可以切换至D档且控制扬声器204和扬声器205发出声像漂移的提示音,该提示音用于提示车辆的档位发生变化。
一个实施例中,在该触摸数据指示用户手指在显示区域203中从下往上的滑动输入时,可以控制该提示音的声像漂移方向为从扬声器205的位置指向扬声器204的位置。
应理解,控制扬声器204和扬声器205发出声像漂移方向为从扬声器205的位置指向扬声器204的位置的提示音的过程可以参考上述实施例中的描述,此处不再赘述。
如图2中的(d)所示,在通过车载显示屏上的触摸传感器采集的触摸数据确定用户手指在显示区域203中执行从上往下的滑动输入时,车辆可以控制扬声器204和扬声器205发出声像漂移的提示音,该提示音用于提示车辆的档位状态发生变化。
一个实施例中,在该触摸数据指示在显示区域203中从上往下的滑动输入时,可以控制该提示音的声像漂移方向为从扬声器204的位置指向扬声器205的位置。
以上图2中的(c)和(d)是以通过车载显示屏上滑动输入执行换档操作为例进行说明的,本申请实施例并不限于此。例如,还可以通过检测预设手势的方式执行换档操作。例如,在检测到当前车辆处于P档且通过车辆座舱内的摄像头获取的图像确定驾驶员比出第一预设手势(例如,比耶手势或者举起两根手指时的手势)时,可以控制档位从P档切换至D档且控制座舱内的至少两个发声装置发出声像漂移的提示音,该提示音用于提示车辆的档位发生变化。
又例如,当检测到当前车辆处于P档且通过刹车踏板传感器获取的传感信息确定刹车踏板的开度大于或者等于第一开度阈值的时长大于或者等于第一预设时长时,可以控制档位从P档切换至D档且控制座舱内的至少两个发声装置发出声像漂移的提示音,该提示音用于提示车辆的档位发生变化。
示例性的,该第一开度阈值为50%。
示例性的,该第一预设时长为5秒(second,s)。
又例如,在检测到当前车辆处于P档且通过座舱外的传感器(例如,摄像头、激光雷达或者毫米波雷达中的一种或者多种)采集的数据确定车辆尾部有障碍物时,可以控制档位从P档切换至D档且控制座舱内的至少两个发声装置发出声像漂移的提示音,该提示音用于提示车辆的档位发生变化。
又例如,在检测到当前车辆处于P档且通过语音传感器采集的语音信息确定用户发出切换至D档的语音指令时,可以控制档位从P档切换至D档且控制座舱内的至少两个发声装置发出声像漂移的提示音,该提示音用于提示车辆的档位发生变化。
一个实施例中,可以控制至少两个发声装置周期性地发出声像漂移的提示音。以图2中的(a)所示的场景为例,在通过档位传感器采集的传感信息确定用户执行从D档切换至P档的操作时,车辆可以控制扬声器201和扬声器202发出声像漂移的提示音且该提示音的播报周期为200ms。车辆可以控制扬声器201和扬声器202进行1s(或者,5个播报周期)的播报。
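上述周期性播报可以按“播报周期×周期数=播报总时长”进行调度。以下为一个示意性片段(非专利原文,仅演示200ms周期、播报1s共5个周期时各周期起始时刻的计算):

```python
def periodic_prompt_times(period_s=0.2, total_s=1.0):
    """返回每个播报周期的起始时刻列表,例如200ms周期播报1s共5次。"""
    count = int(round(total_s / period_s))
    return [i * period_s for i in range(count)]
```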
以上扬声器201、扬声器202、扬声器204或者扬声器205的位置可以包括一个或者多个扬声器。例如,当该扬声器201的位置上包括多个扬声器时,该多个扬声器可以组成扬声器群。
图3示出了本申请实施例提供的另一种应用场景的示意图。
如图3中的(a)所示,在通过方向盘上的物理按键传感器采集的传感信息确定驾驶员开启功能1的操作时,可以控制至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示车辆的功能1开启。
一个实施例中,该提示音的声像漂移方向为该车辆所在平面的法向。例如,该提示音的声像漂移方向为从扬声器301的位置指向扬声器302的位置。
以上是以功能1启动时控制该提示音的声像漂移方向为从扬声器301的位置指向扬声器302的位置为例进行说明的,本申请实施例并不限于此。例如,还可以在通过方向盘上的物理按键传感器采集的传感信息确定驾驶员开启功能1的操作时,控制扬声器201和扬声器202发出声像漂移的提示音,从而提示用户车辆的功能1启动。
如图3中的(b)所示,在通过方向盘上的物理按键传感器采集的传感信息确定驾驶员关闭功能1的操作时,可以控制至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示车辆的功能1关闭。
一个实施例中,该提示音的声像漂移方向为该车辆所在平面的法向。例如,该提示音的声像漂移方向为从扬声器302的位置指向扬声器301的位置。
应理解,控制扬声器301和扬声器302发出声像漂移方向为从扬声器301的位置指向扬声器302的位置的提示音,或者,发出声像漂移方向为从扬声器302的位置指向扬声器301的位置的提示音的过程可以参考上述实施例中的描述,此处不再赘述。
一个实施例中,该功能1包括但不限于全自动驾驶功能、高级驾驶辅助系统(advanced driving assistant system,ADAS)功能、中级驾驶辅助系统功能、低级驾驶辅助系统功能、暖通空调(heating,ventilation and air conditioning,HVAC)功能或者自适应巡航控制(adaptive cruise control,ACC)功能等。
以上是以驾驶员点击方向盘上的物理按键启动或者关闭功能1为例进行说明的,本申请实施例并不限于此。示例性的,还可以是在通过其他部件上的物理按键传感器采集的传感信息确定用户开启或者关闭某个功能时,控制至少两个不同位置的发声装置发出声像漂移的提示音。例如,在通过车门上的物理按键传感器采集的传感信息确定用户启动车辆上儿童锁功能后,可以控制至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示儿童锁功能启动。
示例性的,还可以是在通过显示屏上的虚拟按键传感器(例如,触摸传感器)采集的传感信息确定用户开启或者关闭某个功能时,控制至少两个不同位置的发声装置发出声像漂移的提示音。例如,车载显示屏上的某个虚拟按键对应的功能为车道偏离预警(lane departure warning,LDW)功能。在通过车载显示屏上的触摸传感器采集的触摸数据确定用户启动车道偏离预警功能时,可以控制至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示车辆的LDW功能启动。
以上扬声器301或者扬声器302的位置可以包括一个或者多个扬声器。例如,当该扬声器301的位置上包括多个扬声器时,该多个扬声器可以组成扬声器群。
图4示出了本申请实施例提供的另一种应用场景的示意图。
如图4所示,在检测到用户向下拨动转向拨杆的操作时,可以控制车辆的左转向灯闪烁且控制至少两个不同位置的发声装置发出声像漂移的提示音。
一个实施例中,可以通过转向拨杆传感器采集的传感信息确定用户向下拨动转向拨杆。
一个实施例中,在检测到用户向下拨动转向拨杆的操作时,可以控制提示音的声像漂移方向由车辆的副驾位置指向车辆的主驾位置。例如,该声像漂移方向为从扬声器401的位置指向扬声器403的位置。
以上扬声器401、扬声器402或者扬声器403的位置可以包括一个或者多个扬声器。例如,当该扬声器401的位置上包括多个扬声器时,该多个扬声器可以组成扬声器群。例如,扬声器401所在的位置包括3个扬声器,这3个扬声器中的2个扬声器位于副驾车门上且1个扬声器位于副驾侧的A柱上。
一个实施例中,在检测到用户向上拨动转向拨杆的操作时,可以控制车辆的右转向灯闪烁且控制至少两个不同位置的发声装置发出声像漂移的提示音。示例性的,可以控制提示音的声像漂移方向由车辆的主驾位置指向车辆的副驾位置。例如,该声像漂移方向为从扬声器403的位置指向扬声器401的位置。
图5示出了本申请实施例提供的另一种应用场景的示意图。
如图5所示,在通过车载显示屏上的触摸传感器采集的触摸数据确定用户在该车载显示屏上的滑动输入时,可以执行该滑动输入对应的操作且控制至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示车辆在执行该滑动输入对应的操作。
以该车载显示屏显示多个页签中的第一个页签(例如,第一个页签上包括应用程序1至应用程序8的图标)为例,在检测到用户在该车载显示屏上的从右侧至左侧的滑动输入时,可以切换至显示该多个页签中的第二个页签(例如,第二个页签上包括应用程序9至应用程序16的图标)且控制扬声器501和扬声器502发出声像漂移方向为由扬声器501的位置指向扬声器502的位置的提示音。
一个实施例中,还可以根据该滑动输入的速度,控制声像漂移的速度。
示例性的,在根据该触摸数据确定该滑动输入的速度小于预设速度阈值时,可以控制扬声器501和扬声器502发出声像漂移速度较低的提示音。例如,在T3时刻,可以控制扬声器501发出的提示音的播放强度为20dB且控制扬声器502不发出提示音;在T3+2△T时刻控制扬声器502发出的提示音的播放强度为20dB且控制扬声器501不发出提示音。
示例性的,在根据该触摸数据确定该滑动输入的速度大于或者等于预设速度阈值时,可以控制扬声器501和扬声器502发出声像漂移速度较高的提示音。例如,在T3时刻,可以控制扬声器501发出的提示音的播放强度为20dB且控制扬声器502不发出提示音;在T3+△T时刻控制扬声器502发出的提示音的播放强度为20dB且控制扬声器501不发出提示音。
示例性的,该T3时刻为通过触摸传感器采集到触摸数据的时刻。
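按滑动输入速度选择声像漂移快慢,可以抽象为按阈值选择相邻两次发声的时间间隔(上文示例中慢速为2ΔT、快速为ΔT)。下面是一个示意性实现(非专利原文,阈值与间隔均为假设参数):

```python
def drift_interval(slide_speed, threshold, delta_t=0.020):
    """滑动速度低于预设速度阈值时返回2ΔT(声像漂移较慢),
    否则返回ΔT(声像漂移较快)。"""
    return 2 * delta_t if slide_speed < threshold else delta_t
```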
图6示出了本申请实施例提供的另一种应用场景的示意图。
如图6中的(a)所示,在通过加速踏板传感器采集的传感信息确定加速踏板的开度变化时,可以控制扬声器601和扬声器602发出声像漂移的提示音。
一个实施例中,在加速踏板的开度逐渐变大时,可以控制该提示音的声像漂移方向为由扬声器601的位置指向扬声器602的位置。
一个实施例中,车辆还可以根据该加速踏板的开度变化,控制该提示音的声像漂移速度。
例如,在预设时长(例如,2秒)内加速踏板的开度变化率小于第一预设变化率阈值时,可以控制扬声器601和扬声器602发出声像漂移速度较低的提示音。
又例如,在该预设时长内加速踏板的开度变化率大于或者等于该第一预设变化率阈值时,可以控制扬声器601和扬声器602发出声像漂移速度较高的提示音。
如图6中的(b)所示,在通过制动踏板传感器采集的传感信息确定制动踏板的开度变化时,可以控制扬声器601和扬声器602发出声像漂移的提示音。
一个实施例中,在制动踏板的开度逐渐变大时,可以控制该提示音的声像漂移方向为由扬声器602的位置指向扬声器601的位置。
一个实施例中,车辆还可以根据该制动踏板的开度变化,控制该提示音的声像漂移速度。
例如,在预设时长(例如,2秒)内制动踏板的开度变化率小于第二预设变化率阈值时,可以控制扬声器601和扬声器602发出声像漂移速度较低的提示音。
又例如,在该预设时长内制动踏板的开度变化率大于或者等于第二预设变化率阈值时,可以控制扬声器601和扬声器602发出声像漂移速度较高的提示音。
控制扬声器601和扬声器602发出声像漂移速度较低或者较高的提示音的实现过程可以参考上述实施例中的描述,此处不再赘述。
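按踏板开度变化率选择漂移速度的逻辑可以示意如下(非专利原文;窗口时长、变化率阈值均为假设参数,开度以0~1表示):

```python
def pedal_drift_speed(opening_start, opening_end, window_s, rate_threshold):
    """根据预设时长window_s内踏板开度的变化率选择声像漂移档位:
    变化率达到阈值时返回"fast"(漂移较快),否则返回"slow"(漂移较慢)。"""
    rate = abs(opening_end - opening_start) / window_s
    return "fast" if rate >= rate_threshold else "slow"
```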
图7示出了本申请实施例提供的另一种应用场景的示意图。
如图7所示,车辆在行驶过程中可以通过HUD显示导航信息、当前电量以及当前车速等信息。在通过座舱外的摄像头采集的图像识别到限速标识牌(例如,限速值为40公里每小时(kilometer per hour,km/h))且当前车速大于该限速标识牌上显示的限速值时,可以控制扬声器601和扬声器602发出声像漂移方向为由扬声器602的位置指向扬声器601的位置的提示音,该提示音用于提示车辆即将通过的道路上有限速规定,或者,该提示音用于提示用户控制车辆减速。
一个实施例中,可以根据当前车速与限速值之间的差值,控制该提示音的声像漂移速度。
例如,在当前车速与限速值之间的差值小于第一预设差值时,可以控制扬声器601和扬声器602发出声像漂移速度较低的提示音。
又例如,在当前车速与限速值之间的差值大于或者等于第一预设差值时,可以控制扬声器601和扬声器602发出声像漂移速度较高的提示音。
这样,用户可以通过声像漂移速度获知当前车速与限速值之间的差值是否过大,从而使得用户迅速对车速进行控制,有助于避免由于超速带来的安全隐患,也有助于避免由于超速给用户带来的罚款。
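按当前车速与限速值之差选择漂移速度,可以写成如下示意性函数(非专利原文,第一预设差值取20km/h仅为假设):

```python
def overspeed_drift_speed(current_kmh, limit_kmh, first_diff_kmh=20.0):
    """未超速时返回None(不提示);超速差值小于第一预设差值时返回"slow"
    (漂移较慢),大于或者等于第一预设差值时返回"fast"(漂移较快)。"""
    diff = current_kmh - limit_kmh
    if diff <= 0:
        return None
    return "fast" if diff >= first_diff_kmh else "slow"
```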
图8示出了本申请实施例提供的另一种应用场景的示意图。
如图8所示,用户驾驶车辆在十字路口等待通行。在通过座舱外的摄像头采集的图像确定交通指示灯由红色变为绿色时,可以控制扬声器601和扬声器602发出声像漂移的提示音,该提示音可以用于提示车辆所处环境中的交通指示灯发生变化,或者,可以用于指示用户驾驶车辆通过该路口。
一个实施例中,在通过座舱外的摄像头采集的图像确定交通指示灯由红色变为绿色且在检测到交通指示灯由红色变为绿色起的预设时长(例如,10秒)内未检测到用户驾驶车辆前进时,可以控制扬声器601和扬声器602发出声像漂移的提示音。
一个实施例中,在通过座舱外的摄像头采集的图像确定交通指示灯由红色变为绿色且通过座舱内的摄像头采集的图像确定用户的视线不在车辆的前进方向时,可以控制扬声器601和扬声器602发出声像漂移的提示音。
这样,在交通指示灯指示可以通行时,如果用户的注意力不在交通指示灯上,可以通过声像漂移的提示音提示用户快速通过该路口,有助于避免由于等待时间过长而导致的压车现象,也有助于节省用户的通行时间。
一个实施例中,该提示音的声像漂移方向为由扬声器601的位置指向扬声器602的位置。
图9示出了本申请实施例提供的另一种应用场景的示意图。
如图9所示,用户驾驶车辆在拥堵路段行驶时,与前方车辆的跟车距离较近。在通过座舱外的传感器采集的传感信息确定前方车辆向前移动时,可以控制扬声器601和扬声器602发出声像漂移的提示音,该提示音可以用于提示前方车辆向前移动,或者,可以用于指示用户驾驶车辆向前移动。
一个实施例中,在通过座舱外的传感器采集的传感信息确定前方车辆向前移动且车辆与该前方车辆之间的距离大于或者等于预设距离(例如,10米)时,可以控制扬声器601和扬声器602发出声像漂移的提示音。
一个实施例中,在通过座舱外的传感器采集的传感信息确定前方车辆向前移动且通过座舱内的摄像头采集的图像确定用户的视线不在车辆的前进方向时,可以控制扬声器601和扬声器602发出声像漂移的提示音。
这样,在前方车辆移动时,如果用户的注意力不在前进方向上,可以通过声像漂移的提示音提示用户紧跟前方车辆,有助于避免拥堵路段上的压车现象。
以上通过图8和图9介绍了通过声像漂移方向为由扬声器601的位置指向扬声器602的位置的提示音提示交通指示灯变化或者前方车辆向前移动的场景,本申请实施例并不限于此。例如,还可以通过声像漂移的提示音提示用户座舱外的其他环境变化情况。
示例性的,用户驾驶车辆驶离停车场(停车场的出口为斜坡)。在通过座舱外的传感器采集的传感信息确定前方车辆溜坡时,可以通过声像漂移的提示音提示前方车辆正在溜坡。例如,该提示音的声像漂移方向为由扬声器602的位置指向扬声器601的位置。
一个实施例中,还可以根据车辆与处于溜坡状态的车辆之间的距离,控制该提示音的声像漂移速度。例如,在车辆与处于溜坡状态的车辆之间的距离大于预设距离(例如,5米)时,可以控制扬声器601和扬声器602发出声像漂移速度较低的提示音;在车辆与处于溜坡状态的车辆之间的距离小于或者等于预设距离时,可以控制扬声器601和扬声器602发出声像漂移速度较高的提示音。
图10示出了本申请实施例提供的另一种应用场景的示意图。
如图10所示,在检测到用户将手机放置在无线充电区域且成功对手机充电时,可以控制至少两个发声装置发出声像漂移的提示音,该提示音用于提示手机无线充电成功。
示例性的,在检测到用户将手机放置在无线充电区域且成功对手机充电时,可以控制扬声器301和扬声器302发出声像漂移的提示音,该提示音的声像漂移方向为从扬声器301的位置指向扬声器302的位置。
一个实施例中,在检测到用户将手机放置在无线充电区域且手机无线充电失败时,可以控制至少两个发声装置发出声像漂移的提示音,该提示音用于提示手机无线充电失败。
示例性的,在检测到用户将手机放置在无线充电区域且手机无线充电失败时,可以控制扬声器301和扬声器302发出声像漂移的提示音,该提示音的声像漂移方向为从扬声器302的位置指向扬声器301的位置。
图11示出了本申请实施例提供的另一种应用场景的示意图。
如图11所示,在检测到主驾区域有用户时,车辆可以启动座舱内的摄像头对主驾区域的用户进行人脸识别。在人脸识别成功时,可以控制扬声器1110和扬声器1120发出声像漂移的提示音,该提示音用于提示人脸识别成功。
一个实施例中,在人脸识别成功时,还可以控制语音装置发出提示音“人脸识别成功”。
一个实施例中,在人脸识别成功时,可以控制该提示音的声像漂移方向为从扬声器1110的位置指向扬声器1120的位置。
在人脸识别失败时,可以控制扬声器1110和扬声器1120发出声像漂移的提示音,该提示音用于提示用户人脸识别失败。
一个实施例中,在人脸识别失败时,还可以控制语音装置发出语音信息“人脸识别失败”。
一个实施例中,在人脸识别失败时,可以控制该提示音的声像漂移方向为从扬声器1120的位置指向扬声器1110的位置。
以上是以对用户的人脸识别为例进行说明的,本申请实施例并不限于此。例如,还可以在对用户的指纹、虹膜等其他生物特征识别成功或者失败时,控制至少两个不同位置的发声装置发出声像漂移的提示音。
图12示出了本申请实施例提供的另一种应用场景的示意图。
如图12所示,在驾驶员准备驾驶车辆行驶且通过安全带状态传感器采集的传感信息确定二排右侧的用户未系安全带时,可以控制扬声器1210和扬声器1220发出声像漂移的提示音,该提示音用于提示二排右侧的乘客未系安全带。
一个实施例中,该提示音的声像漂移方向可以由扬声器1210的位置指向扬声器1220的位置,或者,也可以由扬声器1220的位置指向扬声器1210的位置。
以上扬声器1210或者扬声器1220的位置可以包括一个或者多个扬声器。当该扬声器1210的位置上包括多个扬声器时,该多个扬声器可以组成扬声器群。例如,扬声器1210所在的位置包括3个扬声器,这3个扬声器中的2个扬声器位于二排右侧的车门上且1个扬声器位于二排右侧的C柱上。
这样,通过声像漂移的提示音可以提示驾驶员座舱内未系安全带的用户的位置,从而使得驾驶员及时提示该用户系好安全带,有助于提升用户的驾乘体验。
图13示出了本申请实施例提供的车辆中扬声器分布的示意图。如图13所示,车辆中可以包括扬声器1至扬声器10。其中,扬声器1和扬声器5可以位于车辆的主驾区域,扬声器3和扬声器7可以位于车辆的副驾区域,扬声器2和扬声器6可以位于车辆的二排左侧区域,扬声器4和扬声器8可以位于车辆的二排右侧区域,扬声器9和扬声器10可以位于车辆的档位附近。
一个实施例中,车辆可以根据用户所在的区域,确定该至少两个发声装置。
示例性的,在检测到主驾区域的用户启动功能1时,可以控制扬声器1和扬声器5发出声像漂移的提示音。
示例性的,在检测到二排左侧的用户启动功能2时,可以控制扬声器2和扬声器6发出声像漂移的提示音。
一个实施例中,车辆中保存有用户输入和发声装置的映射关系,该车辆可以根据该映射关系和该传感信息,确定该至少两个发声装置。
示例性的,表1示出了一种用户输入、发声装置与声像漂移方向的映射关系。
表1
示例性的,在通过档位传感器采集的传感信息确定用户将档位从P档切换至D档的操作时,可以根据上述表1的映射关系和该传感信息,确定控制扬声器9和扬声器10发出声像漂移的提示音。可选地,还可以根据该映射关系和该传感信息,控制该提示音的声像漂移方向为由扬声器9的位置指向扬声器10的位置。
示例性的,在通过加速踏板传感器采集的传感信息确定加速踏板的开度逐渐变大时,可以根据上述表1的映射关系和该传感信息,确定控制扬声器5和扬声器6发出声像漂移的提示音。可选地,还可以根据该映射关系和该传感信息,控制该提示音的声像漂移方向为由扬声器6的位置指向扬声器5的位置。
以上表1仅仅是示意性的,本申请实施例并不限于此。例如,在用户的输入为拨动转向拨杆(左转向)时,可以控制扬声器1和扬声器3发出声像漂移的提示音。
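表1这类“用户输入—发声装置—声像漂移方向”的映射关系,可以用查表方式实现,从而降低计算开销。下面按上文出现的几个示例给出示意(非专利原文;键名为便于说明而假设,表项以实际标定为准):

```python
# 用户输入 -> (发声装置编号, 声像漂移方向"起点->终点")的示意映射表
PROMPT_MAP = {
    "gear_P_to_D": ((9, 10), "9->10"),         # 档位P切换至D:扬声器9、10
    "accel_pedal_increase": ((5, 6), "6->5"),  # 加速踏板开度变大:扬声器5、6
    "turn_left": ((1, 3), None),               # 左转向:扬声器1、3(方向文中未给出)
}

def lookup_speakers(user_input):
    """根据传感信息指示的用户输入查映射表,
    返回(发声装置编号, 漂移方向);未命中返回None。"""
    return PROMPT_MAP.get(user_input)
```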
一个实施例中,车辆中保存有用户输入和发声装置的映射关系,该车辆根据该映射关系和该传感信息,确定该至少两个发声装置,包括:该车辆根据该映射关系、该传感信息以及用户所在的位置,确定该至少两个发声装置。
示例性的,表2示出了一种用户所在区域、用户输入、发声装置与声像漂移方向的映射关系。
表2
示例性的,在通过麦克风阵列接收的语音指令(例如,“开启座椅通风功能”)确定位于副驾区域的用户发出该语音指令时,可以根据上述表2所述的映射关系、该语音指令以及用户所在的区域,控制扬声器3和扬声器7发出声像漂移的提示音。可选地,还可以根据该映射关系,控制该提示音的声像漂移方向为由扬声器3的位置指向扬声器7的位置。
图14示出了本申请实施例提供的控制方法1400的示意性流程图。该方法1400可以由运载工具(例如,车辆)执行,或者,该方法1400可以由上述计算平台执行,或者,该方法1400可以由计算平台和至少两个发声装置组成的系统执行,或者,该方法1400可以由上述计算平台中的片上系统(system-on-a-chip,SoC)执行,或者,该方法1400可以由计算平台中的处理器执行。该方法1400包括:
S1410,获取传感信息。
示例性的,该获取传感信息,包括:计算平台获取运载工具中的传感器采集的传感信息。例如,该传感器可以为档位传感器、物理按键或者实体按键对应的传感器、方向盘拨杆传感器、语音传感器、加速踏板传感器、制动踏板传感器或者触摸传感器。
示例性的,该档位传感器采集的传感信息可以用于指示档位的变化。
示例性的,该物理按键对应的传感器采集的传感信息可以用于指示开启或者关闭对应的功能。
示例性的,方向盘拨杆传感器采集的传感信息可以用于指示向某个方向转向。
示例性的,该语音传感器采集的传感信息中包括用户的语音指令,该语音指令用于指示运载工具执行相应的操作。
示例性的,该加速踏板传感器采集的传感信息用于指示加速踏板的开度变化信息。
示例性的,该制动踏板传感器采集的传感信息用于指示制动踏板的开度变化信息。
示例性的,该触摸传感器采集的传感信息用于指示用户点击了某个虚拟按键(该虚拟按键可以对应于某个功能的开启或者关闭),或者,该传感信息用于指示用户在车载显示屏上滑动输入的方向以及速度。
S1420,根据该传感信息,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示该运载工具的状态变化。
可选地,该方法1400还包括:根据该传感信息,控制该提示音的声像漂移方向。
可选地,该传感信息用于指示从第一档位调整至第二档位,该根据该传感信息,控制该提示音的声像漂移方向,包括:控制该提示音的声像漂移方向为第一方向,该第一方向由该第一档位指向该第二档位。
示例性的,如图2中的(a)所示,在档位传感器采集的传感信息指示从D档切换至P档时,可以控制扬声器201和扬声器202发出声像漂移的提示音,该提示音的声像漂移方向由扬声器202的位置指向扬声器201的位置。
以上由扬声器202的位置指向扬声器201的位置的方向与由D档指向P档的方向相对应。
可选地,该传感信息用于指示第一功能启动或者关闭,该根据该传感信息,控制该提示音的声像漂移方向,包括:控制该提示音的声像漂移方向为第二方向,该第二方向为该运载工具所在平面的法向。
示例性的,如图3中的(a)所示,在物理按键传感器采集的传感信息指示开启功能1时,可以控制扬声器301和扬声器302发出声像漂移的提示音,该提示音的声像漂移方向可以由扬声器301的位置指向扬声器302的位置。
以上由扬声器301的位置指向扬声器302的位置的方向与该运载工具所在平面的法向相对应。
可选地,该传感信息中包括语音指令,该语音指令用于指示第二功能开启,该根据该传感信息,控制该提示音的声像漂移方向,包括:控制该提示音的声像漂移方向为第二方向,该第二方向为该运载工具所在平面的法向。
可选地,该传感信息用于指示向第三方向转向,该根据该传感信息,控制该提示音的声像漂移方向,包括:控制该提示音的声像漂移方向为该第三方向。
示例性的,如图4所示,在方向盘拨杆传感器采集的传感信息指示用户向下拨动拨杆(向左转向)时,可以控制扬声器401、扬声器402和扬声器403发出声像漂移的提示音,该提示音的声像漂移方向由扬声器401的位置指向扬声器403的位置。
可选地,该方法1400还包括:控制氛围灯点亮的方向,该氛围灯点亮的方向与该声像漂移方向相对应,该运载工具包括该氛围灯。
以图4所示的场景为例,在方向盘拨杆传感器采集的传感信息指示用户向下拨动拨杆(向左转向)时,还可以控制氛围灯点亮的方向与该声像漂移方向相对应。例如,车辆的座舱内包括贯穿主驾区域和副驾区域的氛围灯,可以控制该氛围灯的点亮方向为从副驾区域至主驾区域依次点亮。
可选地,该方法1400还包括:根据该传感信息,控制该提示音的声像漂移速度。
可选地,该传感信息用于指示加速踏板或者制动踏板的开度变化,该根据该传感信息,控制该提示音的声像漂移速度,包括:根据该加速踏板或者该制动踏板的开度变化,控制该提示音的声像漂移速度。
示例性的,如图6中的(a)所示,在加速踏板传感器采集的传感信息指示加速踏板的开度逐渐增大时,可以控制提示音的声像漂移速度逐渐增大。可选地,该提示音的声像漂移方向为由车辆的尾部指向车辆的头部。
示例性的,如图6中的(b)所示,在制动踏板传感器采集的传感信息指示制动踏板的开度逐渐增大时,可以控制提示音的声像漂移速度逐渐增大。可选地,该提示音的声像漂移方向为由车辆的头部指向车辆的尾部。
可选地,该传感信息用于指示用户在显示屏上的滑动输入,该根据该传感信息,控制该提示音的声像漂移速度,包括:根据该滑动输入的速度,控制该提示音的声像漂移速度。
示例性的,如图5所示,在触摸传感器采集的传感信息指示用户在车载显示屏上的滑动输入时,可以根据该滑动输入的速度,控制扬声器501和扬声器502发出的提示音的声像漂移速度。
可选地,该方法1400还包括:根据用户所在的区域,确定该至少两个发声装置。
示例性的,在传感信息为麦克风阵列采集的语音信息时,可以通过麦克风阵列采集的语音信息确定声源位置。例如,在该声源位置为主驾区域时,可以从主驾区域中的多个发声装置中确定该至少两个发声装置。
示例性的,该传感信息为触摸传感器采集的传感信息且该传感信息用于指示用户点击显示屏上的虚拟按键(开启或者关闭某个功能)。可以通过座舱内的摄像头采集的图像确定副驾区域的用户点击了该虚拟按键,从而可以从副驾区域中的多个发声装置中确定该至少两个发声装置。
可选地,该运载工具中包括用户输入与发声装置的映射关系,该方法1400还包括:根据该映射关系和该传感信息,确定该至少两个发声装置。
示例性的,该映射关系可以如表1或者表2所示。可以根据该映射关系和该传感信息,确定该至少两个发声装置。
以上根据该映射关系和该传感信息,确定该至少两个发声装置也可以理解为根据该映射关系和该传感信息指示的用户输入,确定该至少两个发声装置。
图15示出了本申请实施例提供的控制方法1500的示意性流程图。该方法1500可以由运载工具(例如,车辆)执行,或者,该方法1500可以由上述计算平台执行,或者,该方法1500可以由计算平台和至少两个发声装置组成的系统执行,或者,该方法1500可以由上述计算平台中的SoC执行,或者,该方法1500可以由计算平台中的处理器执行。该方法1500包括:
S1510,获取传感信息。
示例性的,该获取传感信息,包括:计算平台获取运载工具中的传感器采集的传感信息。例如,该传感器可以为座舱外的传感器(例如,摄像头、激光雷达、毫米波雷达或者厘米波雷达中的一种或者多种)、座舱内的安全带状态传感器、采集用户生物特征的传感器等。
S1520,根据该传感信息,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示该运载工具所处环境的变化,或者,该提示音用于提示待提示对象的位置,或者,该提示音用于提示对用户的生物特征识别成功或者失败,或者,该提示音用于提示该运载工具与移动终端的连接状态,或者,该提示音用于提示该运载工具对移动终端无线充电成功或者失败。
可选地,该传感信息用于指示运载工具所处环境的交通指示灯变化,该提示音用于提示该交通指示灯变化。
以该运载工具是车辆为例,在车辆所处路口的交通指示灯指示处于某个车道的车辆可以通过该路口时,该方法还包括:控制该提示音的声像漂移方向为该车辆前进的方向。
如图8所示,在该传感信息指示交通指示灯由红色变为绿色时,可以控制扬声器601和扬声器602发出声像漂移的提示音,该提示音的声像漂移方向由扬声器601的位置指向扬声器602的位置。
可选地,以该运载工具是车辆为例,该控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,包括:在该交通指示灯指示处于某个车道的车辆可以通过该路口起的预设时长内未检测到车辆前进时,控制该车辆中至少两个不同位置的发声装置发出声像漂移的提示音。
这样,在交通指示灯指示可以通行时,如果用户的注意力不在交通指示灯上,可以通过声像漂移的提示音提示用户快速通过该路口,有助于避免由于等待时间过长而导致的压车现象,也有助于节省用户的通行时间。
可选地,以该运载工具是车辆为例,该传感信息用于指示车辆当前处于拥堵路段且车辆前方的另一车辆向前移动,该提示音用于提示该另一车辆向前移动。
如图9所示,在通过座舱外的传感器采集的传感信息确定车辆处于拥堵路段且前方车辆向前移动时,可以控制扬声器601和扬声器602发出声像漂移的提示音,该提示音可以用于提示前方车辆向前移动,或者,该提示音可以用于提示用户驾驶车辆向前移动。
可选地,该方法1500还包括:控制该提示音的声像漂移方向为该车辆前进的方向。例如,该提示音的声像漂移方向由扬声器601的位置指向扬声器602的位置。
可选地,以该运载工具是车辆为例,该控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,包括:在检测到该前方车辆向前移动且车辆与该前方车辆之间的距离大于或者等于预设距离时,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音。
这样,在前方车辆向前移动时,如果用户的注意力不在车辆的前进方向上,可以通过声像漂移的提示音提示用户紧跟前方车辆,有助于避免拥堵路段上的压车现象。
可选地,在该传感信息为用于采集用户生物特征的传感器(例如,摄像头、指纹传感器、虹膜传感器)采集的信息且根据该传感信息确定对用户的生物特征识别成功时,可以控制至少两个发声装置发出声像漂移的提示音。可选地,该提示音的声像漂移方向为该运载工具所在平面的法向。
示例性的,如图11所示,在根据座舱内的摄像头采集的图像确定对主驾区域的用户人脸识别成功时,可以控制扬声器1110和扬声器1120发出声像漂移的提示音。可选地,该提示音的声像漂移方向为由扬声器1110的位置指向扬声器1120的位置。
可选地,在对主驾区域的用户人脸识别成功时,可以控制中控屏显示与主驾区域的用户相关的信息。例如,该与主驾区域的用户相关的信息包括与主驾区域的用户相关的应用程序、卡片、壁纸或者动画等。
示例性的,如图13所示,在根据座舱内的摄像头采集的图像确定对副驾区域的用户人脸识别成功时,可以控制扬声器3和扬声器7发出声像漂移的提示音。可选地,该提示音的声像漂移方向为由扬声器3的位置指向扬声器7的位置。
可选地,在对副驾区域的用户人脸识别成功时,可以控制副驾娱乐屏显示与副驾区域的用户相关的信息。例如,该与副驾区域的用户相关的信息包括与副驾区域的用户相关的应用程序、卡片、壁纸或者动画等。
可选地,该传感信息用于指示运载工具中某个区域的用户未系安全带,该方法1500还包括:控制该提示音的声像漂移方向为由该用户所在的区域指向驾驶员所在的区域,或者,由驾驶员所在的区域指向该用户所在的区域。
如图12所示,在驾驶员准备驾驶车辆行驶且通过安全带状态传感器采集的传感信息确定二排右侧的用户未系安全带时,可以控制扬声器1210和扬声器1220发出声像漂移的提示音,该提示音用于提示二排右侧的乘客未系安全带。
这样,在运载工具中某个用户未系安全带时,可以通过声像漂移的提示音向驾驶员提示未系安全带的用户所在的区域,从而帮助驾驶员及时提醒该用户系好安全带,有助于提升用户的人机交互体验。
本申请实施例还提供用于实现以上任一种方法的装置,例如,提供一种装置,该装置包括用以实现以上任一种方法中运载工具(例如,车辆),或者,车辆中的计算平台,或者,计算平台中的SoC,或者,计算平台中的处理器所执行的各步骤的单元(或手段)。
图16示出了本申请实施例提供的一种控制装置1600的示意性框图。如图16所示,该控制装置1600包括:获取单元1610,用于获取传感信息;控制单元1620,用于根据该传感信息,控制该运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,该提示音用于提示该运载工具的状态变化,或者,该提示音用于提示该运载工具所处环境的变化,或者,该提示音用于提示待提示对象的位置,或者,该提示音用于提示对用户的生物特征识别成功或者失败,或者,该提示音用于提示该运载工具与移动终端的连接状态,或者,该提示音用于提示该运载工具对移动终端无线充电成功或者失败。
可选地,该控制单元1620,用于:根据该传感信息,控制该提示音的声像漂移方向。
可选地,该传感信息用于指示从第一档位调整至第二档位,该控制单元1620,用于:控制该提示音的声像漂移方向为第一方向,该第一方向由该第一档位指向该第二档位。
可选地,该传感信息用于指示第一功能启动或者关闭,该控制单元1620,用于:控制该提示音的声像漂移方向为第二方向,该第二方向为该运载工具所在平面的法向。
可选地,该传感信息用于指示向第三方向转向,该控制单元1620,用于:控制该提示音的声像漂移方向为该第三方向。
可选地,该控制单元1620,还用于:控制氛围灯点亮的方向,该氛围灯点亮的方向与该声像漂移方向相对应,该运载工具包括该氛围灯。
可选地,该控制单元1620,用于:根据该传感信息,控制该提示音的声像漂移速度。
可选地,该传感信息用于指示加速踏板或者制动踏板的开度变化,该控制单元1620,用于:根据该加速踏板或者该制动踏板的开度变化,控制该提示音的声像漂移速度。
可选地,该传感信息用于指示用户在显示屏上的滑动输入,该控制单元1620,用于:根据该滑动输入的速度,控制该提示音的声像漂移速度。
可选地,该装置1600还包括:第一确定单元,用于根据用户所在的区域,确定该至少两个发声装置。
可选地,该运载工具中包括用户输入与发声装置的映射关系,该装置1600还包括:第二确定单元,用于根据该映射关系和该传感信息,确定该至少两个发声装置。
例如,该获取单元1610可以是图1中的计算平台或者计算平台中的处理电路、处理器或者控制器。以获取单元1610为计算平台中的处理器151为例,处理器151可以获取传感器信息。例如,处理器151可以获取档位传感器发送的档位信息,该档位信息用于指示从P档切换至D档。
又例如,控制单元1620可以是图1中的计算平台或者计算平台中的处理电路、处理器或者控制器。以控制单元1620为计算平台中的处理器152为例,处理器152可以根据处理器151获取的档位信息,控制该至少两个发声装置发出声像漂移的提示音,该提示音用于提示运载工具的状态变化。
一个实施例中,处理器152还可以控制该提示音的声像漂移方向。例如,在确定该档位信息指示从P档切换至D档时,处理器152可以控制该提示音的声像漂移方向为从扬声器201的位置指向扬声器202的位置。
一个实施例中,处理器152还可以控制该提示音的声像漂移速度。
以上获取单元1610所实现的功能和控制单元1620所实现的功能可以由不同的处理器实现,或者,也可以由相同的处理器实现,本申请实施例对此不作限定。
应理解,以上装置中各单元的划分仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。此外,装置中的单元可以以处理器调用软件的形式实现;例如装置包括处理器,处理器与存储器连接,存储器中存储有指令,处理器调用存储器中存储的指令,以实现以上任一种方法或实现该装置各单元的功能,其中处理器例如为通用处理器,例如CPU或微处理器,存储器为装置内的存储器或装置外的存储器。或者,装置中的单元可以以硬件电路的形式实现,可以通过对硬件电路的设计实现部分或全部单元的功能,该硬件电路可以理解为一个或多个处理器;例如,在一种实现中,该硬件电路为ASIC,通过对电路内元件逻辑关系的设计,实现以上部分或全部单元的功能;再如,在另一种实现中,该硬件电路可以通过PLD实现,以FPGA为例,其可以包括大量逻辑门电路,通过配置文件来配置逻辑门电路之间的连接关系,从而实现以上部分或全部单元的功能。以上装置的所有单元可以全部通过处理器调用软件的形式实现,或全部通过硬件电路的形式实现,或部分通过处理器调用软件的形式实现,剩余部分通过硬件电路的形式实现。
在本申请实施例中,处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如CPU、微处理器、GPU、或DSP等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为ASIC或PLD实现的硬件电路,例如FPGA。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如NPU、TPU、DPU等。
可见,以上装置中的各单元可以是被配置成实施以上方法的一个或多个处理器(或处理电路),例如:CPU、GPU、NPU、TPU、DPU、微处理器、DSP、ASIC、FPGA,或这些处理器形式中至少两种的组合。
此外,以上装置中的各单元可以全部或部分可以集成在一起,或者可以独立实现。在一种实现中,这些单元集成在一起,以SoC的形式实现。该SoC中可以包括至少一个处理器,用于实现以上任一种方法或实现该装置各单元的功能,该至少一个处理器的种类可以不同,例如包括CPU和FPGA,CPU和人工智能处理器,CPU和GPU等。
本申请实施例还提供了一种装置,该装置包括处理单元和存储单元,其中存储单元用于存储指令,处理单元执行存储单元所存储的指令,以使该装置执行上述实施例执行的方法或者步骤。
可选地,若该装置位于运载工具中,上述处理单元可以是图1所示的处理器151-15n。
图17示出了本申请实施例提供的控制系统1700的示意性框图。如图17所示,该控制系统1700中包括至少两个发声装置和计算平台,其中,该计算平台可以包括上述控制装置1600。
可选地,该控制系统1700中还包括一个或者多个传感器。
本申请实施例还提供了一种运载工具,该运载工具可以包括上述控制装置1600或者控制系统1700。
可选地,该运载工具可以为车辆。
本申请实施例还提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述方法。
本申请实施例还提供了一种计算机可读介质,所述计算机可读介质存储有程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述方法。
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应理解,本申请实施例中,该存储器可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据。
还应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的***、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的***、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (28)

  1. 一种控制方法,其特征在于,包括:
    获取传感信息;
    根据所述传感信息,控制运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,所述提示音用于提示所述运载工具的状态变化。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    根据所述传感信息,控制所述提示音的声像漂移方向。
  3. 根据权利要求2所述的方法,其特征在于,所述传感信息用于指示从第一档位调整至第二档位,所述根据所述传感信息,控制所述提示音的声像漂移方向,包括:
    控制所述提示音的声像漂移方向为第一方向,所述第一方向由所述第一档位指向所述第二档位。
  4. 根据权利要求2所述的方法,其特征在于,所述传感信息用于指示第一功能启动或者关闭,所述根据所述传感信息,控制所述提示音的声像漂移方向,包括:
    控制所述提示音的声像漂移方向为第二方向,所述第二方向为所述运载工具所在平面的法向。
  5. 根据权利要求2所述的方法,其特征在于,所述传感信息用于指示向第三方向转向,所述根据所述传感信息,控制所述提示音的声像漂移方向,包括:
    控制所述提示音的声像漂移方向为所述第三方向。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述方法还包括:
    控制氛围灯点亮的方向,所述氛围灯点亮的方向与所述声像漂移方向相对应,所述运载工具包括所述氛围灯。
  7. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    根据所述传感信息,控制所述提示音的声像漂移速度。
  8. 根据权利要求7所述的方法,其特征在于,所述传感信息用于指示加速踏板或者制动踏板的开度变化,所述根据所述传感信息,控制所述提示音的声像漂移速度,包括:
    根据所述加速踏板或者所述制动踏板的开度变化,控制所述提示音的声像漂移速度。
  9. 根据权利要求7所述的方法,其特征在于,所述传感信息用于指示用户在显示屏上的滑动输入,所述根据所述传感信息,控制所述提示音的声像漂移速度,包括:
    根据所述滑动输入的速度,控制所述提示音的声像漂移速度。
  10. 根据权利要求1至9中任一项所述的方法,其特征在于,所述方法还包括:
    根据用户所在的区域,确定所述至少两个发声装置。
  11. 根据权利要求1至10中任一项所述的方法,其特征在于,所述运载工具中包括用户输入与发声装置的映射关系,所述方法还包括:
    根据所述映射关系和所述传感信息,确定所述至少两个发声装置。
  12. 一种控制装置,其特征在于,包括:
    获取单元,用于获取传感信息;
    控制单元,用于根据所述传感信息,控制运载工具中至少两个不同位置的发声装置发出声像漂移的提示音,所述提示音用于提示所述运载工具的状态变化。
  13. 根据权利要求12所述的装置,其特征在于,所述控制单元,还用于:
    根据所述传感信息,控制所述提示音的声像漂移方向。
  14. 根据权利要求13所述的装置,其特征在于,所述传感信息用于指示从第一档位调整至第二档位,所述控制单元,用于:
    控制所述提示音的声像漂移方向为第一方向,所述第一方向由所述第一档位指向所述第二档位。
  15. 根据权利要求13所述的装置,其特征在于,所述传感信息用于指示第一功能启动或者关闭,所述控制单元,用于:
    控制所述提示音的声像漂移方向为第二方向,所述第二方向为所述运载工具所在平面的法向。
  16. 根据权利要求13所述的装置,其特征在于,所述传感信息用于指示向第三方向转向,所述控制单元,用于:
    控制所述提示音的声像漂移方向为所述第三方向。
  17. 根据权利要求12至16中任一项所述的装置,其特征在于,所述控制单元,还用于:
    控制氛围灯点亮的方向,所述氛围灯点亮的方向与所述声像漂移方向相对应,所述运载工具包括所述氛围灯。
  18. 根据权利要求12或13所述的装置,其特征在于,所述控制单元,还用于:
    根据所述传感信息,控制所述提示音的声像漂移速度。
  19. 根据权利要求18所述的装置,其特征在于,所述传感信息用于指示加速踏板或者制动踏板的开度变化,所述控制单元,用于:
    根据所述加速踏板或者所述制动踏板的开度变化,控制所述提示音的声像漂移速度。
  20. 根据权利要求18所述的装置,其特征在于,所述传感信息用于指示用户在显示屏上的滑动输入,所述控制单元,用于:
    根据所述滑动输入的速度,控制所述提示音的声像漂移速度。
  21. 根据权利要求12至20中任一项所述的装置,其特征在于,所述装置还包括:
    第一确定单元,用于根据用户所在的区域,确定所述至少两个发声装置。
  22. 根据权利要求12至21中任一项所述的装置,其特征在于,所述运载工具中包括用户输入与发声装置的映射关系,所述装置还包括:
    第二确定单元,用于根据所述映射关系和所述传感信息,确定所述至少两个发声装置。
  23. 一种控制装置,其特征在于,包括:
    存储器,用于存储计算机程序;
    处理器,用于执行所述存储器中存储的计算机程序,以使得所述装置执行如权利要求1至11中任一项所述的方法。
  24. 一种控制系统,其特征在于,包括至少两个发声装置和计算平台,其中,所述计算平台包括如权利要求12至23中任一项所述的装置。
  25. 一种运载工具,其特征在于,包括如权利要求12至23中任一项的控制装置,或者,包括如权利要求24所述的控制系统。
  26. 根据权利要求25所述的运载工具,其特征在于,所述运载工具为车辆。
  27. 一种计算机可读存储介质,其特征在于,其上存储有计算机程序,所述计算机程序被计算机执行时,实现如权利要求1至11中任一项所述的方法。
  28. 一种芯片,其特征在于,所述芯片包括处理器与数据接口,所述处理器通过所述数据接口读取存储器上存储的指令,以执行如权利要求1至11中任一项所述的方法。
PCT/CN2023/126760 2022-10-28 2023-10-26 一种控制方法、装置和运载工具 WO2024088337A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211337615.4 2022-10-28
CN202211337615.4A CN117985035A (zh) 2022-10-28 2022-10-28 一种控制方法、装置和运载工具

Publications (1)

Publication Number Publication Date
WO2024088337A1 true WO2024088337A1 (zh) 2024-05-02

Family

ID=90830050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/126760 WO2024088337A1 (zh) 2022-10-28 2023-10-26 一种控制方法、装置和运载工具

Country Status (2)

Country Link
CN (1) CN117985035A (zh)
WO (1) WO2024088337A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001289660A (ja) * 2000-04-05 2001-10-19 Denso Corp ナビゲーション装置
US20160016513A1 (en) * 2014-07-15 2016-01-21 Harman International Industries, Inc. Spatial sonification of accelerating objects
CN107444257A (zh) * 2017-07-24 2017-12-08 驭势科技(北京)有限公司 一种用于在车辆内呈现信息的方法与设备
CN207157062U (zh) * 2017-07-24 2018-03-30 驭势科技(北京)有限公司 一种车载呈现设备及车辆
CN109960764A (zh) * 2019-04-01 2019-07-02 星觅(上海)科技有限公司 行车信息提示方法、装置、电子设备和介质
CN113335074A (zh) * 2021-06-09 2021-09-03 神龙汽车有限公司 一种基于档杆位置的整车控制***及控制方法
CN113596705A (zh) * 2021-06-30 2021-11-02 华为技术有限公司 一种发声装置的控制方法、发声***以及车辆
CN115042711A (zh) * 2022-06-16 2022-09-13 安徽江淮汽车集团股份有限公司 一种行人提示音的控制方法、电子设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN117985035A (zh) 2024-05-07

Similar Documents

Publication Publication Date Title
JP6883766B2 (ja) 運転支援方法およびそれを利用した運転支援装置、運転制御装置、車両、運転支援プログラム
US10656639B2 (en) Driving support device, driving support system, and driving support method
US10435033B2 (en) Driving support device, driving support system, driving support method, and automatic drive vehicle
JP5910904B1 (ja) 運転支援装置、運転支援システム、運転支援方法、運転支援プログラム及び自動運転車両
US10759446B2 (en) Information processing system, information processing method, and program
JP6555599B2 (ja) 表示システム、表示方法、およびプログラム
CN112513787B (zh) 车内隔空手势的交互方法、电子装置及***
WO2016170764A1 (ja) 運転支援方法およびそれを利用した運転支援装置、運転制御装置、車両、運転支援プログラム
WO2016170763A1 (ja) 運転支援方法およびそれを利用した運転支援装置、自動運転制御装置、車両、運転支援プログラム
JP2018181269A (ja) 提示制御装置、自動運転制御装置、提示制御方法及び自動運転制御方法
US20190299855A1 (en) Vehicle proximity system using heads-up display augmented reality graphics elements
WO2016170773A1 (ja) 運転支援方法およびそれを利用した運転支援装置、自動運転制御装置、車両、運転支援プログラム
WO2024088337A1 (zh) 一种控制方法、装置和运载工具
JP2020095044A (ja) 表示制御装置及び表示制御方法
WO2023225811A1 (zh) 辅助驾驶的方法、装置和车辆
WO2022158230A1 (ja) 提示制御装置及び提示制御プログラム
JP7334768B2 (ja) 提示制御装置及び提示制御プログラム
WO2024038759A1 (ja) 情報処理装置、情報処理方法、及び、プログラム
WO2024043053A1 (ja) 情報処理装置、情報処理方法、及び、プログラム
WO2024082701A1 (zh) 换道方法、装置和智能驾驶设备
JP2018165693A (ja) 運転支援方法およびそれを利用した運転支援装置、自動運転制御装置、車両、プログラム、提示システム
JP6558738B2 (ja) 運転支援装置、運転支援システム、運転支援方法、運転支援プログラム及び自動運転車両
JP2024015571A (ja) 車両の運転支援システム
JP2022121370A (ja) 表示制御装置及び表示制御プログラム
JP2023048358A (ja) 運転支援装置及びコンピュータプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23881913

Country of ref document: EP

Kind code of ref document: A1