CN115534850A - Interface display method, electronic device, vehicle and computer program product

Interface display method, electronic device, vehicle and computer program product

Info

Publication number: CN115534850A
Application number: CN202211498038.7A
Authority: CN (China)
Prior art keywords: vehicle, identifier, voice interaction, information, voice
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN115534850B
Inventors: 李青, 王睿, 张茜, 周国歌
Assignee (original and current): Beijing Jidu Technology Co Ltd
Events: application filed by Beijing Jidu Technology Co Ltd; priority to CN202211498038.7A; publication of CN115534850A; application granted; publication of CN115534850B

Classifications

    • B: Performing operations; transporting
    • B60: Vehicles in general
    • B60R: Vehicles, vehicle fittings, or vehicle parts, not otherwise provided for
    • B60R 16/00: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; arrangement of elements of such circuits
    • B60R 16/02: Such circuits for electric constitutive elements
    • B60R 16/037: Such circuits for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
    • B60R 16/0373: Voice control
    • G: Physics
    • G10: Musical instruments; acoustics
    • G10L: Speech analysis techniques or speech synthesis; speech recognition; speech or voice processing techniques; speech or audio coding or decoding
    • G10L 17/00: Speaker identification or verification techniques
    • G10L 17/22: Interactive procedures; man-machine interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The embodiments of the present application provide an interface display method, an electronic device, a vehicle, and a computer program product. The interface display method is applicable to a vehicle having display and voice interaction functions and comprises the following steps: collecting first driving information of the vehicle; and displaying, on the vehicle and according to the collected first driving information, second driving information and a first identifier of the voice interaction function, wherein the first identifier prompts that the voice interaction function is in a to-be-activated state and the display state of the first identifier is determined based on the second driving information. With this technical solution, the user can visually perceive that the voice interaction function of the vehicle is waiting to be activated; and because the display state of the identifier is determined based on the second driving information, the identifier can be displayed in more diverse and engaging ways, giving the user a better visual experience.

Description

Interface display method, electronic device, vehicle and computer program product
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to an interface display method, an electronic device, a vehicle, and a computer program product.
Background
As vehicles become more intelligent, vehicles with voice interaction functions have become increasingly popular. After the user wakes up the voice interaction function, the vehicle can operate according to the user's voice instructions.
At present, some vehicles display the identifier corresponding to the voice interaction function on a display screen only once the function has been woken up or during voice interaction, so that the user can visually perceive that the function is active. Before the voice interaction function is woken up, however, the user cannot visually perceive that it is in a waiting-to-be-woken-up state. In addition, the display style of the identifier corresponding to existing vehicle voice interaction functions is uniform and not rich enough.
Disclosure of Invention
The present application provides an interface display method, an electronic device, a vehicle, and a computer program product that address, or at least partially address, the above problems.
In one embodiment of the present application, an interface display method is provided that is suitable for a vehicle having display and voice interaction functions. The method comprises the following steps:
collecting first driving information of a vehicle;
displaying, on the vehicle and according to the collected first driving information, second driving information and a first identifier of the voice interaction function;
wherein the first identifier prompts that the voice interaction function is in a to-be-activated state, and the display state of the first identifier is determined based on the second driving information.
In another embodiment of the present application, an electronic device is also provided. The electronic device includes: a memory and a processor, wherein the memory is configured to store one or more computer programs; the processor is coupled to the memory and configured to execute the one or more computer programs stored in the memory, so as to implement the steps in the interface display method provided by the embodiment of the present application.
An embodiment of the present application further provides a vehicle. The vehicle comprises a vehicle body and the electronic device provided by the embodiments of the present application, the electronic device being arranged on the vehicle body.
In yet another embodiment of the present application, a computer program product is also provided. The computer program product comprises a computer program/instructions, which when executed by a processor, can implement the steps in the interface display method provided by the embodiments of the present application.
According to the technical solutions provided by the embodiments of the present application, second driving information and a first identifier of the vehicle's voice interaction function are displayed on the vehicle according to the collected first driving information of the vehicle; the first identifier prompts that the voice interaction function is in a to-be-activated state, and its display state is determined based on the second driving information. The user can therefore visually perceive that the voice interaction function of the vehicle is waiting to be activated. Moreover, because the first identifier corresponding to the to-be-activated state is displayed in a state derived from the second driving information, the display of the first identifier becomes more diverse and engaging, giving the user a better visual experience.
Drawings
Fig. 1 is a schematic flowchart of an interface display method according to an embodiment of the present application;
FIG. 2a is an example of displaying the first identifier of the voice interaction function according to an embodiment of the present application;
FIG. 2b is an example of displaying the first identifier of the voice interaction function according to another embodiment of the present application;
FIG. 2c is an example of displaying the first identifier of the voice interaction function according to yet another embodiment of the present application;
FIG. 3a is an example of a display of a first identifier of a voice interaction function associated with a scene element, provided by an embodiment of the present application;
FIG. 3b is an exemplary illustration of a first identifier of a voice interaction function associated with multimedia information according to an embodiment of the present application;
fig. 3c is an example of a window element associated with a first identifier of a voice interaction function and corresponding to push information, which is provided in an embodiment of the present application;
FIG. 3d is a schematic diagram illustrating a display principle of the first identifier of the voice interaction function according to the driving state of the vehicle, provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a display hierarchy corresponding to content items displayed on a vehicle according to an embodiment of the present application;
FIGS. 5a to 5d are schematic diagrams illustrating a display principle of the second identifier of the voice interaction function provided in an embodiment of the present application;
FIGS. 6a to 6e are schematic diagrams illustrating display of the second identifier and a text box of the voice interaction function according to an embodiment of the present application;
FIG. 7a is a schematic diagram illustrating a display area division of a vehicle display screen according to an embodiment of the present application;
FIG. 7b is a schematic diagram of a display area division of a vehicle display screen according to another embodiment of the present application;
FIGS. 8 a-8 c are schematic diagrams illustrating a second display principle when the voice interaction function supports a one-person speaking mode according to an embodiment of the present application;
fig. 9a to 9f are schematic diagrams illustrating a voice broadcast principle when the voice interaction function supports a two-person speaking mode according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a voice interaction function exit provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of an interface display device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a computer program product according to an embodiment of the present application.
Detailed Description
The intelligent development of vehicles has made human-computer interaction functions on vehicles increasingly diverse and sophisticated. Common human-computer interaction functions include digital touch screens, voice interaction (VUI), gesture interaction, and the like; voice interaction in particular is very popular. After the user utters a wake-up phrase to wake up the voice interaction function on the vehicle, the user can continue to issue voice commands, and the vehicle will act on them, such as playing music, starting navigation, searching for information, making or answering calls, opening or closing the sunroof, and turning the air conditioner on or off. The voice interaction function frees the user's hands and makes controlling the vehicle more intelligent and comfortable, allowing the user (such as the driver) to concentrate on driving and improving driving safety.
A vehicle typically displays the identifier corresponding to the voice interaction function only when it detects that the user has woken up the function, or during voice interaction after wake-up, so that the user can visually perceive that the function is active. Before the voice interaction function is woken up, no identifier is displayed on the vehicle, and the user cannot visually perceive that the function is waiting to be woken up (i.e., the to-be-activated state described below in this application). In some implementations, moreover, the display style of the identifier remains unchanged throughout the voice interaction process: the user cannot tell from the displayed identifier which stage the interaction is currently in, for example whether the user's voice is still being captured or whether the captured voice is already being recognized. The display style of the identifier corresponding to the vehicle's voice interaction function is therefore uniform and not rich enough.
In view of the above problems, embodiments of the present application provide an interface display solution that enables the user to visually perceive that the voice interaction function of the vehicle is in the waiting-to-be-woken-up state, and that displays the identifier corresponding to the voice interaction function in rich and varied ways.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, claims, and figures of the present application include operations that occur in a particular order; these operations may nevertheless be performed out of the stated order or in parallel. Sequence numbers such as 101 and 102 merely distinguish operations and do not by themselves represent any execution order. The flows may also include more or fewer operations, which may be performed sequentially or in parallel. Note that descriptions such as "first" and "second" herein distinguish different messages, devices, modules, etc.; they neither indicate an order nor require that the "first" and "second" items be of different types.
Fig. 1 shows a schematic flowchart of an interface display method provided in an embodiment of the present application. The method is applied to a vehicle with display and voice interaction functions; the vehicle may be, but is not limited to, a battery electric vehicle, a fuel vehicle, or a hybrid vehicle. In a specific implementation, the execution body of the interface display method provided in this embodiment may be a component/device on the vehicle having a logic computing function, such as the Vehicle Control Unit (VCU) or an Electronic Control Unit (ECU). As shown in fig. 1, the interface display method includes the following steps:
101. collecting first driving information of a vehicle;
102. displaying second driving information and a first identifier of the voice interaction function on a vehicle according to the collected first driving information;
wherein the first identifier prompts that the voice interaction function of the vehicle is in a to-be-activated state, and the display state of the first identifier is determined based on the second driving information. A minimal code sketch of this flow follows.
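For illustration only, the following TypeScript sketch shows the shape of steps 101-102; it is not part of the original disclosure, and every type and function name in it is an assumption.

```typescript
// Minimal sketch of steps 101-102 (illustrative names, stubbed data).
interface FirstDrivingInfo { gear: 'P' | 'R' | 'N' | 'D'; speedKmh: number }
interface SecondDrivingInfo { kind: 'initial-screen' | 'simulated-image' }
interface IdentifierDisplayState { style: string; dynamic: boolean }

// Step 101: collect first driving information (stubbed here).
function collectFirstDrivingInfo(): FirstDrivingInfo {
  return { gear: 'D', speedKmh: 32 };
}

// Step 102, part 1: derive the second driving information from the first.
function deriveSecondDrivingInfo(first: FirstDrivingInfo): SecondDrivingInfo {
  return { kind: first.gear === 'P' ? 'initial-screen' : 'simulated-image' };
}

// Step 102, part 2: the to-be-activated identifier's display state is
// determined by the second driving information, not chosen independently.
function identifierState(second: SecondDrivingInfo): IdentifierDisplayState {
  return second.kind === 'simulated-image'
    ? { style: 'trailing-chain', dynamic: true }   // follows the vehicle model
    : { style: 'feather', dynamic: false };        // static on the initial screen
}

const second = deriveSecondDrivingInfo(collectFirstDrivingInfo());
console.log(identifierState(second)); // { style: 'trailing-chain', dynamic: true }
```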
In one possible implementation, a display screen is installed at the center console of the vehicle; the screen may extend from directly in front of (or near) the driver's seat to directly in front of (or near) the front passenger's seat, and the second driving information and the first identifier may be displayed on it. Alternatively, a projection device installed on the vehicle may project the second driving information and the first identifier onto some medium in the vehicle. Alternatively, a first display screen is installed at the center console and a second display screen for rear passengers is also installed on the vehicle; the second driving information and the first identifier are displayed on the first screen, and the first identifier on the second screen follows the one on the first screen. Alternatively, a head-up display device mounted on the vehicle may display the second driving information and the first identifier on the front windshield. Alternatively, the vehicle is equipped with a wearable device (such as glasses) on which the second driving information and the first identifier are displayed; and so on. The embodiments of the present application are not limited in this respect.
After the vehicle is powered on, a control device of the vehicle (such as the VCU) can acquire driving information in real time through the vehicle's sensors, interactive devices, and the like. The driving information may include, but is not limited to: gear position, external environment parameters (such as the distance ahead and road information), driving state (such as forward or reverse driving), the current driving mode of the vehicle (manual driving, assisted driving, automatic driving, etc.), driving speed, accelerator position, brake pedal state, interaction information from the driver and passengers via interactive devices (such as controlling the air-conditioning temperature, seat temperature, music playback, or invoking multimedia applications), in-vehicle images collected by on-board cameras, and so on. The control device can control the vehicle to display corresponding content based on this driving information, so that the driver can visually understand the actual situation of the vehicle, improving driving safety. For ease of description, this embodiment refers to the actual driving information acquired as above as first driving information, and to the content displayed on the vehicle based on the first driving information as second driving information.
In a specific implementation, the first driving information includes, but is not limited to: driving state data, driving environment data, user-vehicle interaction data, in-vehicle data, and the like. The driving state data includes, but is not limited to, power-on information, driving speed information, driving pose information (the position and posture of the vehicle), pedal information (accelerator and brake pedal information), and gear information (which gear the vehicle is currently in, such as forward (D), reverse (R), or park (P)). The driving environment data includes, but is not limited to, environment information around the vehicle collected by image acquisition devices (such as cameras), lidar, distance sensors, and the like arranged outside the vehicle; the environment information may include at least one of road information, pedestrians, vehicles, plants, animals, and so on. The user-vehicle interaction data includes, but is not limited to, data actively input by the user, such as a destination, query information, and instructions input by touch or by voice (e.g., invoking an audio/video application, heating a seat, regulating the air-conditioning temperature, wind speed, or outlet angle, or tuning to a broadcast band). The in-vehicle data may include, but is not limited to, in-vehicle images and the driver's facial expressions acquired by the in-vehicle camera, voiceprint information collected by the in-vehicle microphone, and so on. It should be noted that information relating to user privacy must only be collected with user authorization; in this embodiment, all devices that collect data relating to vehicle occupants operate with the occupants' authorization.
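Purely as an illustration of how these four data categories might be organized in software, the following sketch is offered; none of the field names come from the patent.

```typescript
// Illustrative shape of the first driving information (all field names assumed).
interface DrivingStateData {
  poweredOn: boolean;
  speedKmh: number;
  gear: 'P' | 'R' | 'N' | 'D';          // park, reverse, neutral, forward
  acceleratorOpening: number;            // pedal opening, 0..1
  brakePressed: boolean;
  pose: { positionM: [number, number]; headingDeg: number };
}

interface DrivingEnvironmentData {
  roadInfo: string[];                    // e.g. lanes, road signs
  obstacles: Array<{ kind: 'pedestrian' | 'vehicle' | 'plant' | 'animal'; distanceM: number }>;
}

interface UserInteractionData {
  destination?: string;
  commands: string[];                    // touch or voice instructions
}

interface InVehicleData {
  cabinImages: Uint8Array[];             // collected only with occupant authorization
  voiceprints: Uint8Array[];             // likewise authorization-gated
}

interface FirstDrivingInfo {
  state: DrivingStateData;
  environment: DrivingEnvironmentData;
  interaction: UserInteractionData;
  cabin: InVehicleData;
}
```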
Hereinafter, this embodiment describes the provided technical solution in detail, taking as an example the display of various information content, such as the second driving information and the identifier of the voice interaction function, on the vehicle display screen.
The second driving information displayed on the vehicle display screen may include, but is not limited to, a simulated image, navigation data (e.g., a navigation map), and the like. The simulated image is an image that reflects the driving environment and/or driving state of the vehicle and matches the vehicle's driving mode. For example, if the vehicle is in a forward driving mode (i.e., the vehicle is driving forward), the simulated image displayed in real time on the vehicle display screen is a forward driving image (such as simulated image 1 shown in fig. 2b or simulated image 1' shown in fig. 3a); if the vehicle is in a parking mode, the simulated image displayed in real time is a parking image; and so on. Specifically, as shown in fig. 2b, one frame of the simulated image is displayed on vehicle display screen a; the simulated image includes, but is not limited to, at least one of the vehicle model 12 and a background 11 reflecting the vehicle's driving environment. The vehicle model 12 is a model scaled down from the vehicle's actual shape; it may be prestored in a storage medium of the vehicle to be called up and displayed when needed, and its display state is determined according to the driving state data in the first driving information. The background reflecting the driving environment is generated and displayed according to the driving environment data in the first driving information. The navigation data may be, but is not limited to, a navigation map (such as navigation map 4 shown in fig. 3a) displayed according to a destination address entered by the user or a query such as for a charging pile or a parking space.
Next, the display principle of the simulated image is described, taking as an example the frame corresponding to time t, displayed on the vehicle display screen based on the vehicle's first driving information at time t.
Assume the vehicle is in a manual forward driving mode (i.e., the driver is manually driving the vehicle forward); the driver then generally pays more attention to the vehicle's surroundings. Accordingly, the driving environment data in the first driving information may be captured by cameras (e.g., wide-angle fisheye cameras) arranged around the vehicle. After receiving the environment images around the vehicle at time t from the cameras in each direction, an image processing device on the vehicle (e.g., a processor) may apply distortion correction, edge blending, and similar processing to the multiple environment images to obtain the vehicle environment image. The resulting vehicle environment image (also called the background reflecting the vehicle's driving environment) represents the surrounding environment information collected by each of the vehicle's cameras. Next, the vehicle state can be determined from the driving state data in the first driving information at time t, and the vehicle model updated accordingly; for example, the state of each wheel may be determined from the driving state data, and the display state of the wheels in the model updated based on it (e.g., a change of wheel steering). In addition, the vehicle's driving direction can be determined from the driving state data at time t; for example, it can be derived from the vehicle's inclination relative to the horizontal plane at time t as fed back by a three-axis sensor. Finally, the obtained background reflecting the driving environment and the vehicle model are rendered according to the driving direction, yielding a three-dimensional panoramic image containing the model oriented along the driving direction. The rendered three-dimensional panoramic image containing the vehicle model is then shown on the vehicle's display screen, so that the simulated image corresponding to time t (such as simulated image 1 shown in fig. 2b) is displayed.
It should be added that the rendered three-dimensional panoramic image containing the vehicle model can also carry a distance identifier marking the safe distance around the vehicle body (such as distance identifier 5 shown in fig. 2b), estimated distances between the vehicle model and obstacles in the panorama, and so on; no limitation is imposed here. For the principle of displaying a corresponding simulated image when the vehicle is in another driving mode, such as parking, see the description above.
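A hedged sketch of the per-frame pipeline just described (capture, distortion correction and edge blending, model update, heading from the three-axis sensor, rendering). The patent names no concrete APIs, so every helper below is a stub standing in for real image processing.

```typescript
// Per-frame simulated-image pipeline; all helpers are illustrative stubs.
type Image = { data: Uint8Array };
interface VehicleModel { wheelSteerDeg: number }
interface DrivingState { wheelSteerDeg: number; tiltDeg: number }

const correctDistortion = (img: Image): Image => img;            // e.g. fisheye undistortion
const blendEdges = (imgs: Image[]): Image => imgs[0];            // stitch into one background
const updateModel = (m: VehicleModel, s: DrivingState): VehicleModel =>
  ({ ...m, wheelSteerDeg: s.wheelSteerDeg });                    // mirror wheel steering in the model
const headingFromTilt = (tiltDeg: number): number => tiltDeg;    // driving direction from three-axis sensor
const renderPanorama = (bg: Image, _m: VehicleModel, _headingDeg: number): Image => bg;

// One frame at time t: background from the cameras, model from the driving
// state, both rendered along the vehicle's driving direction.
function simulatedFrame(cameraImages: Image[], state: DrivingState, model: VehicleModel): Image {
  const background = blendEdges(cameraImages.map(correctDistortion));
  const updatedModel = updateModel(model, state);
  return renderPanorama(background, updatedModel, headingFromTilt(state.tiltDeg));
}
```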
The term "vehicle model" as used throughout this embodiment refers to the vehicle model image; it is merely expressed differently in different scenes.
As noted, before the voice interaction function of an existing vehicle is woken up, the user cannot visually perceive that it is in the waiting-to-be-woken-up (to-be-activated) state. To address this, in this embodiment, after the vehicle is started, in addition to displaying the corresponding second driving information on the vehicle display screen according to the first driving information acquired in real time, a first identifier corresponding to the vehicle's voice interaction function is also displayed. The first identifier serves as a visual prompt that the voice interaction function is in the to-be-activated state; in other words, it can be understood as the to-be-activated persona of the voice interaction function. The displayed first identifier may be, but is not limited to, one or more combinations of graphic elements, text elements, and line elements. Optionally, in this embodiment the first identifier of the voice interaction function (like the identifiers in other embodiments below) is a primitive, which may be two-dimensional or three-dimensional; for example, a polygonal icon, a virtual character, a virtual animal, or a virtual plant. Figs. 2b to 3d show examples in which the first identifier 21 of the voice interaction function is a square icon.
When the first identifier corresponding to the voice interaction function is displayed, its display state is determined according to the displayed second driving information.
In one possible implementation, "determining the display state of the first identifier based on the second driving information" may include:
103. determining target information in the second driving information; and
104. displaying the first identifier in association with the target information.
In some cases, the vehicle displays a boot animation when it is first powered on and then stays on an initial screen. If neither the driver nor any passenger is operating the vehicle, the initial screen is the second driving information displayed on the vehicle. A page element in the initial screen that the driver pays particular attention to may then serve as the target information, or a control in the initial screen to be pushed to the driver or a passenger may serve as the target information; and so on.
Of course, there may be no target information on the initial screen. Correspondingly, the method provided by the embodiment of the application can further comprise the following steps:
if the second driving information does not include target information, the first identifier may be displayed on the vehicle at a fixed position, for example semi-transparently.

For example, as shown in fig. 2a, the first identifier is displayed at a central position on the vehicle display screen; it may be a feather-like translucent prompt figure.
After the vehicle is powered on and shifted from P to D or R, the first driving information becomes richer, and so does the second driving information displayed based on it; the second driving information may then include, but is not limited to, one or more of vehicle driving state information, parking state information, an application window, a floating function-control window, multimedia information, a function control, and the like. In this case, one piece of the second driving information can be determined as the target information, and the first identifier is displayed in association with it.
In a specific embodiment, "determining the target information in the second driving information" may include the following steps:
S11, acquiring each information item in the second driving information and the display level corresponding to each information item, wherein content items at higher display levels occlude content items at lower display levels;
S12, determining the target information based on the content items at the top display level.
Fig. 4 shows an example of a display hierarchy. From bottom to top, the levels are: the initial screen (also called the boot screen or vehicle power-on screen); windows corresponding to pushed information; application windows; information streams; the dock bar (a functional interface in a graphical user interface for starting and switching running applications) and the top bar; the activated image and text box corresponding to the voice function; and a quick control (QC) panel (a control panel for, e.g., various attributes of the vehicle's display screen).
Correspondingly, in step S11, all information items in the second driving information displayed in the current vehicle display interface, together with their display levels, may be acquired. In step S12, the content item at the top display level is generally the information item the driver or passenger is focused on, so the target information may be determined based on the content item at the top display level. For example, after the vehicle is powered on and the driver shifts from P to D, a local area of the initial screen displays the vehicle model image and the initial screen is the topmost layer; the vehicle model image may then be the target object. Once an application window located above the initial screen in fig. 4 is invoked, whether by the driver's driving behavior or a passenger's operation, that application window can serve as the target information; and so on, without enumerating every case. The case of no target information and the cases of several kinds of target information are illustrated below.
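As an illustration of steps S11-S12, the sketch below picks the target information from the topmost display level; the layer ordering is transcribed from fig. 4, and everything else is an assumption.

```typescript
// Display levels from fig. 4, bottom to top.
enum Layer {
  InitialScreen,   // boot / power-on screen
  PushWindow,      // windows corresponding to pushed information
  AppWindow,
  InfoStream,
  DockAndTopBar,
  VoicePanel,      // activated image and text box of the voice function
  QuickControl,    // quick-control panel
}

interface InfoItem { id: string; layer: Layer }

// S11 + S12: higher layers occlude lower ones, so take the target information
// from the topmost layer present; undefined triggers the fixed-position,
// semi-transparent fallback display described above.
function targetInfo(items: InfoItem[]): InfoItem | undefined {
  if (items.length === 0) return undefined;
  const top = Math.max(...items.map(i => i.layer));
  return items.find(i => i.layer === top);
}

console.log(targetInfo([
  { id: 'initial-screen-car-model', layer: Layer.InitialScreen },
  { id: 'music-app-window', layer: Layer.AppWindow },
])); // -> { id: 'music-app-window', layer: Layer.AppWindow }
```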
1. No target information

The second driving information comprises display content corresponding to the vehicle being in a start-up state or a parked state. The start-up state means the vehicle has been powered on and started successfully but is not yet being driven; the parked state may be a state in which the vehicle speed is zero, such as stopping at a red light during a trip. In these cases it may be determined that the second driving information does not include target information, and "displaying the first identifier of the voice interaction function" in step 102 may specifically include:
1021. displaying the first identifier in a first display style, wherein the first display style indicates that the vehicle is starting up or parked.
Step 1021 is described below with reference to several examples.
Example 11: the second driving information includes the content of the power-on interface (or initial screen) when the vehicle is in the start-up state.

For example, the first driving information includes vehicle power-on information, the gear is in P, the accelerator pedal opening is zero, the engine is not started, and so on; the second driving information displayed on the vehicle display screen according to this first driving information may be the power-on interface content. That is, the display state of the second driving information reflects that the vehicle has just been powered on and started successfully but has not yet been driven. In this case the display state of the first identifier is determined to be the first display state, which includes: the display style of the first identifier is a first style, displayed statically at the target position. For example, fig. 2a shows a feather-like first identifier 21 displayed at a set target position on the vehicle display screen a.
Example 12: the second driving information includes the interface content when the vehicle is in a parked state.

For example, the first driving information includes gear information such as a shift from D to P and a vehicle speed of zero, and the second driving information displayed on the vehicle based on it reflects a screen in which the vehicle has switched from driving to parked. In this case the display state of the first identifier is determined to be the second display state, which includes: the display style of the first identifier is a second style, displayed statically at the target position. For a detailed description of the second style, refer to the related content below.
2. Target information present

2.1. The target information is the driving state information of the vehicle

Step 104, "displaying the first identifier in association with the target information", may specifically be: displaying the first identifier dynamically changing based on the driving state information of the vehicle.
The vehicle driving state information may be instrument information (such as vehicle speed and driving direction) or an image simulating the vehicle's driving (such as the simulated image described above). Specifically, "displaying the first identifier dynamically changing based on the driving state information of the vehicle" may include:
s21, determining a moving direction and a moving speed according to the driving state information of the vehicle;
and S22, displaying the first mark which dynamically changes along the moving direction based on the moving speed.
For example, as shown in fig. 2b, when the vehicle is in automatic driving mode, the simulated image shows the driving scene from a top view. The moving direction is then the heading of the vehicle model 12 in the image's coordinate system, and the moving speed of the model 12 is determined based on, and reflects, the actual speed of the vehicle. When displaying the first identifier, it can be shown dynamically changing along the moving direction, with a change speed related to the moving speed: for example, the change speed of the first identifier may equal the moving speed, or keep a fixed ratio to it. Fig. 2b shows the display state of the dynamically changing first identifier in one frame. As shown, a plurality of first identifiers 21 are arranged in sequence along the moving direction, differing in size and/or transparency, presenting the visual effect of the first identifier following the vehicle model. In a specific implementation, along the moving direction, the sizes of the first identifiers 21 become progressively smaller or larger, and/or their transparencies become progressively smaller or larger.
The example shown in fig. 2c is similar; there the simulated image is from the driver's over-the-shoulder perspective. As shown in fig. 2c, the moving direction of the dynamically changing first identifier 21 may be parallel, or approximately parallel, to the moving direction shown in the simulated image, as long as the dynamically changing identifier visually reflects the vehicle's driving direction and speed.
Further, at least some of the plurality of first identifiers 21 fluctuate within their respective ranges; for example, the identifiers present the visual effect of drifting along the moving direction.
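One way to read S21-S22 in code: the animation rate of the trail tracks the vehicle's moving speed, while size and transparency are graded by position in the trail and each identifier wobbles slightly within its own range. All constants below are illustrative assumptions.

```typescript
// Visual state of the trailing first identifiers (constants illustrative).
interface MarkerStyle { size: number; opacity: number; wobblePx: number }

function trailStyles(count: number, moveSpeed: number, timeS: number): MarkerStyle[] {
  const phase = timeS * (0.5 + 0.05 * moveSpeed);   // change speed tied to moving speed
  return Array.from({ length: count }, (_, i) => ({
    size: 24 * (1 - i / count),                     // gradually smaller along the trail
    opacity: 1 - 0.8 * (i / count),                 // gradually more transparent
    wobblePx: 1.5 * Math.sin(phase + i),            // small fluctuation per identifier
  }));
}

console.log(trailStyles(4, 30, 2.0));
```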
In a more specific embodiment, the driving state information of the vehicle includes an image simulating the driving of the vehicle. Accordingly, step S22, "displaying the first identifier dynamically changing along the moving direction based on the moving speed", may include:
S221, determining the display position of the first identifier based on the image;
S222, displaying, at that display position, the first identifier dynamically changing along the moving direction according to the moving speed.
Further, the image comprises a background reflecting the driving environment of the vehicle and a vehicle model simulating the vehicle. Accordingly, the step S221 "determining the display position of the first identifier based on the image" may include:
s2211, if no target environment element exists in the background in the image, determining the display position of the first identifier based on the vehicle model.
For example, based on the display position of the vehicle model, the display position of the first identifier is determined according to a preset orientation determination rule. The preset orientation determination rule may specify a distance from the display position of the vehicle model and an included angle between a connection line with the display position of the vehicle model and a reference axis (for example, an X axis or a Y axis in a display interface coordinate system).
S2212, if a target environment element exists in the background of the image, determining the display position of the first identifier based on the target environment element;

where the target environment element is a lane, a road sign, a free parking space, a free charging spot, or a navigation destination in the vehicle's driving environment.
One implementation of "determining the display position of the first identifier based on the target environment element" in S2212 is: the display position of the first identifier is in the vicinity of the target environment element, for example at a point on the element's outline.
Another implementation is to determine the position of the first identifier based on the distance between the target environment element and the vehicle model. Specifically, "determining the display position of the first identifier based on the target environment element" may include the following steps:

determining the distance between the vehicle model and the target environment element according to navigation data; and determining the display position of the first identifier based on that distance.
For example, when the distance is smaller than a preset distance, the first identifier is displayed near the target environment element;

and when the distance is greater than or equal to the preset distance, the first identifier is displayed away from the target environment element.

In a specific implementation, the distance may be a vector, i.e., it carries direction information in addition to the distance value: the distance is positive while the vehicle model is approaching the target environment element, and negative once the vehicle model is driving away from it. Correspondingly, the preset distance may be zero: if the distance is positive (greater than zero), the first identifier is displayed near the target environment element; if negative (less than zero), the first identifier is displayed away from it. For example, a position in the display interface farther from the target environment element than a set threshold may be considered away from it; the threshold may be determined based on the size of the display interface and is not limited here.
As shown in fig. 3a, a first identifier is displayed near the target environment element corresponding to a parking space (e.g., environment mark element 32).
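The vector reading of the distance rule could be sketched as follows. The sign convention (positive while approaching, negative once driving away) and the zero threshold follow the text above; the function name and units are assumptions.

```typescript
// Signed-distance placement rule (preset distance zero, per the text).
type Placement = 'near-target' | 'away-from-target';

function identifierPlacement(signedDistanceM: number, presetM = 0): Placement {
  // Positive: vehicle model still approaching the element -> anchor the
  // identifier nearby. Negative: driving away -> move it elsewhere.
  return signedDistanceM > presetM ? 'near-target' : 'away-from-target';
}

console.log(identifierPlacement(12));  // 'near-target'
console.log(identifierPlacement(-3));  // 'away-from-target'
```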
2.2. The target information is the parking state information of the vehicle

Step 104, "displaying the first identifier in association with the target information", may specifically be: displaying, on the vehicle, the first identifier guiding the vehicle to park.

For example, a reversing image and a parking guide line are displayed on the vehicle. A first identifier guiding the driver to turn the steering wheel may then be displayed in the reversing image; for example, the first identifier is dynamically displayed as various guidance prompts based on the current vehicle pose, such as an arrow for turning the steering wheel or an arrow for straightening it.
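A toy version of such pose-driven guidance: compare the current steering angle with the angle the parking path requires and choose a prompt accordingly. The thresholds and prompt names are assumptions, not taken from the patent.

```typescript
// Steering guidance derived from the current vehicle pose (thresholds assumed).
type Guidance = 'turn-left' | 'turn-right' | 'straighten' | 'hold';

function parkingGuidance(steeringDeg: number, requiredDeg: number, tolDeg = 3): Guidance {
  const error = requiredDeg - steeringDeg;
  if (Math.abs(error) <= tolDeg) return 'hold';              // pose matches the guide line
  if (Math.abs(requiredDeg) <= tolDeg) return 'straighten';  // arrow for returning the wheel
  return error > 0 ? 'turn-left' : 'turn-right';             // arrows for steering
}

console.log(parkingGuidance(-10, 25)); // 'turn-left'
```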
2.3. The target information is window information

The window information may be an application window, a popup window corresponding to pushed information, card information, or the like; this embodiment is not limited in this respect.

Correspondingly, step 104, "displaying the first identifier in association with the target information", may specifically be: displaying the first identifier with an effect of highlighting the window information.

For example, as shown in fig. 3c, a first identifier 21 is displayed at a corner of the window's outline, and a halo visually matched to the first identifier may also be displayed outside the window's outline to highlight the window information.
2.4. The target information is multimedia information

Correspondingly, step 104, "displaying the first identifier in association with the target information", may specifically be: displaying the first identifier interacting with the multimedia information.

For example, as shown in fig. 3b, the current interface displays, in the audio playback application window, lyrics that scroll dynamically with the audio. The first identifier may be displayed following the scrolling lyrics, i.e., highlighting the lyric line corresponding to the audio currently playing.
Further, elements displayed in association with the first identifier can be added according to the driving mode of the vehicle (manual, automatic, or assisted driving), so that the added element characterizes the corresponding driving state. On this basis, the method provided in the embodiment of the present application may further include the following steps, sketched in code after this list:
105. if the vehicle is in a manual driving mode, adding a first element displayed in association with the first identifier, the first element characterizing that the vehicle is in a manual driving state;
106. if the vehicle is in an assisted driving mode, adding a second element displayed in association with the first identifier, the second element characterizing that the vehicle is in an assisted driving state;
107. if the vehicle is in an automatic driving mode, adding a third element displayed in association with the first identifier, the third element characterizing that the vehicle is in an automatic driving state.
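Steps 105-107 amount to a lookup from driving mode to the associated element. In the sketch below, the icon names for manual and automatic driving are taken from the figure descriptions later in this text, and the assisted-driving element is assumed to be textual.

```typescript
// Mode element attached to the first identifier (steps 105-107).
type DrivingMode = 'manual' | 'assisted' | 'automatic';

function modeElement(mode: DrivingMode): string {
  switch (mode) {
    case 'manual':    return 'hand-held steering wheel icon';    // first element
    case 'assisted':  return '"assisted driving enabled" text';  // second element (assumed textual)
    case 'automatic': return 'single steering wheel icon';       // third element
  }
}

console.log(modeElement('assisted'));
```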
The following describes embodiments of the present application with reference to specific scenarios.
In scenario 1, the first driving information includes the vehicle gear being in D or R, a non-zero accelerator pedal opening, a non-zero vehicle speed, and so on. The second driving information displayed according to this first driving information includes the simulated image, and the vehicle model in the simulated image is in a driving state (such as driving forward, reversing, or parking); that is, the display state of the second driving information reflects that the vehicle is in a driving state. In this case, the display state of the first identifier may be determined to be the second display state, which includes: the display style of the first identifier is the second style, displayed dynamically, at a display position related to the position of the vehicle model in the simulated image. The second style may be, but is not limited to, a style in which a plurality of first identifiers is arranged and displayed in sequence, and the dynamic display may be, but is not limited to, an effect of following the vehicle model's movement in the simulated image. Displayed in the second display state, the first identifier presents the visual effect of driving along with the vehicle model in the simulated image. More specifically, if the vehicle is determined to be parking according to the first driving information, the first identifier can be displayed in association with the displayed parking route while keeping a certain spatial relation to the vehicle model, so that the displayed first identifier has the effect of guiding the vehicle to park; if the vehicle is determined to be driving forward or reversing according to the first driving information, the first identifier 21 can be displayed as shown in fig. 2b or fig. 2c, presenting the effect of following the vehicle model.
Fig. 2b shows an example of displaying the first identifier 21 of the voice interaction function in the second style when the display state of the second driving information reflects normal forward driving. In fig. 2b, a plurality of first identifiers 21 with gradually increasing size and gradually increasing transparency are arranged in sequence along the driving direction of the vehicle model 12 in the simulated image 1, presenting the visual effect of the first identifier following the vehicle model; the gradual growth of the identifiers along the model's direction of travel reflects that the vehicle is driving forward.

Note that when the second driving information includes vehicle parking state information, the display state of the first identifier may likewise refer to the display state shown in fig. 2b.

As another example, when the second driving information includes vehicle reversing information, the first identifier 21 is also displayed in the second style. Along the reversing direction of the vehicle model in the simulated image (not shown in the figure), a plurality of first identifiers 21 is arranged in sequence with gradually decreasing size and gradually decreasing transparency, presenting the visual effect of the first identifiers following the vehicle model as it reverses; the gradual shrinking and fading of the identifiers along the model's direction of travel reflects that the vehicle is reversing.
In the examples above, a plurality of first identifiers arranged in sequence is presented on the vehicle display screen. The identifiers may be arranged in sequence along the driving direction of the vehicle model in the simulated image, based on the determined display position of each identifier. Specifically, the display position of the identifier at the head of the sequence can be determined based on the position of the vehicle model in the simulated image, and the display positions of the other identifiers then determined from the head identifier's position; this keeps a definite spatial relation between the first identifiers and the vehicle model when the identifiers are displayed following the model, avoiding the poor visual experience of random placement. In a specific implementation, the display position of the head identifier can be determined from the position of the vehicle model in the simulated image and a preset positional relation between identifier and model; this preset relation can be set flexibly and is not limited in this embodiment. For example, the head identifier's display position may be determined, in the display interface coordinate system of the vehicle display screen, to be at a preset distance from the vehicle model in the simulated image, at a set bearing relative to the model. The display position of the second identifier in the sequence can then be determined from the head identifier's display position and the second identifier's size; and so on in turn for the display positions of the remaining identifiers.
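The head-then-chain placement just described might look as follows: a preset offset places the head identifier relative to the vehicle model, and each subsequent identifier is placed from its predecessor, spaced by size. A negative gap lets adjacent identifiers overlap, which the next paragraph permits. All constants are illustrative.

```typescript
// Chained layout of the trailing identifiers (constants illustrative).
interface Point { x: number; y: number }
interface PlacedMarker extends Point { size: number }

function layoutTrail(modelPos: Point, headingRad: number, sizes: number[], gapPx = -4): PlacedMarker[] {
  // Head identifier: preset distance and bearing relative to the vehicle model.
  const headOffsetPx = 40;
  let x = modelPos.x - Math.cos(headingRad) * headOffsetPx;
  let y = modelPos.y - Math.sin(headingRad) * headOffsetPx;
  const placed: PlacedMarker[] = [];
  for (const size of sizes) {
    placed.push({ x, y, size });
    // Each subsequent identifier is placed from the previous one, spaced by
    // its size; a negative gap makes adjacent identifiers overlap.
    x -= Math.cos(headingRad) * (size + gapPx);
    y -= Math.sin(headingRad) * (size + gapPx);
  }
  return placed;
}

console.log(layoutTrail({ x: 200, y: 300 }, Math.PI / 2, [24, 18, 12, 8]));
```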
It should be added that, among the plurality of first identifiers, the display positions of two adjacent identifiers may or may not overlap; this embodiment is not limited in this respect. In the examples shown in fig. 2b and fig. 2c, the display positions of adjacent first identifiers 21 overlap.
It should also be added that, in order for the first identifier to reflect whether the vehicle's current driving mode (such as parking or forward driving) is manual, automatic, or assisted, an element reflecting manual driving, automatic driving, or assisted driving may be added inside the first identifier. For example, if the display state of the second driving information reflects that the vehicle is driving forward normally under manual driving, a manual driving icon 210 (e.g., a hand-held steering wheel icon) may be added to the largest first identifier 21' among the plurality of first identifiers 21 shown in fig. 2b to reflect that the vehicle is driven manually; the display state of the first identifier 21 is then as shown in box 2' of fig. 2b. As another example, if the display state of the second driving information reflects that the vehicle is parking automatically, an automatic driving icon 211 (e.g., a single steering wheel icon) may be added to the largest of the first identifiers shown in fig. 2c to reflect automatic parking; the display state of the first identifier 21 is then as shown in box 2'' of fig. 2c. As yet another example, if the vehicle is driving forward under assisted driving, text such as "assisted driving enabled" is added to the largest of the first identifiers 21; and so on, without limitation here.
In addition, in order to achieve a more realistic following effect, at least some of the first identifiers shown in fig. 2b or fig. 2c may also be made to fluctuate slightly within their respective corresponding ranges.
Besides displaying the first identifier corresponding to the voice interaction function following the vehicle model in the simulated image, the first identifier can also be displayed in other modes. For example, when it is recognized that the display information presented on the vehicle display screen contains a focus element meeting the visual focus requirement, the display mode of the first identifier may be switched so that the first identifier is displayed in association with one focus element, visually prompting the user of the position of that focus element and thereby helping the user quickly pick out focus elements of interest. Here, the visual focus requirement is a preset requirement, which may include, but is not limited to, at least one of the following: the focus element is a multimedia element, such as a window element corresponding to push information (e.g., weather, hot news, advertisements, entertainment gossip) or an interface element in an application interface (e.g., a song title or lyrics); or the focus element is a symbolic environment element in the simulated image, such as an environment mark element corresponding to a parking space, a gas station, a service area, or a lane (e.g., an environment mark element corresponding to a roadway or to a driving direction).

In specific implementation, when it is recognized that the displayed second driving information contains focus elements meeting the visual focus requirement, the number of such focus elements may be one or more; if there are multiple, one focus element may be determined from them as the target focus element to be displayed in association with the first identifier, the manner of associated display being adapted to the type of the target focus element. The focus element is defined from the perspective of analyzing the driver and/or passengers, for example by locking the visual focus on the display interface according to their preferences; the target information mentioned earlier is described at the information level. In fact, the focus element and the target information described herein may have the same meaning. That is, in the above-described embodiment, the target information is determined based on the content items at the top presentation level of the second driving information. The target information may be preset by a program: for example, if a window is among the content items at the top presentation level, the window is the target information; if a simulated image is among them, the simulated image is the target information; and so on. In practice, the target information may also be determined by analyzing the preferences of the driver and/or passengers. That is, the target information may be determined by the same technical means as the focus element in this section.
Specifically, if the displayed second driving information includes a plurality of focus elements meeting the visual focus requirement, one focus element may be determined as the target focus element from the plurality of focus elements by any one of the following methods (a selection sketch is given after this list):
Selecting a focus element at random from the plurality of focus elements and determining it as the target focus element; as shown in fig. 3c, a focus element 31 may be randomly selected from the plurality of focus elements and determined as the target focus element.
Predicting the intention of the driver or passengers in the vehicle according to the acquired data information, and determining one focus element matching that intention as the target element. The acquired data information includes, but is not limited to, at least one of the following: vehicle data, such as vehicle positioning information and information collected by sensors on the vehicle (for example, environment information around the vehicle collected by radar and distance sensors; images outside the vehicle collected by a camera; weather identified from the collected images, such as rain, snow or sunshine; driving speed; remaining battery charge; remaining fuel); and interaction data between the occupants and the vehicle, such as information actively input by the user (for example, music preference, driving preference, destination), information generated through interaction between the occupants and the vehicle (for example, navigation data generated from the destination input by the driver), facial images of the occupants, and body movement images.
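As a sketch only (the original discloses no code), one way the target focus element might be chosen by the two methods above; the dictionary fields, the tag-based intent matching and the element ids are assumptions made for illustration:

```python
import random

def pick_target_focus_element(focus_elements, driver_intent=None):
    """Choose one target focus element from the candidates that meet the
    visual focus requirement: prefer an element matching the predicted
    occupant intent; otherwise fall back to random selection."""
    if driver_intent is not None:
        matched = [e for e in focus_elements if driver_intent in e.get("tags", ())]
        if matched:
            return matched[0]
    return random.choice(focus_elements)

elements = [
    {"id": 30, "type": "window", "tags": ("news",)},
    {"id": 31, "type": "window", "tags": ("music",)},
    {"id": 32, "type": "env_mark", "tags": ("parking",)},
]
# intent predicted from vehicle data / interaction data, e.g. nearing the destination
print(pick_target_focus_element(elements, driver_intent="parking"))
```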
The following describes, in conjunction with several examples, specific implementations of determining a target focus element and presenting the first identifier in association with the target focus element.
Example 41
Referring to fig. 3a, the second driving information displayed on the vehicle display screen reflects that the vehicle is driving forward. According to the navigation data corresponding to the navigation map 4 in the displayed second driving information, it is determined that the distance between the vehicle and the destination is 400 m; at this time, the intention of the occupants can be predicted to be parking. Accordingly, the simulated image in the displayed second driving information can be analyzed based on this intention. When it is recognized that the environment image contained in the simulated image has an environment mark element 32 corresponding to a parking space adapted to the destination (the distance between the environment mark element 32 and the vehicle model being smaller than a first preset distance), the display state of the first identifier is switched from the first display state (for example, the display state of the first identifier 21 shown in fig. 2b) to a third display state, so that the first identifier 21 is displayed near the environment mark element 32 corresponding to the parking space, thereby prompting the driver of the position of the parking space. Specifically, a plurality of sequentially arranged first identifiers 21 may be displayed in the region 2' near the environment mark element 32, the display pattern of these sequentially arranged first identifiers 21 being the second style described above. In addition, to further prompt the driver of the position of the parking space, the environment mark element 32 corresponding to the parking space can be highlighted. The manner of highlighting the environment mark element 32 is not limited in this embodiment, as long as, once highlighted, it visually prompts the driver of the position of the corresponding parking space.
What needs to be added here is: when the first identifier 21 is displayed in the third display state near the environment mark element 32, it may be displayed statically or dynamically. For example, the first identifier 21 may be displayed with a dynamic breathing effect, or displayed circling above the environment mark element 32 corresponding to the parking space; this is not limited herein.
Further, when it is detected that the vehicle starts parking into the parking space corresponding to the environment mark element 32, the display state of the first identifier may be switched again; specifically, it may be switched from the third display state to the display state of the first identifier 21 shown in fig. 2c (the first display state), so that the first identifier 21 reflects that the vehicle is backing into the parking space. Alternatively, if it is detected that the vehicle has reached the parking space corresponding to the environment mark element 32 but does not park and instead drives away, then when the distance from that parking space reaches, for example, 500 m (at which point the distance between the vehicle model and the environment mark element 32 is greater than or equal to a second preset distance), the display state of the first identifier may be switched again. In particular, the display state of the first identifier may be switched from the third display state to a fourth display state, so that the first identifier is displayed away from the environment mark element 32. The fourth display state may be the same as or different from the first display state (e.g., the display state of the first identifier 21 shown in fig. 2b); this is not limited, as long as the first identifier 21 is displayed away from the environment mark element 32.
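For illustration, a minimal sketch of the display-state switching of Example 41; the concrete values chosen for the first and second preset distances and the string state names are assumptions:

```python
FIRST_PRESET_DISTANCE = 50.0    # assumed threshold for "near the parking space"
SECOND_PRESET_DISTANCE = 500.0  # assumed threshold for "has driven away"

def next_display_state(current_state, distance_to_slot, parking_started):
    """State switching of the first identifier around a parking-space
    environment mark element, following Example 41."""
    if parking_started:
        return "first"            # reflect backing into the parking space
    if distance_to_slot < FIRST_PRESET_DISTANCE:
        return "third"            # displayed near the environment mark element
    if current_state == "third" and distance_to_slot >= SECOND_PRESET_DISTANCE:
        return "fourth"           # displayed away from the environment mark element
    return current_state

state = "first"
for dist, parking in [(600, False), (40, False), (40, True)]:
    state = next_display_state(state, dist, parking)
    print(dist, parking, "->", state)
```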
Example 42: assuming that the display content presented on the vehicle display screen includes multimedia information (such as a song list) and, accordingly, the determined target focus element is a media mark element in that multimedia information, the first identifier 21 may be displayed in a fifth display state to present an effect of interactive display with the multimedia information. Specifically:
example 421, assuming that the target focus element is multimedia information (which is the content of the song being played) shown in fig. 3b, it is determined that the driver intends to sing possibly following the currently played song according to the humming habit of the driver, and at this time, the media mark element 33 of the lyric corresponding to the progress of the currently played song can be determined as the target focus element; accordingly, the first marker element 21 may be displayed at an angular position of the media marker element 33, such as statically or dynamically, such as in a breath-action effect. Alternatively, it may also be displayed in a dynamic manner, such as in a winding circle, around the media mark element 33, etc.
Example 422: assuming that the determined target focus element is a video playing interface of a video application displayed on the vehicle display screen, the first identifier 21 may be displayed around the video playing interface in a dynamic manner such as circling; alternatively, whether comment content (in the form of text or graphic elements) appears in the video being played may be identified, and if comment content appears, an effect of the first identifier moving along with the comment content may be displayed on the video playing interface based on the moving direction of the comment content.
Example 43: assuming that the display content presented on the vehicle display screen contains push information, see the window elements 30 corresponding to a plurality of pieces of push information shown in fig. 3c. According to historical data related to the occupants, the occupants' degree of interest in the push information corresponding to each window element 30 can first be determined; then one window element is selected from the plurality of window elements 30 as the target focus element based on the degree of interest corresponding to each window element. In specific implementation, the plurality of window elements may be sorted by their corresponding degrees of interest, and the first-ranked window element taken as the target focus element. For example, if the target focus element is the window element 31, the first identifier 21 may be displayed at, but not limited to, the upper left corner of the window element 31. In addition, besides displaying the first identifier 21 at the upper left corner of the window element 31, the window element 31 may be highlighted, where the highlighting may be, but is not limited to, adding a shadow or lighting effect to the window element 31; for example, a lighting effect in a conspicuous color (e.g., purple) may be added around the periphery of the window element 31.
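As a sketch only, the interest-degree ranking of Example 43; the scores and the window ids are assumptions made for illustration:

```python
def pick_pushed_window(windows, interest_of):
    """Sort the pushed-information window elements by the occupants' degree
    of interest (derived from historical data) and take the first-ranked one
    as the target focus element."""
    ranked = sorted(windows, key=interest_of, reverse=True)
    return ranked[0] if ranked else None

# assumed interest scores for three pushed windows, keyed by window id
scores = {30: 0.2, 31: 0.9, 32: 0.5}
print(pick_pushed_window([30, 31, 32], scores.get))  # -> 31
```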
Further, if it is detected that the focus element contained in the displayed second driving information disappears and the second driving information reflects that the vehicle is in a driving state, the first identifier may resume following the vehicle model contained in the simulated image of the second driving information for display.
It should be noted here that: to ensure driving safety, the precondition for the second driving information displayed on the vehicle display screen in the above example to include multimedia information (e.g., window elements corresponding to push information) may be that a parking event triggered by the driver has been detected (e.g., the gear has been switched from a driving gear to a parking gear). In specific implementation, a gear-shift event occurring in the vehicle is determined by monitoring the gear information of the vehicle; the gear information may be detected by a sensor on the vehicle or obtained via an internet platform, and this embodiment is not particularly limited as long as the gear information can be accurately detected. The main purpose of detecting the gear is to determine the vehicle state. When the detected gear information is a driving gear, the vehicle is in a driving state; the second driving information displayed on the vehicle display screen in this state contains no multimedia information, so as to ensure the driver's safety. When the detected gear information is a parking gear, the vehicle is in a non-driving state with zero speed, and the second driving information displayed in this state may contain multimedia information, so as to provide entertainment services for the occupants.
The gears included in the driving gear and the parking gear may differ for different types of vehicles. For example, for an automatic-transmission vehicle, the driving gear may include the D gear (forward) and the R gear (reverse), and the parking gear may include the P gear (park) and the N gear (neutral). In the following, the implementation in which the second driving information displayed on the vehicle display screen includes window elements corresponding to push information is described, taking the driving gear as the D gear and the parking gear as the P gear.
For example, as shown in fig. 3d, when the vehicle is in a forward gear (D gear), the vehicle is in a driving state; at this time, the first identifier 21 traveling along with the vehicle model 12 in the simulated image is displayed on the vehicle display screen A, and, to ensure driving safety, the vehicle pushes no information (i.e., the displayed second driving information contains no window elements corresponding to push information). When the vehicle is switched from the D gear to the parking gear (P gear), in response to this first gear-shift event the vehicle executes an information push procedure and displays at least one window element 30 corresponding to push information on the vehicle display screen (i.e., the displayed second driving information includes window elements corresponding to push information), and the first identifier 21 stops following and is displayed statically. If none of the displayed window elements 30 needs visual focus, the displayed window elements 30 may occlude the first identifier 21; if among the displayed window elements 30 there is a window element 31 that needs visual focus, the first identifier 21 may be displayed at the window edge of that window element 31.
Further, when a shift of the vehicle from the P gear to the D gear is detected (as indicated by the dashed arrow), in response to this second gear-shift event a window-disappearing procedure is executed, so that the at least one window element 30 originally presented on the vehicle display screen A disappears, and the first identifier 21 accordingly resumes following.
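For illustration, a minimal sketch of the gear-driven display logic described above; the dictionary-based UI state and the event handling are assumptions:

```python
def on_gear_change(gear, ui):
    """Gear-driven display logic: in D the identifier follows the vehicle
    model and no push windows are shown; in P push windows appear and the
    identifier stops following and is shown statically."""
    if gear == "P":                          # first gear-shift event: D -> P
        ui["push_windows_visible"] = True    # information push procedure
        ui["identifier_following"] = False
    elif gear == "D":                        # second gear-shift event: P -> D
        ui["push_windows_visible"] = False   # window-disappearing procedure
        ui["identifier_following"] = True    # resume following
    return ui

ui_state = {"push_windows_visible": False, "identifier_following": True}
for g in ("P", "D"):
    print(g, on_gear_change(g, ui_state))
```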
The above all describes, from the perspective of the first identifier corresponding to the voice interaction function (the identifier of the to-be-activated state), how display is performed while the voice interaction function is in the to-be-activated state. In the scheme of the present application, after the voice interaction function switches from the to-be-activated state to the activated state, it also has different display states for the different voice interaction stages of the voice interaction with the occupants. Specifically, an occupant (the driver or a passenger) utters an activation voice for the voice interaction function of the vehicle in the to-be-activated state, and the vehicle, in response to this activation voice, activates the voice interaction function, which is then in the activated state. The activation voice (also called wake-up voice) includes an activation word (also called wake-up word) corresponding to the voice interaction function; the activation word may be a default activation word or a custom one, which is not limited herein. For example, the activation word may be "Mars one", "little fly", or the like. When the voice interaction function is in the activated state, the occupants can interact with the vehicle by voice through the voice interaction function.
In order to let the occupants clearly know, visually, which voice interaction stage the voice interaction process is in, and thereby improve the usability of voice interaction, this embodiment displays on the vehicle display screen, for each voice interaction stage, a second identifier of the voice interaction function adapted to that stage (such as the second identifier 20 shown in fig. 5a). Like the first identifier, the second identifier may be in two-dimensional or three-dimensional form, for example a polygonal icon, without limitation. Based on this, the method provided by this embodiment may further include the following steps:
201. in response to an activation voice uttered by a first user for the voice interaction function, determining the voice interaction stage corresponding to the first user;
202. displaying a second identifier adapted to the voice interaction stage corresponding to the first user;
wherein the second identifier is used for prompting that the voice interaction function is in an activated state.
For the description of the activation voice, see the related contents above. It should be added that a user (e.g., the first user, or the second user described below) in this embodiment refers to an occupant of the vehicle, which may include the drivers in the front seats (the primary driver or the secondary driver) and the passengers in the rear row.
Generally, the voice interaction process includes, but is not limited to, the following stages: a sound reception stage, a listening stage, a thinking stage, a voice broadcast stage and a full-duplex stage. The sound reception stage is the stage in which the voice interaction function has been activated but the user has not yet spoken; the listening stage is the stage in which the user speaks and the uttered voice is collected; the thinking stage is the stage in which the collected voice is recognized and analyzed; the voice broadcast stage broadcasts the execution result to the occupants after the function corresponding to the voice recognition result has been executed; and in the full-duplex stage, voice interaction can proceed without wake-up, supporting dual-channel voice communication, barge-in at any time, and the like; for example, while a voice broadcast is in progress, voice uttered by the occupants can still be collected and recognized. Based on these stages of the voice interaction process, the specific forms of the second identifier 20 of the voice interaction function displayed for the different voice interaction stages are described below with reference to fig. 5a and 5b.
As shown in fig. 5a and 5b, when the voice interaction function of the vehicle is in the to-be-activated state, the first identifier corresponding to the voice interaction function displayed on the vehicle display screen may be the plurality of sequentially arranged first identifiers 21. Further, if the primary driver utters an activation voice for the voice interaction function, the vehicle responds to the activation voice and activates the voice interaction function. If no user speech is detected after activation, it is determined that the current voice interaction stage is the sound reception stage, and an identifier 23 (or identifier 23') adapted to the sound reception stage is displayed dynamically on the vehicle display screen; the identifier 23 (or 23') prompts the primary driver that the voice interaction function has been activated but no speech has been detected. Specifically, the dynamic display of the identifier 23 may be, but is not limited to, a regular rhythm of its actionable elements (e.g., the plurality of vertical-bar elements 231 of different lengths in the identifier 23), presenting an animation of receiving sound. When rendered, the color transparency of the actionable elements in the identifier 23 is a first numerical value. The dynamic display of the identifier 23' may be, but is not limited to, rotation, such as rotating left and right.
When the primary driver is detected speaking, for example when the voice "navigation" uttered by the primary driver is detected, it is determined that the current voice interaction stage is the listening stage for listening to the user; the displayed identifier 23 (or 23') is then updated to the identifier 24 (or 24') adapted to the listening stage, and the identifier 24 (or 24') is displayed dynamically. The dynamic display of the identifier 24 may be, but is not limited to, making its actionable elements (such as the plurality of vertical-bar elements 241 of different lengths) pulse along with the primary driver's speech, presenting an animation of listening; when rendered, the color transparency of the identifier 24 is a second numerical value. The dynamic display of the identifier 24' may be, but is not limited to, gradual enlargement, switching to a rotating display once the enlargement reaches a preset extent. At this stage, the Voice Activity Detection (VAD) function corresponding to the voice interaction function is in effect. VAD is a speech signal processing technique that, simply put, separates effective speech signals from useless speech or noise signals, making subsequent work such as speaker recognition, semantic recognition and speech emotion analysis more efficient; it is a necessary and critical link in the speech processing pipeline.
Further, if the primary driver's speech is not detected within a certain period (e.g., 3 seconds), it is determined that the primary driver has finished speaking, and the current voice interaction stage enters the thinking stage (also called the loading stage); the displayed identifier 24 (or 24') is then updated to the identifier 25 (or 25') adapted to the thinking stage, and the identifier 25 (or 25') is displayed dynamically. The dynamic display of the identifier 25 may be, but is not limited to, making its actionable elements (e.g., the plurality of diagonal-line elements 251 of different lengths) present a loading action, expressing that speech semantic recognition and analysis are being performed on the collected voice and/or that the related function is being executed according to the recognition result. The technique used for semantic recognition may be, but is not limited to, Natural Language Understanding (NLU). When rendered, the color transparency of the identifier 25 is a third numerical value. The dynamic display of the identifier 25' may be, but is not limited to, a regular rhythm of the two arrows in the identifier 25'.
Still further, after the speech semantic recognition and analysis ends, it may be determined that the current voice interaction stage enters the voice broadcast stage; the displayed identifier 25 (or 25') is then updated to the identifier 26 (or 26') adapted to the voice broadcast stage, and the identifier 26 (or 26') is displayed dynamically. Specifically, if it is determined, based on the voice recognition result and/or the execution result of the related function, that recognition succeeded and/or execution succeeded, the displayed identifier 25 may be updated to the identifier 261 adapted to the voice broadcast stage, with the actionable elements in the identifier 261 (e.g., its plurality of vertical-bar elements of different lengths) following the broadcast rhythm to present a positive voice broadcast state; alternatively, the displayed identifier 25' may be updated to the identifier 261' adapted to the voice broadcast stage, with the identifier 261' rotating to present a positive voice broadcast state. If, based on the voice recognition result and/or the execution result, any of the following is determined: recognition failed, execution failed, or recognition succeeded but execution failed, then the displayed identifier 25 may be updated to the identifier 262 adapted to the voice broadcast stage, and its actionable elements (such as the two-dimensional character head 2621 in the identifier 262) controlled to perform negative actions such as shaking the head or crying, presenting a negative voice broadcast state; alternatively, the displayed identifier 25' may be updated to the identifier 262' adapted to the voice broadcast stage, with the actionable broken-line arrow in the identifier 262' moving like a wave to present a negative voice broadcast state. When rendered, the color transparency of the identifier 26 is a fourth numerical value. The identifier 26 displayed in the voice broadcast stage lets the primary driver perceive, through visuals and animation, the different emotions of the voice interaction function during interaction, so as to better establish an emotional connection between the user and the voice interaction function.
After the voice broadcast ends, it is determined that the current round of voice interaction with the primary driver is finished, and the current voice interaction stage enters the full-duplex stage; the displayed identifier 26 (or 26') is then updated to the identifier 27 (or 27') adapted to the full-duplex stage, where the color transparency of the identifier 27 is a fifth numerical value. The full-duplex stage corresponding to the identifier 27' is further divided into a full-duplex sound reception segment, a full-duplex listening segment and a full-duplex loading segment; these three segments function similarly to the sound reception, listening and thinking stages, the difference being that the voice interaction is wake-free and can support barge-in and multiple voice communication channels. After the current round of voice interaction ends, it is determined that the full-duplex sound reception segment is entered, and the displayed identifier 26' is updated to the identifier 271' adapted to that segment; the identifier 271' may be, but is not limited to being, displayed in a rotating dynamic manner. Further, if the primary driver is detected uttering voice again, it is determined that the full-duplex listening segment is entered, and the displayed identifier 271' is updated to the identifier 272' adapted to that segment, which may likewise be, but is not limited to being, displayed in a rotating dynamic manner. After listening ends, it is determined that the full-duplex loading segment is entered, and the displayed identifier 272' is updated to the identifier 273' adapted to that segment; the dynamic display of the identifier 273' may be, but is not limited to, a regular rhythm of the two arrows pointing in different directions within it.
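By way of illustration only (the original discloses no code), a minimal Python sketch of the stage progression described above; the event names and the mapping of the 3-second silence timeout to a single event are assumptions:

```python
from enum import Enum, auto

class Stage(Enum):
    RECEPTION = auto()    # activated, user not yet speaking
    LISTENING = auto()    # collecting the user's speech
    THINKING = auto()     # recognizing and analyzing the speech
    BROADCAST = auto()    # announcing the execution result
    FULL_DUPLEX = auto()  # wake-free follow-up interaction

def advance(stage, event):
    """Stage transitions driven by detection events; unknown events keep
    the current stage."""
    transitions = {
        (Stage.RECEPTION, "speech_detected"): Stage.LISTENING,
        (Stage.LISTENING, "silence_timeout"): Stage.THINKING,  # e.g. no speech for 3 s
        (Stage.THINKING, "analysis_done"): Stage.BROADCAST,
        (Stage.BROADCAST, "broadcast_done"): Stage.FULL_DUPLEX,
        (Stage.FULL_DUPLEX, "speech_detected"): Stage.LISTENING,
    }
    return transitions.get((stage, event), stage)

s = Stage.RECEPTION
for ev in ("speech_detected", "silence_timeout", "analysis_done", "broadcast_done"):
    s = advance(s, ev)
    print(ev, "->", s.name)
```

Each stage would correspond to one of the identifiers 23 to 27 described above.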
After the voice interaction stage enters the full-duplex stage, a full-duplex exit countdown procedure starts to execute. If, when the countdown ends, no further voice from the occupants has been detected, it is determined that the voice exit condition is met, and the voice interaction function returns to the to-be-activated state. If the occupants are detected uttering voice again before the countdown ends, it is determined that the voice exit condition is not met; the full-duplex stage is maintained, and the newly uttered voice is recognized.
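Likewise for illustration only, a minimal sketch of the full-duplex exit countdown described above, assuming a polling microphone interface (the listen_once callable) and an arbitrary countdown length:

```python
import time

def full_duplex_loop(listen_once, countdown_s=8.0):
    """Run the full-duplex exit countdown: if no voice is detected before
    the countdown ends, the exit condition is met and the function returns
    to the to-be-activated state; any detected voice keeps the full-duplex
    stage alive by restarting the countdown."""
    deadline = time.monotonic() + countdown_s
    while time.monotonic() < deadline:
        utterance = listen_once()            # non-blocking check for new speech
        if utterance is not None:
            deadline = time.monotonic() + countdown_s  # exit condition not met
            print("recognizing:", utterance)
        time.sleep(0.1)
    return "to_be_activated"

# usage with a stub microphone that hears one utterance and then goes silent
heard = iter(["air conditioner 26 degrees"])
print(full_duplex_loop(lambda: next(heard, None), countdown_s=0.5))
```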
Here, it should be noted that the first to fifth numerical values may be the same or different. In this embodiment, the first and fifth values are larger relative to the second to fourth values, in order to visually weaken the corresponding identifiers and reduce disturbance to the user.
Further, besides the stages described above, the voice interaction stages corresponding to the voice interaction function may also include a voice-disabled stage. Accordingly, the display style of the second identifier adapted to the voice-disabled stage may be, but is not limited to, the static display shown in fig. 5c, as long as it conveys that the voice interaction function is unavailable. The reasons for unavailability are various, such as an ongoing call or out-of-vehicle voice activation, and are not limited herein.
It should be added that, in the voice interaction process described above, after it is determined that the voice interaction stage has entered the listening stage, a text box may be displayed near the displayed second identifier adapted to the listening stage, so as to show in the text box the text content corresponding to the voice being listened to. The text displayed in the text box may be, but is not limited to, text converted from the occupant's voice using Automatic Speech Recognition (ASR) technology. Specifically, the text may first be displayed statically in the text box with a certain alignment (e.g., left-aligned or right-aligned), and once the text fills the text box 28, the display may switch to scrolling (e.g., scrolling left or right). For example, in the example shown in fig. 5d, a text box 28 is displayed to the right of the displayed identifier 24 adapted to the listening stage; the text corresponding to the listened-to voice, such as "navigate" or "navigate to Hongqiao Airport", is first displayed left-aligned in the text box 28, and when the text fills the text box 28 the display switches to scrolling left. Further, when listening ends and the thinking stage is entered, only the originally displayed identifier 24 may be updated to the identifier 25 adapted to the thinking stage, while the displayed text box and the text content in it are not updated and the text continues to be displayed dynamically. After the voice broadcast stage is entered, the originally displayed identifier 25 may be updated to the identifier 26 adapted to the voice broadcast stage, and the corresponding broadcast content may be shown in the displayed text box 28 as it is announced; alternatively, the text box and its text content may be left unchanged, which is not limited herein. After the voice broadcast stage, the current round of voice interaction ends and the voice interaction stage enters the full-duplex stage; the text box display corresponding to the full-duplex stage is detailed below:
One way of displaying the text box is non-real-time display: while the voice uttered by the occupant is being recognized, the corresponding text content is not displayed in real time; the text box is displayed only in the voice broadcast stage, so as to present the corresponding broadcast content in it.
For example, referring to fig. 6a, in the full-duplex stage, when the occupant is detected saying "air conditioner 26 degrees", the voice is recognized without displaying a text box near the displayed identifier 27 adapted to the full-duplex stage. Only when the uttered voice is recognized as semantically matching the preset content and the corresponding function has been executed according to the semantic recognition result (e.g., the air conditioner has been successfully set to 26 degrees) is a text box 28 displayed as the voice broadcast of the execution result begins; for example, a text box 28 is displayed to the right of the identifier 26 adapted to the voice broadcast stage, with "air conditioner 26 degrees" shown in it. After the broadcast ends, the full-duplex stage is entered again; at this point only the identifier 27 is displayed, without the text box, and the full-duplex exit countdown procedure is executed anew.
Alternatively, as shown in fig. 6b, after the voice broadcast in the above example ends, the listening stage may be entered, with the text box 28 continuing to be displayed and showing a text prompt that the occupant may keep speaking, such as "I'm still listening"; after a period of time (e.g., 2 s) the text box 28 disappears and the full-duplex stage is entered again.
For another example, referring to fig. 6c, in the full-duplex stage, when the occupants are detected chatting and uttering voice, the full-duplex exit countdown procedure is interrupted so that the uttered voice can be recognized; during recognition, no text box is displayed near the displayed identifier 27 adapted to the full-duplex stage. Further, when the content of the uttered voice is recognized as not semantically matching the preset content, a text box 28 is displayed, for example to the right of the identifier 27, and the voice is converted into text (e.g., a joke) according to the recognition result and shown in the text box 28, where the text may be displayed in an inconspicuous color such as gray, presenting the effect of rejecting the user's voice. After the text box 28 has been displayed for a preset period (e.g., 2 s), it disappears, the full-duplex stage continues, and the interrupted full-duplex exit countdown procedure resumes.
Another way of displaying the text box is real-time display: whenever the occupant is detected uttering voice, the text box is displayed, and the text corresponding to the recognized voice is shown in it.
For example, referring to fig. 6d, in the full-duplex stage, when the occupant is detected saying "air conditioner 26 degrees", a text box 28 is displayed near the displayed identifier 27 adapted to the full-duplex stage, with "air conditioner 26 degrees" shown in it. While the thinking stage and the voice broadcast stage are entered, the text box 28 remains displayed with "air conditioner 26 degrees" in it. After the voice broadcast ends, the text box 28 disappears and the full-duplex stage is entered.
For another example, referring to fig. 6e, in the full-duplex stage, when the occupant is detected saying, while chatting, the voice "go for a drive for a while", a text box 28 is displayed, for example to the right of the displayed identifier 27 adapted to the full-duplex stage, with "go for a drive for a while" shown in it. Subsequently, when the thinking stage is entered, the text box 28 remains displayed with "go for a drive for a while" in it. After recognition and analysis, when the uttered content is recognized as not semantically matching the preset content, the full-duplex stage is entered and the text box 28 is still displayed for a period (e.g., 2 s), with the text "go for a drive for a while" shown in an inconspicuous color such as gray, presenting the effect of rejecting the user's voice. After the 2 s, the text box 28 disappears.
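By way of illustration, a minimal sketch of the static-then-scrolling text box behavior described for the listening stage above; the box width in characters and the character-based scrolling are assumptions made to keep the sketch self-contained:

```python
def render_text_box(text, box_chars=18):
    """Display recognized text left-aligned until the box is full, then
    switch to left-scrolling so the tail of the utterance stays visible."""
    if len(text) <= box_chars:
        return text.ljust(box_chars)          # static, left-aligned
    return text[len(text) - box_chars:]       # scrolled left: show the tail

for partial in ("navigate", "navigate to Hongqiao Airport, fastest route"):
    print("[" + render_text_box(partial) + "]")
```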
From the above, the scheme provided by this embodiment may further include:
203. when the first user is monitored uttering a voice instruction, displaying a text box at one side of the second identifier, and presenting the text corresponding to the voice instruction in the text box.
In order to help the occupants see clearly which voice interaction stage is current and what the corresponding interaction content is, in this embodiment the vehicle display screen is divided into multiple display areas, some of which correspond to spatial positions inside the vehicle. During voice interaction, the display position of the second identifier 20 corresponding to the voice interaction function is determined from the spatial position of the speaking occupant in the vehicle using this correspondence, and the second identifier adapted to the current voice interaction stage is displayed at the determined position.
Fig. 7a shows an example in which the vehicle display screen is divided into four display areas (display area A1 to display area A4). The display area A1 is mainly used for displaying the simulated image. The display area A2 corresponds to the spatial position of the primary driving seat and is used for displaying the interaction information to be shown when the primary driver interacts with the vehicle (e.g., by voice). The display area A3 corresponds to the spatial position of the secondary driving seat and is used for displaying the interaction information to be shown when the secondary driver interacts with the vehicle. The display area A4 corresponds to the spatial position of the rear-row seats and is used for displaying the interaction information to be shown when rear-row passengers interact with the vehicle. More specifically, taking the display area A2 as an example, referring to fig. 7b, the display area A2 is further divided into a voice area a21 and an application area a22: the voice area a21 displays the second identifier corresponding to the voice interaction function, the voice interaction result, and the like for the primary driver's voice interaction with the vehicle, while the application area a22 displays application-related content to be presented when the primary driver interacts with the vehicle, such as an application window (the human-machine interface of a running application). It should be added that the voice area a21 and the application area a22 can be panned and zoomed; for example, when the voice interaction result to be presented cannot fit in the current voice area a21, the application area a22 can be panned horizontally to the left to shrink it, and the voice area a21 enlarged accordingly.
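For illustration, a minimal sketch of the seat-to-display-area correspondence and of the pan/zoom adjustment of the areas a21 and a22 described above; the mapping keys and the widths are assumptions:

```python
SEAT_TO_AREA = {               # assumed correspondence, per fig. 7a
    "primary_driver": "A2",
    "secondary_driver": "A3",
    "rear_row": "A4",
}

def area_for(seat):
    """Map the speaker's in-cabin position to the display area that hosts
    the second identifier, the text box and other interaction content."""
    return SEAT_TO_AREA[seat]

def enlarge_voice_area(voice_w, app_w, needed_w):
    """Pan/zoom sketch: shrink the application area a22 so that the voice
    area a21 can grow enough to fit an oversized voice interaction result."""
    total = voice_w + app_w
    voice_w = min(needed_w, total)
    return voice_w, total - voice_w

print(area_for("secondary_driver"))       # -> A3
print(enlarge_voice_area(300, 500, 420))  # -> (420, 380)
```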
For further dividing the display area A3 and the display area A4, reference may be made to the above-mentioned further division of the display area A2, and details thereof are not repeated here.
In the following, the principle of displaying the second identifier corresponding to the voice interaction function and the voice interaction result during voice interaction is described through several examples, in conjunction with the above correspondence between the multiple display areas on the vehicle display screen and the spatial positions inside the vehicle.
Example 51, described in conjunction with fig. 8a to 8c: the voice interaction function of the vehicle supports a one-person speaking mode.
The one-person speaking mode is understood to mean that the voice interaction function supports voice interaction with only one occupant at a time.
Example 511: display is performed in the display area corresponding to the spatial position of the speaking occupant in the vehicle.
Referring to fig. 8a, assume that the primary driver utters an activation voice for the voice interaction function. By analyzing the acquired activation voice, the spatial position of the primary driver in the vehicle (i.e., the spatial position of the primary driving seat) can be determined. Based on the pre-established correspondence between the display areas of the vehicle display screen and the spatial positions in the vehicle, it can be determined that the display area corresponding to the spatial position of the primary driver who uttered the activation voice is the display area A2; that is, the second identifier corresponding to the voice interaction function may be displayed in the display area A2, for example, but not limited to, at the upper left corner of the voice area a21 in the display area A2. Thereafter, if speech uttered by the primary driver is detected, a text box may also be displayed, for example at the left side of the second identifier 20, to convert the primary driver's speech into text for presentation in the text box. Further, if a voice interaction result needs to be presented later, it can be presented, for example, below the second identifier 20 (e.g., the seat heating result card 6).
It should be noted that the activation voice uttered by the primary driver for the voice interaction function may be acquired by the vehicle-mounted microphone array system. The vehicle-mounted microphone array system can use time-delay-estimation techniques to localize the sound source: specifically, the time differences with which the activation voice reaches the different microphone arrays can be calculated, and the spatial position of the primary driver in the vehicle then determined at least from the calculated time differences and the positions of the different microphone arrays.
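For illustration, a minimal sketch in the spirit of the delay-estimation localization described above; the cabin geometry, the microphone layout, and the brute-force matching over candidate seats are assumptions standing in for a real array-processing algorithm:

```python
SPEED_OF_SOUND = 343.0  # m/s

def locate_speaker(mic_positions, arrival_times, seat_positions):
    """Coarse TDOA localization: pick the candidate seat whose predicted
    inter-microphone arrival-time differences best match the measured ones."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    t0 = arrival_times[0]
    measured = [t - t0 for t in arrival_times]
    best_seat, best_err = None, float("inf")
    for seat, pos in seat_positions.items():
        d0 = dist(pos, mic_positions[0]) / SPEED_OF_SOUND
        predicted = [dist(pos, m) / SPEED_OF_SOUND - d0 for m in mic_positions]
        err = sum((p - q) ** 2 for p, q in zip(predicted, measured))
        if err < best_err:
            best_seat, best_err = seat, err
    return best_seat

mics = [(0.2, 0.5), (1.2, 0.5), (0.7, 2.0)]   # assumed cabin layout, in meters
seats = {"primary_driver": (0.4, 1.0), "secondary_driver": (1.0, 1.0),
         "rear_row": (0.7, 2.3)}
# synthetic arrival times for a voice uttered from the primary driving seat
src = seats["primary_driver"]
times = [((src[0]-m[0])**2 + (src[1]-m[1])**2) ** 0.5 / SPEED_OF_SOUND for m in mics]
print(locate_speaker(mics, times, seats))  # -> primary_driver
```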
Example 512: the display corresponding to a voice interaction cannot be completed in the corresponding display area.
Referring to fig. 8b, assume that the secondary driver utters an activation voice for the voice interaction function; accordingly, the content to be displayed for the secondary driver's voice interaction with the vehicle is preferentially presented in the display area A3. If during the voice interaction the secondary driver utters a voice such as "open vehicle control", and the voice interaction result (i.e., the interface content corresponding to vehicle control) cannot be displayed in the display area A3 due to the size limitation of its application area, the voice interaction result can instead be presented in the display area A2.
Example 513: a later-triggered voice interaction interrupts a preceding voice interaction.
Referring to fig. 8c, if it is determined from the activation voice uttered by the current occupant that his or her spatial position in the vehicle corresponds to the primary driving seat, the voice interaction content to be displayed, such as the second identifier 20 corresponding to the voice interaction function and a text box (not shown) for displaying text, is shown in the display area A2. If the occupant's spatial position is determined to correspond to the secondary driving seat, the voice interaction content to be displayed is correspondingly shown in the display area A3. If the occupant's spatial position is determined to correspond to a rear-row seat, the voice interaction content to be displayed is correspondingly shown in the display area A4.
Further, if another occupant is subsequently detected uttering an activation voice, the current voice interaction is interrupted and the display position of the second identifier 20 and related content changes. For example, continuing the above: while voice interaction with the primary driver is ongoing, the second identifier 20 is displayed in the display area A2; when the secondary driver is detected uttering an activation voice during that interaction, the voice interaction with the primary driver is stopped, voice interaction with the secondary driver is executed, and the voice interaction content to be displayed, such as the second identifier 20 corresponding to the voice interaction function, is shown in the display area A3.
From the content of the foregoing Example 51, step 202 above, "displaying a second identifier adapted to the voice interaction stage corresponding to the first user", may specifically include:
2021. determining the position of the first user in the vehicle according to the activation voice uttered by the first user;
2022. determining the display position of the second identifier based on that position;
2023. displaying the second identifier at the determined display position.
Example 52, described in conjunction with fig. 9a to 9f: the voice interaction function of the vehicle supports a two-person speaking mode.
The two-person speaking mode is understood to mean that the voice interaction function can interact by voice with at most two occupants simultaneously. When the voice interaction function supports a multi-person speaking mode (such as the two-person speaking mode), the voice interaction content to be displayed for each interaction, such as the second identifier corresponding to the voice interaction function and a text box for presenting text, can be shown in the respective corresponding display areas.
Example 521: a later-triggered voice interaction does not interrupt a preceding voice interaction.
For example, referring to fig. 9a: initially, the primary driver utters an activation voice for the voice interaction function, and during the voice interaction with the primary driver, the second identifier 20 corresponding to the voice interaction function, a text box (not shown in the figure) for displaying text, and other voice interaction content to be displayed are presented in the display area A2 according to the primary driver's spatial position in the vehicle. If during this interaction the secondary driver is detected uttering an activation voice, the vehicle can respond to the voice interaction triggered by the secondary driver while maintaining the voice interaction with the primary driver, and, according to the secondary driver's spatial position in the vehicle, the content to be displayed for the interaction with the secondary driver, such as a second identifier 20 and a text box for displaying text, is likewise shown in the display area A3. Accordingly, the second identifier 20 and related content are shown in both the display area A2 and the display area A3; because the voice interaction objects, progress or content differ, the content shown in the two areas may be the same or different. For example, the style of the second identifier 20 displayed in the display area A2 and in the display area A3 may be the same or different, depending on the voice interaction stage of the corresponding voice interaction.
It should be noted here that, when the voice interaction function supports, for example, the two-person speaking mode, the voice interaction corresponding to the primary driver has the highest priority and cannot be interrupted. For example, assume the vehicle is executing a voice navigation task from its voice interaction with the primary driver (e.g., navigating to Hongqiao Airport); if the secondary driver utters an activation voice and issues a navigation voice for a different airport, the vehicle, in response to that navigation voice, will make the second identifier 20 displayed in the display area A3 present a negative voice broadcast action and the text box 28 present a rejection action. For the secondary driver and the rear passengers, the later-triggered voice interaction has the higher priority and can interrupt the earlier voice interaction session. For example, with continued reference to fig. 9a, assume that voice interactions with the primary driver and the secondary driver are both currently maintained; if a rear passenger is then detected uttering an activation voice, the voice interaction corresponding to the secondary driver is interrupted: the second identifier 20 and related content for the interaction with the primary driver remain in the display area A2, the second identifier 20 and related content shown in the display area A3 disappear, and the second identifier 20 and related content for the interaction with the rear passenger are accordingly shown in the display area A4.
Example 522: when the voice interaction results of the two voice interactions point to the same voice function card, the function card is displayed in each corresponding display area, and the contents of the two displayed identical function cards are also the same.
Referring to fig. 9c, assume that the primary driver first triggers a voice interaction with the vehicle, and that for the result of this interaction the vehicle presents the seat heating card 61 in the display area A2 of the vehicle display screen. The secondary driver subsequently triggers a voice interaction with the vehicle and utters the voice "seat heating, second gear". In response, it is determined that the function card to which this voice interaction result points is the same as the one pointed to by the primary driver's interaction result, i.e., both are seat heating cards. A seat heating card is therefore also presented in the display area A3, and the seat heating card originally presented in the display area A2 is updated, so that the two seat heating cards presented in the display areas A2 and A3 have the same content, namely the content corresponding to the most recent seat heating adjustment action after it has been executed; for example, in both cards, the mark element 611 of the seat at the primary driving position and the mark element 612 of the seat at the secondary driving position both show the heating gear information "2".
Example 523: when the voice interaction results of the two voice interactions point to the same application, the voice interaction result of the later-triggered interaction is executed in the already-opened application.
Referring to fig. 9d, assume that the primary driver first triggers a voice interaction with the vehicle, and that as the result of this interaction the vehicle presents an opened music application in the display area A2. The secondary driver subsequently triggers a voice interaction with the vehicle and utters the voice "play the second song"; in response, the content played according to the "play the second song" instruction is displayed within the music application already presented in the display area A2.
Example 524: while one voice interaction is performing its voice broadcast, if another voice interaction also needs a voice broadcast and has the lower priority, a prompt tone is used in place of the voice broadcast for that other interaction.
Referring to fig. 9e, assume that both the primary driver and the secondary driver are engaged in voice interaction with the vehicle, the secondary driver's interaction having been triggered after the primary driver's. In this case, since the secondary driver's interaction has the lower priority, the voice broadcast of the interaction result with the primary driver is not interrupted; instead, prompt tones replace the voice broadcast of the interaction result with the secondary driver. Specifically, if the result of the interaction with the secondary driver is successful execution, a positive prompt tone is used (e.g., one with a cheerful emotion); conversely, if execution failed or the secondary driver's voice could not be understood, a negative prompt tone is used (e.g., one with a sad emotion).
Example 525: during the voice broadcast for one voice interaction, if another voice interaction also requires a voice broadcast and that interaction has a higher priority, the ongoing voice broadcast is interrupted and the voice broadcast for the higher-priority interaction is executed.
Referring to fig. 9f, assume that the co-driver triggers a voice interaction with the vehicle first, and that, while the vehicle is broadcasting the interaction result for the co-driver, the vehicle, in response to a voice interaction triggered by the primary driver, determines that the interaction result for the primary driver also requires a voice broadcast. In this case, because the primary driver's voice interaction has a higher priority than the co-driver's, the voice broadcast of the interaction result for the co-driver is interrupted and the voice broadcast of the interaction result for the primary driver is performed.
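Taken together, examples 524 and 525 amount to a priority-based arbitration rule for the single voice-broadcast (TTS) channel. A minimal sketch of that rule follows; the Broadcast fields and the tts/tone_player interfaces are assumptions made for illustration, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class Broadcast:
    speaker: str          # e.g. "primary_driver" or "co_driver"
    priority: int         # higher value = higher priority
    text: str             # the voice-broadcast content
    success: bool = True  # whether the underlying instruction succeeded

class BroadcastArbiter:
    """Arbitrates one TTS channel between concurrent voice interactions (sketch)."""

    def __init__(self, tts, tone_player):
        self.tts = tts            # assumed engine with speak(text, on_done) / stop()
        self.tones = tone_player  # assumed player with play(name)
        self.current = None       # broadcast currently being spoken, if any

    def request(self, new):
        if self.current is None:
            self._speak(new)
        elif new.priority > self.current.priority:
            # Example 525: a higher-priority interaction interrupts the broadcast.
            self.tts.stop()
            self._speak(new)
        else:
            # Example 524: a lower-priority interaction gets a prompt tone instead;
            # positive if its instruction succeeded, negative otherwise.
            self.tones.play("positive" if new.success else "negative")

    def _speak(self, broadcast):
        self.current = broadcast
        self.tts.speak(broadcast.text, on_done=self._on_done)

    def _on_done(self):
        self.current = None
```

Under this rule, the co-driver's later, lower-priority result of fig. 9e collapses to a cheerful or sad tone, while the primary driver's later, higher-priority result of fig. 9f preempts the channel.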
Summarizing the content of examples 51 and 52, the method provided by this embodiment may further include:
204. When it is detected that a second user utters an activation voice for the voice interaction function, acquiring the voice interaction mode supported by the voice interaction function;
205. If the voice interaction mode is a one-person interaction mode, displaying the second identifier of the voice interaction function adapted to the voice interaction stage corresponding to the second user, and dismissing the second identifier adapted to the voice interaction stage corresponding to the first user;
206. If the voice interaction mode is a multi-person interaction mode, keeping the second identifier adapted to the voice interaction stage of the first user displayed, and also displaying the second identifier adapted to the voice interaction stage of the second user.
For the one-person interaction mode and the multi-person interaction mode, reference may be made to the one-person speaking mode and the two-person speaking mode described in the above examples, which are not repeated here. Likewise, the specific implementation of displaying the second identifier of the voice interaction function adapted to the voice interaction stage of the second user may refer to the content described above for displaying the second identifier adapted to the voice interaction stage of the first user, which is also not repeated here.
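Steps 204-206 reduce to a small branch on the supported interaction mode. A minimal sketch, in which voice_system and ui and their methods are assumed interfaces introduced only for illustration:

```python
def on_second_user_activation(voice_system, ui, first_user, second_user):
    """Handle a second user's activation voice (steps 204-206, illustrative API)."""
    mode = voice_system.supported_interaction_mode()  # step 204
    stage = voice_system.current_stage(second_user)
    if mode == "one_person":
        # Step 205: only one second identifier may be shown at a time.
        ui.hide_second_identifier(first_user)
        ui.show_second_identifier(second_user, stage)
    else:
        # Step 206 (multi-person): keep the first user's identifier and
        # additionally show one adapted to the second user's stage.
        ui.show_second_identifier(second_user, stage)
```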
Further, if a driver or passenger wishes to exit the voice interaction with the vehicle, this can be achieved in either of the following ways:
The first way is to utter an exit voice, where the exit voice contains a voice-interaction exit phrase such as "exit voice" or "you can go now".
In response to the exit voice uttered by the driver or passenger, the vehicle ends the voice interaction and, correspondingly, the second identifier of the voice interaction function displayed on the vehicle display screen disappears.
The second way is to press an exit key provided on the vehicle.
For example, as shown in fig. 10, when the voice interaction is in the voice broadcast stage, a driver or passenger may end the current round of voice interaction by pressing a mute key provided on the steering wheel of the vehicle; whether this also exits the voice interaction entirely (for example, exiting the full-duplex stage as well) depends on the actual settings and is not limited here.
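Both exit paths can be handled by one dispatcher. A sketch under the assumption of a unified event object; all names, including the example exit phrases, are hypothetical:

```python
EXIT_PHRASES = {"exit voice", "you can go now"}  # example exit words only

def handle_possible_exit(event, voice_system, ui):
    """End the voice interaction on an exit phrase or the steering-wheel mute key."""
    by_voice = event.type == "speech" and event.text in EXIT_PHRASES
    by_key = event.type == "key" and event.key == "mute"
    if by_voice or by_key:
        voice_system.end_current_round()
        ui.hide_second_identifiers()  # the second identifier disappears from the screen
```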
In summary, the technical solution provided by this embodiment has the following beneficial effects:
1. Corresponding second driving information and a first identifier of the vehicle's voice interaction function are displayed on the vehicle according to the collected first driving information, where the first identifier prompts that the voice interaction function is in a state to be activated and its display state is determined based on the second driving information. This solution therefore lets the user visually perceive that the vehicle's voice interaction function is awaiting activation; and because the display state of the first identifier is determined from the second driving information, the diversity and appeal of the identifier display are increased, which helps bring the user a better visual experience.
2. The display manner of the to-be-activated image of the voice interaction function can be determined from all of the display information presented on the vehicle, and the first identifier of the to-be-activated image is then displayed according to the determined display manner; for example, the first identifier may be displayed in association with different types of target information displayed on the vehicle, in different display manners. The first identifier serves as a visual prompt: it can prompt that the voice interaction function on the vehicle is waiting to be activated, can indicate the position of the target information, and so on. This solution thus lets the user visually perceive that the vehicle's voice interaction function is waiting to be woken up; and because the display manner is determined from all of the content displayed on the vehicle, different display content leads to different display manners, which increases the diversity and appeal of the identifier display and helps bring the user a better visual experience.
3. In addition to displaying the first identifier of the voice interaction function on the vehicle (to prompt that the voice interaction function is awaiting activation), this solution can also display, in response to an activation voice uttered by a driver or passenger for the voice interaction function, a second identifier indicating that the voice interaction function has been activated, with the display state of the second identifier matched to the current voice interaction stage between the vehicle and the user. The user can therefore visually perceive both that the vehicle's voice interaction function is waiting to be woken up and, once it is activated, which voice interaction stage is currently in progress, which improves the usability of the voice interaction function.
Fig. 11 illustrates an interface display apparatus, provided in an embodiment of the present application, for a vehicle having display and voice interaction functions. As shown in fig. 11, the interface display apparatus includes an acquisition module 31 and a display module 32, wherein:
the acquisition module 31 is used for acquiring first driving information of the vehicle;
the display module 32 is configured to display, according to the acquired first driving information, second driving information and the first identifier of the voice interaction function on the vehicle;
the first identifier is used for prompting that the voice interaction function is in a state to be activated, and the display state of the first identifier is determined based on the second driving information.
Further, when determining the display state of the first identifier based on the second driving information, the display module 32 is specifically configured to: determine target information in the second driving information, and display the first identifier in association with the target information.
Further, when the target information is the driving state information of the vehicle and the display module 32 displays the first identifier in association with the target information, the display module 32 is specifically configured to: display the first identifier that dynamically changes based on the driving state information of the vehicle.
Further, when displaying the first identifier that dynamically changes based on the driving state information of the vehicle, the display module 32 is specifically configured to: determine a moving direction and a moving speed according to the driving state information of the vehicle, and display the first identifier dynamically changing along the moving direction at the moving speed.
Further, the driving state information of the vehicle includes an image simulating the driving of the vehicle; correspondingly, when displaying the first identifier dynamically changing along the moving direction at the moving speed, the display module 32 is specifically configured to: determine the display position of the first identifier based on the image, and display, at that display position, the first identifier dynamically changing along the moving direction at the moving speed.
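This dynamic display can be read as a per-frame update of the identifier's position. A minimal sketch, assuming a 2-D scene and a vehicle_state object whose attributes are hypothetical accessors derived from the driving state information:

```python
def update_first_identifier(identifier, vehicle_state, dt):
    """Advance the first identifier along the moving direction at the moving speed."""
    direction = vehicle_state.heading_unit_vector()  # moving direction (assumed accessor)
    speed = vehicle_state.display_speed              # moving speed mapped from vehicle speed
    identifier.x += direction.x * speed * dt         # dt: seconds since the last frame
    identifier.y += direction.y * speed * dt
```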
Further, the image comprises a background reflecting the driving environment of the vehicle and a vehicle model simulating the vehicle; correspondingly, when determining the display position of the first identifier based on the image, the display module 32 is specifically configured to:
if no target environment element exists in the background in the image, determining the display position of the first identifier based on the vehicle model;
if a target environment element exists in the background in the image, determining the display position of the first identifier based on the target environment element;
the target environment elements are lanes, road signs, free parking spaces, free charging spots or navigation destinations in the vehicle driving environment.
Further, when determining the display position of the first identifier based on the target environment element, the display module 32 is specifically configured to:
determine the distance between the vehicle model and the target environment element according to navigation data, and determine the display position of the first identifier based on the distance.
Further, when determining the display position of the first identifier based on the distance, the display module 32 is specifically configured to:
when the distance is smaller than a preset distance, the display position of the first identifier is near the target environment element;
and when the distance is greater than or equal to the preset distance, the display position of the first identifier is far away from the target environment element.
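The positioning rules of the last few paragraphs compose into a single decision function. A sketch follows; find_target_element, nearby_anchor and the other accessors are illustrative assumptions, not names from the embodiment:

```python
def first_identifier_anchor(image, nav, preset_distance):
    """Choose where to draw the first identifier inside the simulated-driving image."""
    element = image.find_target_element()  # lane, road sign, free parking space, ...
    if element is None:
        return image.vehicle_model.anchor()  # no target element: anchor to the model
    distance = nav.distance_to(element)      # determined from navigation data
    if distance < preset_distance:
        return element.nearby_anchor()       # close: display near the element
    return element.distant_anchor()          # otherwise: display away from the element
```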
Further, the target information is parking state information of the vehicle. Correspondingly, when displaying the first identifier in association with the target information, the display module 32 is specifically configured to: display, on the vehicle, the first identifier that guides the vehicle to park.
Further, the target information is window information. Correspondingly, when the display module 32 displays the first identifier in association with the target information, it is specifically configured to: displaying the first identifier with an effect of highlighting the window information.
Further, the target information is multimedia information. Correspondingly, when displaying the first identifier in association with the target information, the display module 32 is specifically configured to: display the first identifier that interacts with the multimedia information.
Further, the display module 32 is further configured to display the first identifier statically and semi-transparently on the vehicle when the target information is absent in the second driving information.
Further, the apparatus provided in this embodiment further includes an adding module. The adding module is used for:
if the vehicle is in a manual driving mode, add a first element displayed in association with the first identifier;
if the vehicle is in an auxiliary driving mode, add a second element displayed in association with the first identifier;
and if the vehicle is in an automatic driving mode, add a third element displayed in association with the first identifier.
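The adding module is essentially a lookup from driving mode to associated element; sketched below with placeholder element names (the three elements and the attach method are assumptions for illustration):

```python
MODE_TO_ELEMENT = {  # the three elements are placeholders
    "manual": "first_element",
    "auxiliary": "second_element",
    "automatic": "third_element",
}

def add_mode_element(identifier, driving_mode):
    """Attach the element displayed in association with the first identifier."""
    identifier.attach(MODE_TO_ELEMENT[driving_mode])
```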
Further, the display module 32 is further configured to determine, in response to an activation voice uttered by a first user for the voice interaction function, the voice interaction stage currently corresponding to the first user, and to display a second identifier adapted to that voice interaction stage, wherein the second identifier is used for prompting that the voice interaction function is in an activated state.
Further, when displaying the second identifier adapted to the voice interaction stage corresponding to the first user, the display module 32 is specifically configured to: determine the orientation of the first user within the vehicle according to the activation voice, determine the display position of the second identifier based on the orientation, and display the second identifier at that display position.
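Determining the display position of the second identifier can be sketched as sound-source localization followed by a seat-to-display-area lookup; the mapping below and the mic_array interface are assumptions introduced for illustration:

```python
SEAT_TO_AREA = {  # illustrative mapping of cabin positions to display areas
    "primary_driver": "A2",
    "co_driver": "A3",
}

def place_second_identifier(mic_array, ui, stage):
    """Show the second identifier near whoever uttered the activation voice."""
    seat = mic_array.locate_speaker()          # orientation of the user in the cabin
    area = SEAT_TO_AREA[seat]                  # display position derived from it
    ui.show_second_identifier_at(area, stage)  # adapted to the interaction stage
```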
Further, the display module 32 is further configured to, upon detecting that the first user utters a voice instruction, display a text box on one side of the second identifier and present the text corresponding to the voice instruction in the text box.
Further, the display module 32 is also configured to:
when detecting that a second user utters an activation voice for the voice interaction function, acquire the voice interaction mode supported by the voice interaction function;
if the voice interaction mode is a one-person interaction mode, display the second identifier of the voice interaction function adapted to the voice interaction stage corresponding to the second user, and dismiss the second identifier adapted to the voice interaction stage corresponding to the first user;
and if the voice interaction mode is a multi-person interaction mode, keep displaying the second identifier adapted to the voice interaction stage corresponding to the first user, and display the second identifier adapted to the voice interaction stage corresponding to the second user.
Here, it should be noted that: the interface display device provided in the foregoing embodiment may implement the technical solution described in the interface display method embodiment shown in fig. 1, and the principle of specifically implementing each module or unit may refer to corresponding content in the interface display method embodiment shown in fig. 1, which is not described herein again.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 12, the electronic device includes a memory 81 and a processor 82, wherein the memory 81 is configured to store one or more computer instructions, and the processor 82, coupled to the memory 81, is configured to execute the one or more computer instructions (e.g., computer instructions implementing data storage logic) so as to implement the steps of the interface display method provided in the embodiments of the present application.
The memory 81 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Further, as shown in fig. 12, the electronic device may further include: communication components 83, display 84, power components 85, and audio components 86, among other components. Only some of the components are schematically shown in fig. 12, and the electronic device is not meant to include only the components shown in fig. 12.
In specific implementation, the electronic device may be a vehicle, or a combination of a vehicle and a computing platform, which is not specifically limited here. Accordingly, the processor 82 described above may be a processor in the vehicle, or a combination of a processor in the vehicle and a corresponding processor of the computing platform, where the computing platform may include one or more processors. A processor is a circuit with signal processing capability. In one implementation, the processor may be a circuit with instruction reading and executing capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU, which may be understood as a kind of microprocessor), or a digital signal processor (DSP). In another implementation, the processor may implement certain functions through the logical relationships of hardware circuits, which may be fixed or reconfigurable, for example a hardware circuit implemented as an application-specific integrated circuit (ASIC) or a programmable logic device (PLD) such as a field-programmable gate array (FPGA). For a reconfigurable hardware circuit, the process by which the processor loads a configuration file to configure the hardware circuit may be understood as the process by which the processor loads instructions to implement the functions of some or all of the above units. Furthermore, the processor may be a hardware circuit designed for artificial intelligence, which may be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU). In addition, the computing platform may further include a memory for storing instructions, and some or all of the processors may call the instructions from the memory and execute them to implement the corresponding functions.
It should be understood that the relevant operations of the interface display method provided in this embodiment may be executed by a single processor or distributed across multiple processors; this is not specifically limited in the embodiments of the present application.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a computer, can implement the steps or functions of the interface display method provided in the foregoing embodiments.
FIG. 13 schematically illustrates a block diagram of a computer program product provided by the present application. The computer program product comprises computer programs/instructions 91 which, when executed by a processor (such as the processor 82 shown in fig. 12), implement the steps of the interface display method described in the above embodiments of the present application.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the various embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (20)

1. An interface display method adapted to a vehicle having display and voice interaction functions, the method comprising:
collecting first driving information of a vehicle;
displaying second driving information and a first identifier of the voice interaction function on the vehicle according to the collected first driving information;
wherein the first identifier is used for prompting that the voice interaction function is in a state to be activated, and the display state of the first identifier is determined based on the second driving information.
2. The method of claim 1, wherein determining the display state of the first identifier based on the second driving information comprises:
determining target information in the second driving information;
and displaying the first identifier in association with the target information.
3. The method according to claim 2, wherein, when the target information is driving state information of the vehicle,
displaying the first identifier in association with the target information includes:
displaying the first identifier that dynamically changes based on the driving state information of the vehicle.
4. The method of claim 3, wherein displaying the first identifier that dynamically changes based on the driving state information of the vehicle comprises:
determining a moving direction and a moving speed according to the driving state information of the vehicle;
displaying the first identifier dynamically changing along the moving direction based on the moving speed.
5. The method of claim 4, wherein the driving state information of the vehicle comprises an image simulating driving of the vehicle; and displaying the first identifier dynamically changing along the moving direction based on the moving speed includes:
determining a display position of the first identifier based on the image;
displaying, at the display position of the first identifier, the first identifier dynamically changing along the moving direction at the moving speed.
6. The method of claim 5, wherein the image comprises a background reflecting a driving environment of the vehicle and a vehicle model simulating the vehicle;
and determining the display position of the first identifier based on the image includes:
if no target environment element exists in the background in the image, determining the display position of the first identifier based on the vehicle model;
if a target environment element exists in the background in the image, determining the display position of the first identifier based on the target environment element;
wherein the target environment elements are lanes, road signs, free parking spaces, free charging spots or navigation destinations in the vehicle driving environment.
7. The method of claim 6, wherein determining the display position of the first identifier based on the target environment element comprises:
determining the distance between the vehicle model and the target environment element according to navigation data;
and determining the display position of the first identifier based on the distance.
8. The method of claim 7, wherein determining the display position of the first identifier based on the distance comprises:
when the distance is smaller than a preset distance, the display position of the first identifier is near the target environment element;
and when the distance is greater than or equal to the preset distance, the display position of the first identifier is far away from the target environment element.
9. The method according to any one of claims 2 to 8, wherein the target information is parking state information of the vehicle;
and displaying the first identifier in association with the target information includes:
displaying, on the vehicle, the first identifier that guides the vehicle to park.
10. The method according to any one of claims 2 to 8, wherein the target information is window information; and displaying the first identifier in association with the target information includes:
displaying the first identifier with an effect of highlighting the window information.
11. The method according to any one of claims 2 to 8, wherein the target information is multimedia information; and displaying the first identifier in association with the target information includes:
displaying the first identifier that interacts with the multimedia information.
12. The method of any of claims 2 to 8, further comprising:
if the second driving information does not contain the target information, displaying the first identifier statically and semi-transparently on the vehicle.
13. The method of any one of claims 1 to 8, further comprising:
if the vehicle is in a manual driving mode, adding a first element displayed in association with the first identifier;
if the vehicle is in an auxiliary driving mode, adding a second element displayed in association with the first identifier;
and if the vehicle is in an automatic driving mode, adding a third element displayed in association with the first identifier.
14. The method of any one of claims 1 to 8, further comprising:
in response to an activation voice uttered by a first user for the voice interaction function, determining the voice interaction stage corresponding to the first user;
displaying a second identifier adapted to the voice interaction stage corresponding to the first user;
wherein the second identifier is used for prompting that the voice interaction function is in an activated state.
15. The method of claim 14, wherein displaying the second identifier adapted to the voice interaction stage corresponding to the first user comprises:
determining the orientation of the first user within the vehicle according to the activation voice;
determining a display position of the second identifier based on the orientation;
and displaying the second identifier at the display position of the second identifier.
16. The method of claim 14, further comprising:
when it is detected that the first user utters a voice instruction, displaying a text box on one side of the second identifier, and presenting the text corresponding to the voice instruction in the text box.
17. The method of claim 14, further comprising:
when detecting that a second user utters an activation voice for the voice interaction function, acquiring the voice interaction mode supported by the voice interaction function;
if the voice interaction mode is a one-person interaction mode, displaying a second identifier of the voice interaction function adapted to the voice interaction stage corresponding to the second user, and dismissing the second identifier adapted to the voice interaction stage corresponding to the first user;
and if the voice interaction mode is a multi-person interaction mode, keeping displaying the second identifier adapted to the voice interaction stage corresponding to the first user, and displaying the second identifier adapted to the voice interaction stage corresponding to the second user.
18. An electronic device, comprising a memory and a processor, wherein
the memory is configured to store one or more computer programs;
and the processor, coupled with the memory, is configured to execute the one or more computer programs stored in the memory to implement the interface display method of any one of claims 1 to 17.
19. A vehicle characterized by comprising a vehicle body and the electronic apparatus according to claim 18, the electronic apparatus being provided on the vehicle body.
20. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the interface display method according to any one of claims 1 to 17.
CN202211498038.7A 2022-11-28 2022-11-28 Interface display method, electronic device, vehicle and computer program product Active CN115534850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211498038.7A CN115534850B (en) 2022-11-28 2022-11-28 Interface display method, electronic device, vehicle and computer program product

Publications (2)

Publication Number Publication Date
CN115534850A true CN115534850A (en) 2022-12-30
CN115534850B CN115534850B (en) 2023-05-16

Family

ID=84722464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211498038.7A Active CN115534850B (en) 2022-11-28 2022-11-28 Interface display method, electronic device, vehicle and computer program product

Country Status (1)

Country Link
CN (1) CN115534850B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107921961A (en) * 2015-08-07 2018-04-17 奥迪股份公司 The method and motor vehicle of auxiliary are provided in terms of performing motor-driven vehicle going in high timeliness for driver
CN110231863A (en) * 2018-03-06 2019-09-13 阿里巴巴集团控股有限公司 Voice interactive method and mobile unit
CN108735215A (en) * 2018-06-07 2018-11-02 爱驰汽车有限公司 Interactive system for vehicle-mounted voice, method, equipment and storage medium
CN111824132A (en) * 2020-07-24 2020-10-27 广州小鹏车联网科技有限公司 Parking display method and vehicle
CN112309395A (en) * 2020-09-17 2021-02-02 广汽蔚来新能源汽车科技有限公司 Man-machine conversation method, device, robot, computer device and storage medium
CN115148200A (en) * 2021-03-30 2022-10-04 上海擎感智能科技有限公司 Voice interaction method and system for vehicle, electronic equipment and storage medium
CN113104030A (en) * 2021-05-19 2021-07-13 广州小鹏汽车科技有限公司 Interaction method and device based on automatic driving
CN113782020A (en) * 2021-09-14 2021-12-10 合众新能源汽车有限公司 In-vehicle voice interaction method and system
CN113851126A (en) * 2021-09-22 2021-12-28 思必驰科技股份有限公司 In-vehicle voice interaction method and system
CN115273525A (en) * 2022-05-17 2022-11-01 岚图汽车科技有限公司 Parking space mapping display method and system
CN115158340A (en) * 2022-08-09 2022-10-11 中国重汽集团济南动力有限公司 Driving assistance system, and control method, device, and medium therefor

Also Published As

Publication number Publication date
CN115534850B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
Weng et al. Conversational in-vehicle dialog systems: The past, present, and future
CN104838335B Interaction and management of devices using gaze detection
EP3508381B1 (en) Moodroof for augmented media experience in a vehicle cabin
US20110022393A1 (en) Multimode user interface of a driver assistance system for inputting and presentation of information
CN108099790 Driving assistance system based on augmented reality head-up display and multi-screen voice interaction
CN110211586A (en) Voice interactive method, device, vehicle and machine readable media
CN111661068B (en) Agent device, method for controlling agent device, and storage medium
WO2022062491A1 (en) Vehicle-mounted smart hardware control method based on smart cockpit, and smart cockpit
CN113302664A (en) Multimodal user interface for a vehicle
JP7275058B2 (en) Experience Delivery System, Experience Delivery Method and Experience Delivery Program
CN107310476A Eye-movement-assisted voice interaction method and system based on vehicle-mounted HUD
JP2020061642A (en) Agent system, agent control method, and program
CN112061059B (en) Screen adjusting method and device for vehicle, vehicle and readable storage medium
CN112261432B (en) Live broadcast interaction method and device in vehicle-mounted environment, storage medium and electronic equipment
JP2008026653A (en) On-vehicle navigation device
CN112525214A (en) Interaction method and device for map card, vehicle and readable medium
CN109976515B (en) Information processing method, device, vehicle and computer readable storage medium
CN108762614A (en) Middle control display screen interface switching method, device, storage medium and middle control display screen
CN115329059A Electronic manual retrieval method and device, and in-vehicle head unit
WO2021258671A1 (en) Assisted driving interaction method and apparatus based on vehicle-mounted digital human, and storage medium
Chen et al. Eliminating driving distractions: Human-computer interaction with built-in applications
CN115534850B (en) Interface display method, electronic device, vehicle and computer program product
Nakrani Smart car technologies: a comprehensive study of the state of the art with analysis and trends
Hofmann et al. Development of speech-based in-car HMI concepts for information exchange internet apps
Singh Evaluating user-friendly dashboards for driverless vehicles: Evaluation of in-car infotainment in transition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant