CN117157504A - Proactively activating automated assistant driving modes for varying degrees of travel detection confidence - Google Patents

Proactively activating automated assistant driving modes for varying degrees of travel detection confidence

Info

Publication number
CN117157504A
Authority
CN
China
Prior art keywords
assistant
user
computing device
vehicle
presented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180096633.5A
Other languages
Chinese (zh)
Inventor
Effie Goenawan
David Robichaud
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/533,380 external-priority patent/US20230062489A1/en
Application filed by Google LLC filed Critical Google LLC
Priority claimed from PCT/US2021/061237 external-priority patent/WO2023027751A1/en
Publication of CN117157504A publication Critical patent/CN117157504A/en
Pending legal-status Critical Current

Abstract

Embodiments set forth herein relate to an automated assistant capable of operating according to various driving optimization modes depending on the confidence with which a user is predicted to be traveling in a vehicle. For example, the automated assistant can automatically operate according to a driving optimization mode when the prediction that the user is traveling carries a high confidence. Alternatively, when the prediction that the user is traveling carries a lower confidence, the automated assistant may not operate according to the driving optimization mode until the user explicitly elects to transition the automated assistant to the driving optimization mode. When the user selects the driving optimization mode, a driving mode GUI can be presented with a navigation interface, which may include directions to the user's predicted destination, and/or another interface with content suggestions for the user.

Description

Proactively activating automated assistant driving modes for varying degrees of travel detection confidence
Background
Humans can engage in human-to-computer dialogs with interactive software applications referred to herein as "automated assistants" (also referred to as "digital agents," "chatbots," "interactive personal assistants," "intelligent personal assistants," "assistant applications," "conversational agents," etc.). For example, a human (who may be referred to as a "user" when interacting with an automated assistant) can provide commands and/or requests to the automated assistant using spoken natural language input (i.e., utterances), which in some cases can be converted to text and then processed, and/or by providing textual (e.g., typed) natural language input.
In various situations, such as when a user is traveling in a vehicle, automated assistants and other applications can be accessed through portable computing devices, such as cell phones and tablet computers. When a driving mode is provided by a particular application (e.g., an automated assistant application), the driving mode may need to be initialized directly by the user via input at their computing device. However, this can be inconvenient or unsafe when the user is already driving their vehicle. For example, users driving their vehicles may wish to access their automated assistants while receiving navigation instructions from a navigation application. Unfortunately, many automated assistants may not be able to respond to requests during navigation without interrupting the presentation of navigation instructions. Interrupting navigation in this manner can be dangerous for the user, as the user may attempt to further interact with the automated assistant and/or identify any navigation instructions the user may have missed. Furthermore, when the automated assistant is unaware that the user is driving, assistant responses during driving and/or navigation can be distracting.
Disclosure of Invention
The embodiments set forth herein relate to an automated assistant that proactively determines whether a user is traveling in a vehicle and thereafter provides driving-optimized assistant responses accordingly. In some embodiments, a user can drive their vehicle and submit queries to an automated assistant accessible through the user's cell phone. The automated assistant may proactively detect that the user is traveling in the vehicle before the user provides a query. Thus, in response to a query from the user, the automated assistant may generate a response according to the determined context in which the user is driving their vehicle. For example, when the query is a spoken utterance such as "Assistant, Acme assignment," the automated assistant can respond with navigation instructions to the nearest store named "Acme assignment." Thus, when generating a response to a query, the automated assistant may take into account that the user may be traveling in a vehicle and/or in a driving mode. For example, a user who knows the general vicinity of "Acme assignment" but does not know the exact directions may provide the query to cause the automated assistant to provide detailed navigation instructions en route. On the other hand, when the user provides this query when the user is not driving, the automated assistant may provide internet search results for "Acme assignment," which can include links to "Acme assignment" websites.
In other words, in response to determining that the user may be driving and/or in response to the user device being in a driving mode, embodiments disclosed herein can bias the natural language understanding and/or fulfillment performed on a user request toward intent(s) and/or fulfillment(s) that are safer and/or more advantageous while driving. For example, biasing toward a "navigation" intent in a driving mode can result in determining a navigation intent for the above-described "Assistant, Acme assignment" user request (and providing navigation instructions in response to the request), while not biasing toward a navigation intent (e.g., when not in a driving mode) can result in determining a general search intent for the same request (and providing general information about "Acme assignment" in response). Thus, in these and other ways, an automated assistant's response to a request can be dynamically adjusted depending on whether the request originates from a user device that is in a driving mode and/or that is detected as traveling in a vehicle.
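By way of a non-limiting illustration only, such biasing might be sketched as follows (the function name, the bias weight, and the example scores are assumptions for illustration; the disclosure does not specify an implementation):

# Minimal sketch of driving-mode intent biasing; all names and values are hypothetical.
NAVIGATION_BOOST = 0.2  # assumed bias weight applied only in driving mode

def rank_intents(intent_scores, driving_mode):
    scores = dict(intent_scores)
    if driving_mode:
        scores["navigation"] = scores.get("navigation", 0.0) + NAVIGATION_BOOST
    return max(scores, key=scores.get)

# A request like "Assistant, Acme assignment" might score similarly for search
# and navigation; in driving mode the bias tips the result toward navigation.
print(rank_intents({"navigation": 0.44, "web_search": 0.47}, driving_mode=True))   # navigation
print(rank_intents({"navigation": 0.44, "web_search": 0.47}, driving_mode=False))  # web_search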
In some implementations, an automated assistant can detect when a user is traveling in a vehicle with their computing device and, in response, cause a display interface of the computing device to present a selectable assistant graphical user interface (GUI) element. In some implementations, the prediction that the user is traveling can be characterized by a confidence score determined by the automated assistant. When the user selects the selectable assistant GUI element (e.g., via touch input or spoken utterance), the automated assistant can cause an assistant driving mode GUI to be presented at the display interface. When the confidence score meets a particular confidence score threshold, the automated assistant can operate according to the driving mode, processing input and/or generating output in a driving-optimized manner, even though the user may not have selected the selectable assistant GUI element. In some embodiments, when the confidence score meets the particular confidence score threshold and the user dismisses (e.g., swipes away) the selectable assistant GUI element, the automated assistant can continue to operate according to the driving mode. However, when the confidence score does not meet the particular confidence score threshold, the automated assistant may not operate according to the driving mode and/or present the assistant driving mode GUI until the user selects the selectable assistant GUI element.
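A loose sketch of this activation logic follows (the threshold value and function names are illustrative assumptions):

# Sketch of confidence-gated driving mode activation; values are hypothetical.
CONFIDENCE_THRESHOLD = 0.7  # the disclosure does not fix a numeric threshold

def assistant_state(confidence, element_selected):
    if element_selected:
        # A selection always yields both the driving mode and its GUI.
        return ("driving_mode", "assistant_driving_mode_gui")
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: driving mode runs without a selection and persists
        # even if the user swipes the selectable element away.
        return ("driving_mode", None)
    # Low confidence: neither driving mode nor the GUI until a selection.
    return ("non_driving_mode", None)

print(assistant_state(0.9, element_selected=False))  # ('driving_mode', None)
print(assistant_state(0.4, element_selected=False))  # ('non_driving_mode', None)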
The assistant driving mode GUI can provide one or more options for assisting a user during their trip, viewing notifications, and/or otherwise controlling one or more operations of the computing device. For example, the assistant driving mode GUI can provide an indication of a predicted destination for the user and allow the user to elect to be provided navigation instructions to the predicted destination. Alternatively or additionally, the assistant driving mode GUI can provide indications of communication notifications (e.g., incoming messages) and/or media that the user is predicted to want to view during their travel in the vehicle. The assistant driving mode GUI can be presented with characteristics, such as font size and color, that are driving-optimized and will thus mitigate distraction of the user while driving. In other words, the assistant driving mode GUI can include different feature(s) than a non-driving mode GUI, and these features can reduce the amount of cognitive effort required to interact with the driving mode GUI.
In some embodiments, when it is determined that the user is traveling in a vehicle, the automated assistant can proactively adjust various interfaces of the computing device for driving optimization, even if the user does not select the selectable assistant GUI element. For example, when presenting the selectable assistant GUI element in response to detecting that the user is driving a vehicle, the automated assistant can also present certain notifications in a driving optimization format. For example, a "missed call" notification and/or an "unread text" notification can be presented at a display interface of the computing device with a larger font size and/or larger area, rather than with the font size and/or area that would be used when the user is not predicted to be driving. Alternatively or additionally, selectable suggestions can also be proactively presented in a driving optimization format to provide shortcuts to content and/or applications that the user is predicted to be likely to access in this context. For example, a driving-optimized selectable suggestion can correspond to a podcast that the user prefers to listen to while driving. Even though the user may not select the selectable assistant GUI element, the automated assistant can still present the selectable suggestions in a driving optimization format.
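For instance, the notification characteristics described above might be modeled as follows (the specific sizes and field names are assumptions, not values from the disclosure):

# Sketch of driving-optimized versus default notification characteristics.
def notification_style(driving_optimized):
    if driving_optimized:
        # Larger font and area, optionally rendered audibly, to reduce the
        # attention a driving user must give the display.
        return {"font_size_sp": 28, "display_area": "expanded", "audible": True}
    return {"font_size_sp": 14, "display_area": "compact", "audible": False}

print(notification_style(True))
print(notification_style(False))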
In some implementations, the user may provide an input to remove the selectable assistant GUI element from the display interface of the computing device when it is determined or predicted that the user is traveling in the vehicle. In response, the automated assistant may operate in a light driving optimization mode in which automated assistant interactions are driving-optimized, but other features of the computing device exhibit their original characteristics. For example, in the light driving optimization mode, the home screen of the computing device may not be presented in a driving optimization format and may not include the selectable assistant GUI element. Alternatively or additionally, in the light driving optimization mode, applications other than the automated assistant can exhibit characteristics different from those they would exhibit in the driving optimization mode. However, in the light driving optimization mode, the automated assistant can provide driving-optimized notifications, responses, and/or other content to the user to mitigate hazards of interacting with the computing device while traveling in the vehicle.
In some embodiments, the user can choose to disable the driving optimization mode and/or the light driving optimization mode when the user is not driving, even if the automated assistant predicts that the user is driving. An indication that the user is not driving, as provided explicitly by the user, can be used to further train the automated assistant. For example, in response to a user explicitly indicating that they are not traveling (e.g., when the user provides an input such as "Assistant, cancel driving mode" or "Assistant, I am not driving"), one or more indications used to predict that the user is traveling in a vehicle (e.g., the user accessing a particular application, a calendar event, etc.) can be assigned a lower priority for subsequent predictions. Thereafter, the lower-priority indications can affect a confidence score of a subsequent prediction of whether the user is traveling in the vehicle. In this way, when such indications appear again, the user may not need to repeat the input to deactivate the driving mode, thereby reducing the amount of user input handled by the automated assistant and also conserving computing resources of the automated assistant.
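One minimal way to sketch this down-weighting, assuming a simple additive indicator model (the indicator names, weights, and decay factor are all hypothetical):

# Sketch of lowering the priority of indicators after an explicit cancellation.
indicator_weights = {"vehicle_bluetooth": 0.5, "navigation_app": 0.4, "calendar_event": 0.2}

def travel_confidence(active_indicators):
    # Additive model capped at 1.0; the disclosure does not specify a formula.
    return min(1.0, sum(indicator_weights[i] for i in active_indicators))

def on_user_cancels_driving_mode(active_indicators, decay=0.5):
    # Down-weight whichever indicators produced the false prediction.
    for name in active_indicators:
        indicator_weights[name] *= decay

print(round(travel_confidence(["navigation_app", "calendar_event"]), 2))  # 0.6
on_user_cancels_driving_mode(["navigation_app", "calendar_event"])
print(round(travel_confidence(["navigation_app", "calendar_event"]), 2))  # 0.3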
The above description is provided as an overview of some embodiments of the present disclosure. These and other embodiments are described in more detail below.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by one or more processors, e.g., a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and/or a Tensor Processing Unit (TPU), to perform methods such as one or more of the methods described above and/or elsewhere herein. Other embodiments may include a system of one or more computers comprising one or more processors operable to execute stored instructions to perform methods such as one or more methods described above and/or elsewhere herein.
It should be understood that all combinations of the foregoing concepts and additional concepts described in more detail herein are considered a part of the subject matter disclosed herein. For example, all combinations of claimed subject matter appearing at the end of this disclosure are considered part of the subject matter disclosed herein.
Drawings
FIGS. 1A, 1B, 1C, and 1D illustrate views of a user traveling in a vehicle, where the user's personal computing device provides access to an automated assistant that can adjust its functionality based on whether the user is predicted to be traveling in the vehicle.
FIG. 2 illustrates a system that provides an automated assistant that facilitates certain driving optimization functions based on confidence that a predicted user is traveling in a vehicle.
FIG. 3 illustrates a method for proactively operating an automated assistant in an assistant driving optimization mode and providing additional driving optimization features when a user explicitly elects to operate in the driving optimization mode.
FIG. 4 is a block diagram of an example computer system.
Detailed Description
FIGS. 1A, 1B, 1C, and 1D illustrate views 100, 120, 140, and 160 of a user 102 traveling in a vehicle 108, where the user's personal computing device 104 provides access to an automated assistant that adjusts its functionality based on whether the user 102 is predicted to be traveling in the vehicle. For example, the user 102 can enter the vehicle 108 with their computing device 104, which can provide access to an automated assistant. When the computing device 104 is within a range of a vehicle computing device of the vehicle 108 for connecting with the vehicle computing device, the automated assistant can predict that the user 102 is traveling in the vehicle 108. For example, the computing device 104 can connect to the vehicle computing device via a wireless or wired communication protocol. When the automated assistant predicts, based on the connection, that the user 102 is traveling or is about to travel in the vehicle 108, the automated assistant can cause a selectable assistant GUI element 110 to be presented at a display interface 106 of the computing device 104. When the user 102 selects the selectable assistant GUI element 110, the automated assistant can cause an assistant driving mode GUI to be presented at the display interface of the computing device 104 and/or a display interface of the vehicle computing device. In some embodiments, the automated assistant can operate automatically according to the assistant driving mode when the automated assistant predicts that the user 102 is traveling or is about to travel based on the aforementioned connection and/or user-initiated access to a navigation application.
For example, and as illustrated in view 120 of FIG. 1B, before the user 102 selects the selectable assistant GUI element 110, the automated assistant and/or one or more other applications of the computing device can operate according to an assistant driving mode. When the automated assistant operates according to the assistant driving mode, presentation of notifications and/or other operations can be performed in a manner optimized for driving. For example, when the user 102 receives an incoming message and the computing device 104 is operating according to the assistant driving mode, the incoming message can be used to generate an assistant suggestion 124. The assistant suggestion 124 can be presented with one or more characteristics that would be different if the assistant suggestion 124 were presented in a non-driving mode. For example, the one or more characteristics can include a size of text of the assistant suggestion 124, a style of the text, whether audible characteristics correspond to the assistant suggestion 124 (e.g., the automated assistant audibly rendering the message notification), an area of the display interface 106 occupied by the assistant suggestion 124, and/or any other characteristic that can be associated with an application notification.
In some implementations, a countdown timer can be presented at the display interface 106 to indicate when the selectable assistant GUI element 110 will be removed from the display interface 106. In some embodiments, a countdown timer can be presented at the display interface 106 to indicate when the assistant driving mode GUI 144 will be presented at the display interface 106 (assuming the user 102 does not dismiss the selectable assistant GUI element 110 before the countdown timer expires). In some embodiments, the action performed when the duration of the countdown timer expires can be based on the confidence score of the prediction that the user is traveling. For example, expiration of the timer can cause the assistant driving mode GUI 144 to be presented when the confidence score meets a confidence score threshold. However, when the confidence score does not meet the confidence score threshold, expiration of the timer may not cause the assistant driving mode GUI 144 to be presented, but the automated assistant may still operate in the driving optimization mode.
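A terse sketch of this expiration behavior, under the same assumed threshold as in the earlier sketch:

# Sketch of the timer-expiration branch; the threshold value is an assumption.
def on_countdown_expired(confidence, threshold=0.7):
    if confidence >= threshold:
        return "present_assistant_driving_mode_gui"
    # Below the threshold the GUI is not auto-presented, but the automated
    # assistant can still operate in its driving optimization mode.
    return "remove_element_keep_driving_optimized_assistant"

print(on_countdown_expired(0.8))
print(on_countdown_expired(0.5))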
In some implementations, the duration of the countdown timer can be based on a confidence and/or confidence score related to the prediction that the user 102 is traveling in the vehicle. For example, the duration may be longer when the prediction has a greater confidence than when the prediction has a lesser confidence. In some embodiments, the automated assistant may operate according to the assistant driving mode regardless of whether the user 102 selects the selectable assistant GUI element 110 within the duration of the countdown timer. However, when the user 102 does not select the selectable assistant GUI element 110 within a threshold duration, the assistant driving mode GUI 144 is not caused to be presented at the display interface 106. In some implementations, the score can be compared to a score threshold to determine whether to automatically operate in the assistant driving mode or to wait for the user 102 to select the selectable assistant GUI element 110 before operating in the assistant driving mode. For example, when the score meets the score threshold, the automated assistant and/or computing device 104 may operate according to the assistant driving mode. Otherwise, when the score does not meet the score threshold, the automated assistant can cause the selectable assistant GUI element 110 to be presented at the display interface 106. In some implementations, the score threshold can be set by the user 102, the automated assistant, and/or any other application that can be associated with the automated assistant.
Upon receiving a selection of the selectable assistant GUI element 110 from the user 102 (e.g., via the user's hand 126 and/or other input), the automated assistant can cause an assistant driving mode GUI 144 to be presented at the display interface 106, as illustrated in FIG. 1C. One or more characteristics and/or features of the assistant driving mode GUI 144 can be based on the data used to predict that the user 102 is traveling in the vehicle 108. For example, when the user 102 is predicted to be traveling based on the connection between the computing device 104 and the vehicle computing device, the assistant driving mode GUI 144 can be presented with a navigation interface 146 and other suggested content. The other suggested content can be, but is not limited to, an assistant suggestion 148 (e.g., a first selectable element) for opening a media streaming application and/or an assistant suggestion 150 (e.g., a second selectable element) for opening a messaging application. The navigation interface 146 can be presented with detailed information about a route to a destination the user 102 is predicted to be traveling toward. For example, contextual data and/or other data available to the computing device 104 and/or the automated assistant can be processed to predict a destination to which the user 102 may be traveling. When a particular destination is identified, and with prior permission from the user 102, a route from the current location of the user 102 to the predicted destination can be made available to the user 102 via the navigation interface 146. For example, and as illustrated in FIG. 1C, the user 102 can be predicted to be heading to "Ear-X-Y-Z" based on context data (e.g., time of day, recent interactions between the user 102 and the automated assistant, vehicle state data communicated to the automated assistant, application data that the automated assistant is permitted to access, etc.).
In some embodiments, the automated assistant can cause the navigation interface to be presented with the assistant driving mode GUI 144 without initially causing the selectable assistant GUI element 110 to be presented at the display interface 106. For example, when the confidence score of the travel prediction meets a first score threshold, the automated assistant can cause the selectable assistant GUI element 110 to be presented at the display interface 106. However, when the confidence score meets a second score threshold, the automated assistant can cause the assistant driving mode GUI 144 of FIG. 1C to be presented without initially presenting the selectable assistant GUI element 110. In some implementations, a confidence score that satisfies the second score threshold can be based on a determination that the computing device 104 is connected to the vehicle computing device while a navigation application is accessed at the computing device 104. Alternatively, the first score threshold can be met when the navigation application is accessed at the computing device 104 and/or one or more sensors of the computing device 104 indicate (e.g., based on changes in speed, acceleration, altitude, etc.) that the user 102 is traveling in a vehicle.
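These two tiers might be sketched as follows (the numeric scores and thresholds are assumptions; only their relative ordering reflects the description above):

# Sketch of the two-tier threshold behavior; all values are hypothetical.
FIRST_THRESHOLD = 0.5   # enough to present the selectable assistant GUI element
SECOND_THRESHOLD = 0.8  # enough to present the assistant driving mode GUI directly

def choose_surface(connected_to_vehicle, navigation_app_open, sensor_motion):
    if connected_to_vehicle and navigation_app_open:
        score = 0.9  # strongest combined signal
    elif navigation_app_open or sensor_motion:
        score = 0.6  # weaker single signal
    else:
        score = 0.1
    if score >= SECOND_THRESHOLD:
        return "assistant_driving_mode_gui"
    if score >= FIRST_THRESHOLD:
        return "selectable_assistant_gui_element"
    return "no_proactive_surface"

print(choose_surface(True, True, False))   # assistant_driving_mode_gui
print(choose_surface(False, True, False))  # selectable_assistant_gui_element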
When the automated assistant operates according to the assistant driving mode, inputs to the automated assistant can be processed based on the current context in which the user 102 is traveling in the vehicle 108. For example, when the user 102 is not traveling in the vehicle 108 and is at home, a spoken utterance 142 to an automated assistant such as "Assistant, Doo-Wop Store" can be treated as an internet search for a website or a definition. However, when the user 102 is predicted to be traveling in the vehicle 108, the spoken utterance 142 can be processed based on the context in which the user 102 is traveling in the vehicle 108. For example, when the user 102 provides the spoken utterance 142, the automated assistant can process the spoken utterance 142 as a request to find directions to a particular destination specified in the content of the spoken utterance 142. The results of this processing can be presented in the assistant driving mode GUI 144, as illustrated in view 160 of FIG. 1D.
In some cases, the user 102 may no longer wish to see the assistant driving mode GUI 144. To dismiss the assistant driving mode GUI 144, the user 102 can provide an input to the automated assistant and/or the computing device 104 that causes the assistant driving mode GUI 144 to be removed from the display interface 106 (e.g., by swiping at the display interface 106 to "dismiss" it). In response, the automated assistant can cause the display interface 106 to revert from the content displayed in FIG. 1C to the content displayed in FIG. 1A. In other words, in response to the user 102 dismissing the assistant driving mode GUI 144 presented in FIG. 1C, the automated assistant can replace the assistant driving mode GUI 144 with the selectable assistant GUI element 110, as illustrated in FIG. 1A.
In some implementations, the user 102 can also select the assistant suggestion 150 when the automated assistant is operating according to the assistant driving mode, as illustrated in view 140 of FIG. 1C. In response, the area of the display interface 106 occupied by the content of the assistant suggestion 150 can be expanded to a larger area. Alternatively or additionally, additional content associated with the assistant suggestion 150 can be presented in response to the user 102 selecting the assistant suggestion 150. For example, and as illustrated in FIG. 1D, the assistant suggestion 150 can be expanded to include content characterizing multiple messages received from multiple other people. The user 102 can select one of the messages, as illustrated in FIG. 1D, to cause the automated assistant to present the content of the message in a manner optimized for safer driving. For example, in response to the user 102 tapping a GUI element corresponding to a message from "Jane," the automated assistant can audibly render an output 162, such as "Jane says: 'Do I need to bring anything?'" so that the user 102 does not have to read the message from the display interface 106.
FIG. 2 illustrates a system 200 that provides an automated assistant that facilitates certain driving optimization functions based on a confidence (i.e., a confidence score) with which a user is predicted to be traveling in a vehicle. The automated assistant 204 can operate as part of an assistant application provided at one or more computing devices, such as a computing device 202 and/or a server device. A user can interact with the automated assistant 204 via assistant interface(s) 220, which can include one or more of a microphone, a camera, a touch screen display, a user interface, and/or any other apparatus capable of providing an interface between the user and an application. For example, the user can initialize the automated assistant 204 by providing verbal, textual, and/or graphical input to an assistant interface 220 to cause the automated assistant 204 to initialize one or more actions (e.g., providing data, controlling a peripheral device, accessing an agent, generating input and/or output, etc.).
Alternatively, the automated assistant 204 can be initialized based on processing of context data 236 using one or more trained machine learning models. The context data 236 can characterize one or more features of an environment accessible to the automated assistant 204 and/or one or more features of a user predicted to intend to interact with the automated assistant 204 (with prior permission from the user). The computing device 202 can include a display device, which can be a display panel that includes a touch interface for receiving touch inputs and/or gestures to allow the user to control the applications 234 of the computing device 202 via the touch interface. In some implementations, the computing device 202 can lack a display device, thereby providing audible user interface output rather than graphical user interface output. Furthermore, the computing device 202 can provide a user interface, such as a microphone, for receiving spoken natural language input from the user. In some embodiments, the computing device 202 can include a touch interface and lack a camera, but can optionally include one or more other sensors.
The computing device 202 and/or other third party client devices can communicate with the server device over a network such as the internet. In addition, computing device 202 and any other computing devices can communicate with each other over a Local Area Network (LAN), such as a Wi-Fi network. The computing device 202 is able to offload computing tasks to a server device in order to save computing resources at the computing device 202. For example, the server device can host the automated assistant 204, and/or the computing device 202 can transmit input received at the one or more assistant interfaces 220 to the server device. However, in some implementations, the automated assistant 204 can be hosted at the computing device 202, and various processes that can be associated with automated assistant operations can be performed at the computing device 202.
In various implementations, all or less than all aspects of the automated assistant 204 can be implemented on the computing device 202. In some of those implementations, aspects of the automated assistant 204 are implemented via the computing device 202 and can interface with a server device that can implement other aspects of the automated assistant 204. The server device can serve multiple users and their associated automated assistant applications, optionally via multiple threads. In implementations in which all or less than all aspects of the automated assistant 204 are implemented via the computing device 202, the automated assistant 204 can be an application that is separate from the operating system of the computing device 202 (e.g., installed "on top" of the operating system), or can alternatively be implemented directly by the operating system of the computing device 202 (e.g., considered an application of, but integrated with, the operating system).
In some implementations, the automated assistant 204 can include an input processing engine 206 that can employ a plurality of different modules to process inputs and/or outputs of the computing device 202 and/or the server device. For example, input processing engine 206 can include a speech processing engine 208 that can process audio data received at assistant interface 220 to identify text embodied in the audio data. Audio data may be transmitted from, for example, computing device 202 to a server device in order to conserve computing resources at computing device 202. Additionally or alternatively, the audio data can be processed exclusively at the computing device 202.
The process for converting audio data to text can include a speech recognition algorithm, which can employ neural networks and/or statistical models to identify groups of audio data corresponding to words or phrases. The text converted from the audio data can be parsed by a data parsing engine 210 and made available to the automated assistant 204 as text data that can be used to generate and/or identify command phrase(s), intent(s), action(s), slot value(s), and/or any other content specified by the user. In some embodiments, output data provided by the data parsing engine 210 can be provided to a parameter engine 212 to determine whether the user provided an input that corresponds to a particular intent, action, and/or routine capable of being performed by the automated assistant 204 and/or an application or agent that can be accessed via the automated assistant 204. For example, assistant data 238 can be stored at the server device and/or the computing device 202 and can include data that defines one or more actions capable of being performed by the automated assistant 204, as well as parameters necessary to perform those actions. The parameter engine 212 can generate one or more parameters for an intent, action, and/or slot value and provide the one or more parameters to an output generation engine 214. The output generation engine 214 can use the one or more parameters to communicate with an assistant interface 220 for providing output to the user and/or to communicate with one or more applications 234 for providing output to the one or more applications 234.
In some implementations, the automated assistant 204 can be an application that can be installed on top of the operating system of the computing device 202, and/or can itself form part (or all) of the operating system of the computing device 202. The automated assistant application includes and/or has access to on-device speech recognition, on-device natural language understanding, and on-device fulfillment. For example, on-device speech recognition can be performed using an on-device speech recognition module that processes audio data (detected by microphone (s)) using an end-to-end speech recognition machine learning model stored locally at computing device 202. On-device speech recognition generates recognized text of a spoken utterance (if any) present in the audio data. Also, for example, on-device Natural Language Understanding (NLU) can be performed using an on-device NLU module that processes recognized text and optionally contextual data generated using on-device speech recognition to generate NLU data.
The NLU data can include intent(s) that correspond to the spoken utterance and, optionally, parameter(s) (e.g., slot value(s)) for the intent(s). On-device fulfillment can be performed using an on-device fulfillment module that utilizes the NLU data (from the on-device NLU) and optionally other local data to determine action(s) to take to resolve the intent(s) of the spoken utterance (and optionally the parameter(s) for the intent(s)). This can include determining local and/or remote responses (e.g., answers) to the spoken utterance, interaction(s) with locally installed application(s) to perform based on the spoken utterance, command(s) to transmit to internet of things (IoT) device(s) based on the spoken utterance (directly or via corresponding remote system(s)), and/or other resolution action(s) to perform based on the spoken utterance. The on-device fulfillment can then initiate local and/or remote performance/execution of the determined action(s) to resolve the spoken utterance.
In various embodiments, remote speech processing, remote NLU, and/or remote fulfillment can at least selectively be utilized. For example, recognized text can at least selectively be transmitted to remote automated assistant component(s) for remote NLU and/or remote fulfillment. For instance, the recognized text can optionally be transmitted for remote performance in parallel with on-device performance, or in response to failure of on-device NLU and/or on-device fulfillment. However, on-device speech processing, on-device NLU, on-device fulfillment, and/or on-device execution can be prioritized at least due to the latency reductions they provide when resolving a spoken utterance (due to no client-server round trip being needed to resolve the spoken utterance). Further, on-device functionality can be the only functionality that is available in situations with no or limited network connectivity.
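To make the on-device-first ordering concrete, here is a minimal sketch (the function names and the toy NLU logic are hypothetical placeholders, not an actual assistant API):

# Sketch of prioritizing on-device processing with selective remote fallback.
def on_device_nlu(text):
    # Toy stand-in for an on-device NLU module.
    return {"intent": "navigation" if "store" in text else "unknown", "text": text}

def on_device_fulfillment(nlu):
    if nlu["intent"] == "navigation":
        return f"Starting navigation for {nlu['text']!r}"
    return None  # on-device fulfillment failed

def remote_fulfillment(text):
    return f"Remote answer for {text!r}"

def fulfill(recognized_text, network_available):
    result = on_device_fulfillment(on_device_nlu(recognized_text))
    if result is not None:
        return result  # lowest latency: no client-server round trip
    if network_available:
        return remote_fulfillment(recognized_text)  # selective remote fallback
    return "Sorry, that isn't available offline."

print(fulfill("directions to the store", network_available=True))
print(fulfill("define doo-wop", network_available=True))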
In some embodiments, the computing device 202 can include one or more applications 234, which can be provided by a third-party entity that is different from the entity that provided the computing device 202 and/or the automated assistant 204. An application state engine of the automated assistant 204 and/or the computing device 202 can access application data 230 to determine one or more actions capable of being performed by the one or more applications 234, as well as a state of each of the one or more applications 234 and/or a state of a respective device associated with the computing device 202. A device state engine of the automated assistant 204 and/or the computing device 202 can access device data 232 to determine one or more actions capable of being performed by the computing device 202 and/or the one or more devices associated with the computing device 202. Furthermore, the application data 230 and/or any other data (e.g., the device data 232) can be accessed by the automated assistant 204 to generate the context data 236, which can characterize a context in which a particular application 234 and/or device is executing, and/or a context in which a particular user is accessing the computing device 202, accessing an application 234, and/or any other device or module.
While one or more applications 234 are executing at the computing device 202, the device data 232 can characterize a current operating state of each application 234 executing at the computing device 202. Furthermore, the application data 230 can characterize one or more features of an executing application 234, such as content of one or more graphical user interfaces being rendered at the direction of the one or more applications 234. Alternatively or additionally, the application data 230 can characterize an action schema, which can be updated by a respective application and/or by the automated assistant 204 based on a current operating state of the respective application. Alternatively or additionally, one or more action schemas for one or more applications 234 can remain static, but can be accessed by the application state engine to determine a suitable action to initialize via the automated assistant 204.
The computing device 202 can further include an assistant invocation engine 222 that can use one or more trained machine learning models to process the application data 230, the device data 232, the context data 236, and/or any other data that is accessible to the computing device 202. The assistant invocation engine 222 can process this data to determine whether to wait for the user to explicitly speak an invocation phrase to invoke the automated assistant 204, or to consider the data to be indicative of an intent by the user to invoke the automated assistant, in lieu of requiring the user to explicitly speak the invocation phrase. For example, the one or more trained machine learning models can be trained using instances of training data that are based on scenarios in which the user is in an environment where multiple devices and/or applications are exhibiting various operating states. The instances of training data can be generated to capture training data that characterizes contexts in which the user invokes the automated assistant and other contexts in which the user does not invoke the automated assistant. When the one or more trained machine learning models are trained according to these instances of training data, the assistant invocation engine 222 can cause the automated assistant 204 to detect, or limit detecting, spoken invocation phrases from the user based on features of a context and/or an environment. Additionally or alternatively, the assistant invocation engine 222 can cause the automated assistant 204 to detect, or limit detecting, one or more assistant commands from the user based on features of a context and/or an environment.
In some implementations, the system 200 can include a travel prediction engine 216 that can generate predictions regarding whether a user is traveling via a mode of transportation (e.g., a vehicle). The travel prediction engine 216 can generate predictions regarding whether the user is traveling based on the application data 230, the device data 232, the context data 236, and/or any other data accessible to the system 200. In some implementations, the travel prediction engine 216 can generate predictions based on whether the computing device 202 is in communication with a vehicle computing device (e.g., via Bluetooth or another protocol), whether the user is accessing a navigation application, and/or whether motion of the user and/or the computing device is indicative of vehicular travel. Data characterizing the predictions can be communicated to a prediction score engine 218 of the system 200.
The prediction score engine 218 can generate a score that indicates a confidence of a prediction that the user is traveling via a vehicle. For example, the score can indicate a higher confidence when the computing device 202 is in communication with the vehicle computing device and the user is accessing the navigation application. Further, the score can indicate a relatively low confidence when the computing device is not in communication with the vehicle computing device but the user is accessing the navigation application. Alternatively or additionally, the score can indicate a relatively low confidence when the prediction is based on motion of the user and/or data from one or more sensors of the computing device 202.
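A toy version of such a scoring function, preserving only the relative ordering described above (the numeric values themselves are assumptions):

# Sketch of the prediction score engine's signal ranking; values are hypothetical.
def prediction_score(vehicle_connection, navigation_app, sensor_motion):
    if vehicle_connection and navigation_app:
        return 0.95  # higher confidence: combined signals
    if navigation_app and not vehicle_connection:
        return 0.55  # relatively low confidence
    if sensor_motion:
        return 0.50  # relatively low confidence: motion alone
    return 0.10

print(prediction_score(True, True, False))   # 0.95
print(prediction_score(False, True, False))  # 0.55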
A driving mode GUI engine 226 can process the score from the prediction score engine 218 to determine whether to operate the computing device 202 and/or the automated assistant 204 in the assistant driving mode. Alternatively or additionally, the driving mode GUI engine 226 can process the score to determine whether to cause a selectable assistant GUI element to be presented at an interface of the computing device 202. For example, the score can be compared to a score threshold for automatically initializing the assistant driving mode. When the score threshold is met, the driving mode GUI engine 226 can cause the automated assistant 204 to operate according to the assistant driving mode and also cause the selectable assistant GUI element to be presented at a display interface of the computing device 202. When the score threshold is not met, the driving mode GUI engine 226 can cause the selectable assistant GUI element to be presented at the display interface of the computing device 202. Thereafter, the driving mode GUI engine 226 can wait for the user to select the selectable assistant GUI element before causing the automated assistant 204 to operate according to the assistant driving mode.
In some implementations, a GUI timer engine 224 of the system 200 can cause a timer to be presented at the display interface of the computing device 202 to indicate a duration for which the selectable assistant GUI element will be presented. When the user does not select the selectable assistant GUI element within that duration, the GUI timer engine 224 can indicate to the driving mode GUI engine 226 that the user has not selected the selectable assistant GUI element within the duration. In response, the driving mode GUI engine 226 can cause the selectable assistant GUI element to be removed from the display interface. In some embodiments, the duration can be selected by the GUI timer engine 224 based on the score generated by the prediction score engine 218. For example, the duration of the timer can be longer (e.g., 20 seconds) for a score indicating a higher confidence, and shorter (e.g., 5 seconds) for a score indicating a lower confidence.
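For illustration, the duration selection might interpolate between the two example durations (the linear mapping is an assumption; only the 20-second and 5-second endpoints come from the example above):

# Sketch of choosing the timer duration from the confidence score.
def timer_duration(score, low_s=5.0, high_s=20.0):
    score = max(0.0, min(1.0, score))  # clamp to [0, 1]
    return low_s + (high_s - low_s) * score

print(timer_duration(1.0))  # 20.0 seconds for a high-confidence score
print(timer_duration(0.0))  # 5.0 seconds for a low-confidence score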
FIG. 3 illustrates a method 300 for proactively operating an automated assistant in an assistant driving optimization mode and providing additional driving optimization features when a user explicitly elects to operate in the driving optimization mode. The method 300 can be performed by one or more computing devices, applications, and/or any other apparatus or module capable of being associated with an automated assistant. The method 300 can include an operation 302 of determining whether the user is predicted to be traveling along a predicted route. In some embodiments, the determination at operation 302 can be based on one or more sources of data, such as, but not limited to, one or more applications, sensors, and/or devices. For example, a determination that a user is traveling can be based on a computing device (e.g., a cell phone) connecting to a vehicle computing device via a wireless communication protocol and/or the user initializing a navigation application to navigate to a particular destination. In some implementations, the determination at operation 302 can be based on one or more sensors of the computing device and/or the vehicle computing device indicating that the user is moving in a manner that suggests the user is riding in a vehicle.
When the user is predicted to be traveling, the method 300 can proceed from operation 302 to operation 304. Otherwise, the application and/or device performing the method 300 can continue to determine whether the user is predicted to be traveling and/or operate the automated assistant in a non-driving mode. Operation 304 can include generating a prediction score that characterizes a confidence of the prediction that the user is traveling. For example, the confidence score may be higher when the user's computing device is in communication with the vehicle computing device than when the computing device is not in communication with the vehicle computing device. The method 300 can proceed from operation 304 to operation 306 of determining whether the prediction score meets a threshold. When the prediction score meets the threshold, the method 300 can proceed from operation 306 to operation 310. Otherwise, when the threshold is not met, the method 300 can proceed from operation 306 to operation 308.
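The branching of operations 302 through 310 might be sketched as follows (the threshold value is an assumption; the operation numbers follow FIG. 3):

# Sketch of the decision flow of operations 302-310.
def method_300_step(traveling_predicted, prediction_score, threshold=0.7):
    if not traveling_predicted:
        return "operate in non-driving mode"           # remain at operation 302
    if prediction_score >= threshold:                  # operation 306
        return "present assistant driving mode GUI"    # operation 310
    return "present selectable assistant GUI element"  # operation 308

print(method_300_step(True, 0.9))   # present assistant driving mode GUI
print(method_300_step(True, 0.4))   # present selectable assistant GUI element
print(method_300_step(False, 0.0))  # operate in non-driving mode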
Operation 308 can include causing the selectable assistant GUI element to be presented at a display interface of a computing device (e.g., a portable computing device separate from the vehicle computing device). The selectable assistant GUI element can be, for example, a selectable icon including an automobile graphic to indicate that selecting the selectable assistant GUI element will cause the automatic assistant to operate in a driving optimization mode (i.e., assistant driving mode). In some embodiments, the selectable assistant GUI element can be presented more prominently when the prediction score indicates a higher confidence in the prediction that the user is traveling, and can be presented less prominently when the prediction score indicates a lower confidence. From operation 308, the method 300 may proceed to operation 312.
Operation 312 can include determining whether the user has selected the selectable assistant GUI element. Upon determining that the user has not selected the selectable assistant GUI element, the method 300 can proceed from operation 312 to operation 318. Operation 318 can include causing the automated assistant to operate according to an assistant driving mode. The assistant driving mode can be a mode in which the automated assistant renders certain outputs and/or processes certain inputs in a manner that is optimized for driving and/or promoting safety. For example, a notification of an incoming message can be presented at the display interface with a text size that is larger than the font size that would be used for the notification if the user were not predicted to be traveling. Alternatively or additionally, the notification of the incoming message can be presented in a region of the display interface that is larger than the region that would be used for the notification if the user were not predicted to be traveling.
In some embodiments, when the automated assistant operates according to the assistant driving mode, input to the automated assistant can be processed using at least local data characterizing the user's geographic location. For example, when the user provides an input such as "Assistant, how much is the price of gasoline?" while the automated assistant is operating in the assistant driving mode, the input can be processed using data corresponding to the user's location. For instance, the automated assistant can generate a responsive output using data related to the user's current location, such as "Gas is $2.35 per gallon at the station that is 0.25 miles from your location." However, when the user provides this input while the automated assistant is not operating in the assistant driving mode, the automated assistant can provide another responsive output, such as "Crude oil is $70 per barrel today." Such other responsive outputs can be based on one or more sources of data that may not include, or may not prioritize, local data.
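A small sketch of conditioning the response on local data in this way (the response strings paraphrase the examples above; the data values are illustrative):

# Sketch of location-aware fulfillment in the assistant driving mode.
def answer_gas_price(driving_mode, local_data):
    if driving_mode:
        # Prioritize local data characterizing the user's geographic location.
        return (f"Gas is ${local_data['price_per_gallon']:.2f} per gallon at the "
                f"station that is {local_data['distance_miles']} miles from your location.")
    # Outside the driving mode, fall back to a general, non-local answer.
    return "Crude oil is $70 per barrel today."

print(answer_gas_price(True, {"price_per_gallon": 2.35, "distance_miles": 0.25}))
print(answer_gas_price(False, {}))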
In some implementations, operation 318 can be bypassed when the prediction score does not meet another prediction threshold for bypassing the initiation of the assistant driving mode. In this way, when the other prediction threshold is not met, the automated assistant can optionally wait for the user to select the selectable assistant GUI element before operating according to the assistant driving mode. From operation 318, the method 300 can proceed to optional operation 320 of determining whether the user selects the selectable assistant GUI element before a threshold duration has elapsed and/or whether the user dismisses the selectable assistant GUI element. In some implementations, the threshold duration can be based on the score of the prediction that the user is traveling in the vehicle. The score can indicate a confidence of the prediction that the user is traveling in the vehicle and/or driving the vehicle. For example, the threshold duration may be longer for a higher-confidence score and shorter for a lower-confidence score. This can allow the user more time to activate the assistant driving mode via the selectable assistant GUI element when it is more likely that the user is traveling in the vehicle. When the threshold duration elapses without the user selecting the selectable assistant GUI element, the method 300 can optionally proceed from operation 320 to operation 310, or optionally from operation 320 to operation 316. Otherwise, when it is determined at operation 312 that the user has selected the selectable assistant GUI element, the method 300 can proceed to operation 310.
Operation 310 can include causing an assistant driving mode GUI to be presented at the display interface based on the score of the prediction that the user is traveling. For example, when the score meets a threshold score, the assistant driving mode GUI can be presented with a first portion that includes a navigation interface and a second portion that includes one or more selectable suggestions. Alternatively or additionally, when the score does not meet the threshold score, the assistant driving mode GUI can initially be presented with either the navigation interface or the one or more selectable suggestions. In some cases, the score can satisfy the threshold score when an antenna or sensor of the computing device communicates with the vehicle computing device via a wireless communication protocol. Alternatively or additionally, the score can satisfy the threshold score when the computing device is in communication with the vehicle computing device and the user is accessing a navigation application via the computing device and/or the vehicle computing device. In some cases, the score may not satisfy the threshold score when the computing device is not in communication with the vehicle computing device but the user is accessing the navigation application via the computing device.
In some embodiments, the method 300 can proceed from operation 310 to operation 314, which can include causing the automated assistant to operate according to the assistant driving mode. Characteristics of the automated assistant and/or of the content presented by the computing device can be adjusted based on the score of the prediction that the user is traveling in the vehicle. From operation 314, the method 300 can proceed to optional operation 316, which can include causing the selectable assistant GUI element to be removed from the display interface. Thereafter, the method 300 can proceed from operation 314 or operation 316 back to operation 302 for determining whether the user is predicted to be traveling in a vehicle. When the user is predicted to no longer be traveling in a vehicle (e.g., a car, truck, airplane, bicycle, motorcycle, boat, and/or any other mode of transportation), the automated assistant can cease operating according to the assistant driving mode. Alternatively, when the user dismisses or swipes away the assistant driving mode GUI, the method 300 can proceed from operation 314 or operation 316 to operation 308. In this way, by dismissing the assistant driving mode GUI and causing the selectable assistant GUI element to be presented again, the user has a "shortcut" for revisiting the assistant driving mode GUI during their travels.
FIG. 4 is a block diagram 400 of an example computer system 410. The computer system 410 typically includes at least one processor 412 that communicates with a number of peripheral devices via a bus subsystem 414. These peripheral devices may include: a storage subsystem 424, including, for example, a memory 425 and a file storage subsystem 426; user interface output devices 420; user interface input devices 422; and a network interface subsystem 416. The input and output devices allow user interaction with the computer system 410. The network interface subsystem 416 provides an interface to external networks and couples to corresponding interface devices in other computer systems.
User interface input devices 422 may include a keyboard, a pointing device such as a mouse, trackball, touch pad, or tablet, a scanner, a touch screen incorporated into a display, an audio input device such as a voice recognition system, a microphone, and/or other types of input devices. In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into computer system 410 or onto a communication network.
The user interface output device 420 may include a display subsystem, a printer, a facsimile machine, or a non-visual display such as an audio output device. The display subsystem may include a Cathode Ray Tube (CRT), a flat panel device such as a Liquid Crystal Display (LCD), a projection device, or some other mechanism for producing a viewable image. The display subsystem may also provide for a non-visual display, such as via an audio output device. In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from computer system 410 to a user or another machine or computer system.
Storage subsystem 424 stores programming and data structures that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 424 may include logic for performing selected aspects of the method 300 and/or implementing the system 200, the computing device 104, the vehicle computing device, the automated assistant, and/or any other application, device, apparatus, and/or module discussed herein.
These software modules are generally executed by the processor 412 alone or in combination with other processors. The memory 425 used in the storage subsystem 424 can include a number of memories, including a main random access memory (RAM) 430 for storage of instructions and data during program execution and a read-only memory (ROM) 432 in which fixed instructions are stored. The file storage subsystem 426 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. Modules implementing the functionality of certain embodiments may be stored by the file storage subsystem 426 in the storage subsystem 424, or in other machines accessible by the processor(s) 412.
The bus subsystem 414 provides a mechanism for letting the various components and subsystems of the computer system 410 communicate with each other as intended. Although the bus subsystem 414 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple buses.
Computer system 410 can be of various types including a workstation, a server, a computing cluster, a blade server, a server farm, or any other data processing system or computing device. Because of the ever-changing nature of computers and networks, the description of computer system 410 depicted in FIG. 4 is intended only as a specific example for purposes of illustrating some embodiments. Many other configurations of computer system 410 are possible with more or fewer components than the computer system depicted in FIG. 4.
Where the systems described herein collect personal information about a user (or what is generally referred to herein as a "participant"), or may utilize the personal information, the user may be provided with an opportunity to control whether programs or features collect user information (e.g., information about the user's social network, social behavior or activity, profession, user's preferences, or the user's current geographic location), or whether and/or how to receive content from a content server that may be more relevant to the user. In addition, certain data may be processed in one or more ways prior to storage or use such that personal identification information is purged. For example, the identity of the user may be processed such that no personally identifiable information of the user can be determined, or the geographic location of the user may be summarized (such as to a city, zip code, or state level) with the geographic location information obtained such that a particular geographic location of the user cannot be determined. Thus, the user may control how information is collected about the user and/or how information is used.
Although several embodiments have been described and illustrated herein, various other ways and/or structures for performing the functions and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of these variations and/or modifications is considered to be within the scope of the embodiments described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, the embodiments may be practiced otherwise than as specifically described and claimed. Embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, any combination of two or more such features, systems, articles, materials, kits, and/or methods is included within the scope of the present disclosure.
In some implementations, a method implemented by one or more processors is set forth as including operations such as determining, at a computing device, a prediction that a user of the computing device is traveling in a vehicle, wherein the computing device provides access to an automated assistant and is separate from a vehicle computing device of the vehicle. In some implementations, the method can further include, when the prediction that the user is traveling in the vehicle is based on user-initiated access of a navigation interface via the computing device or the vehicle computing device, and the computing device is in communication with the vehicle computing device: causing an assistant driving mode graphical user interface (GUI) to be automatically presented at a display interface of the computing device, wherein the assistant driving mode GUI includes the navigation interface and content that the user is predicted to access when the user is traveling in the vehicle. In some implementations, the method can further include, when the prediction that the user is traveling in the vehicle is based on the computing device communicating with the vehicle computing device, without the navigation interface being accessed via the computing device: causing a selectable assistant GUI element to be presented at the display interface of the computing device, wherein selection of the selectable assistant GUI element causes the navigation interface to be presented at the display interface of the computing device.
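For illustration only, the two branches above can be summarized as a simple decision procedure. The following is a minimal Python sketch; all names (e.g., TravelSignals, choose_driving_mode_ui) are hypothetical and are not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class TravelSignals:
    """Hypothetical container for the two signals discussed above."""
    user_opened_navigation: bool  # user-initiated access of a navigation interface
    connected_to_vehicle: bool    # device communicates with the vehicle computing device

def choose_driving_mode_ui(signals: TravelSignals) -> str:
    # Higher confidence: navigation was opened and the device is in
    # communication with the vehicle, so the assistant driving mode GUI
    # is presented automatically.
    if signals.user_opened_navigation and signals.connected_to_vehicle:
        return "auto_present_driving_mode_gui"
    # Lower confidence: the device is in communication with the vehicle,
    # but navigation has not been accessed, so only a selectable
    # assistant GUI element is presented.
    if signals.connected_to_vehicle:
        return "present_selectable_assistant_gui_element"
    return "no_driving_mode_ui"

# Example: both signals present, so the driving mode GUI appears automatically.
print(choose_driving_mode_ui(TravelSignals(True, True)))
```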
In some implementations, the method can further include, when the prediction that the user is traveling in the vehicle is based on one or more sensors of the computing device indicating that the user is traveling in the vehicle, and the computing device is not in communication with the vehicle computing device: operating the automated assistant in an assistant driving mode in which particular user inputs to the automated assistant are processed using local data characterizing the geographic location of the user or a predicted route of the vehicle. In some implementations, causing the selectable assistant GUI element to be presented at the display interface of the computing device includes: causing a countdown timer to be presented and initialized via the computing device, wherein the selectable assistant GUI element is removed from the display interface in response to the user not selecting the selectable assistant GUI element before expiration of the countdown timer. In other implementations, causing the selectable assistant GUI element to be presented at the display interface of the computing device includes: causing a countdown timer to be presented and initialized via the computing device, wherein, when the user does not select the selectable assistant GUI element before expiration of the countdown timer, the automated assistant operates according to an assistant driving mode in which particular user inputs to the automated assistant are processed using local data characterizing the geographic location of the user or a predicted route of the vehicle. In some embodiments, the content of the assistant driving mode GUI includes a first selectable element corresponding to a messaging application and a second selectable element corresponding to a media streaming application.
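The countdown behavior can be sketched with a simple timer. The snippet below is an illustrative assumption (the function name and the expiry policy flag are invented for clarity; a real implementation would hook into the device's UI framework rather than print):

```python
import threading

def present_selectable_element(timeout_s: float,
                               enter_driving_mode_on_expiry: bool) -> threading.Timer:
    """Show the selectable assistant GUI element with a countdown timer.

    If the timer expires without a user selection, either remove the
    element or transition the assistant into the driving mode, matching
    the two variations described above.
    """
    def on_expiry() -> None:
        if enter_driving_mode_on_expiry:
            print("Countdown expired: operating in assistant driving mode.")
        else:
            print("Countdown expired: removing selectable assistant GUI element.")

    timer = threading.Timer(timeout_s, on_expiry)
    timer.start()
    # A tap on the element would call timer.cancel() and present the
    # navigation interface instead.
    return timer

timer = present_selectable_element(timeout_s=0.1, enter_driving_mode_on_expiry=False)
timer.join()  # wait for the demo timer to fire
```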
In other implementations, a method implemented by one or more processors is set forth that includes operations such as determining, at a computing device, a predicted score that a user is traveling in a vehicle, wherein the computing device provides access to an automated assistant and is separate from a vehicle computing device of the vehicle. In some embodiments, the method can further include, when the predicted score meets a score threshold: causing an assistant driving mode graphical user interface (GUI) to be automatically presented at a display interface of the computing device, wherein the assistant driving mode GUI includes a navigation interface and content that the user is predicted to access when the user is traveling in the vehicle. In some embodiments, the method can further include, when the predicted score does not meet the score threshold: causing a selectable assistant GUI element to be presented at the display interface of the computing device, wherein selection of the selectable assistant GUI element causes the automated assistant to operate according to an assistant driving mode in which the navigation interface is presented at the display interface.
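A thresholding sketch, assuming a hypothetical score in [0, 1] and an arbitrary threshold (the disclosure does not fix concrete values), might look like:

```python
SCORE_THRESHOLD = 0.8  # illustrative value only

def select_ui_for_score(predicted_score: float) -> str:
    # Score meets the threshold: automatically present the full
    # assistant driving mode GUI with the navigation interface.
    if predicted_score >= SCORE_THRESHOLD:
        return "auto_present_driving_mode_gui"
    # Score below the threshold: present only the selectable element;
    # the driving mode starts only if the user selects it.
    return "present_selectable_assistant_gui_element"

print(select_ui_for_score(0.9))  # auto_present_driving_mode_gui
print(select_ui_for_score(0.4))  # present_selectable_assistant_gui_element
```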
In some implementations, causing the selectable assistant GUI element to be presented at the display interface includes: causing the selectable assistant GUI element to be presented above a home screen or lock screen that is presented at the display interface of the computing device. In some embodiments, determining the predicted score that the user is traveling in the vehicle includes: generating the predicted score based on data generated using one or more sensors of the computing device or other data available at another computing device associated with the user. In some embodiments, when the assistant driving mode GUI is presented at the display interface, the navigation interface is presented in a larger area of the display interface when the score meets the score threshold than when the navigation interface is presented and the score does not meet the score threshold.
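One way to combine such signals into a score is a weighted sum. The weights and speed cutoff below are assumptions for illustration, not values from the disclosure:

```python
def predict_travel_score(speed_mps: float,
                         connected_to_vehicle: bool,
                         navigation_active: bool) -> float:
    """Toy weighted combination of travel signals; returns a score in [0, 1]."""
    score = 0.0
    if speed_mps > 5.0:        # sensor data suggests vehicular speed
        score += 0.5
    if connected_to_vehicle:   # e.g., a wireless link to the vehicle computing device
        score += 0.25
    if navigation_active:      # the user is accessing a navigation interface
        score += 0.25
    return min(score, 1.0)

print(predict_travel_score(speed_mps=15.0,
                           connected_to_vehicle=True,
                           navigation_active=False))  # 0.75
```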
In some embodiments, when the assistant driving mode GUI is presented at the display interface, the text content of the assistant driving mode GUI is presented larger when the score meets the score threshold than when the text content is presented and the score does not meet the score threshold. In some embodiments, when the predicted score does not meet the score threshold, causing the selectable assistant GUI element to be presented at the display interface of the computing device includes: causing the display interface or other interface of the computing device to present an indication that the selectable assistant GUI element is to be removed from the display interface after a threshold duration, wherein the threshold duration is based on the predicted score that the user is traveling in the vehicle. In some implementations, the predicted score is based on whether a network antenna of the computing device facilitates wireless communication between the computing device and the vehicle computing device.
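The score-dependent threshold duration could, for example, be a monotone mapping from score to on-screen time; the linear form and the default durations below are purely assumptions:

```python
def selectable_element_timeout_s(predicted_score: float,
                                 base_s: float = 10.0,
                                 max_extra_s: float = 20.0) -> float:
    """Map the predicted score to how long the selectable assistant GUI
    element stays on screen before being removed (illustrative only:
    a higher-confidence prediction keeps the element visible longer)."""
    clamped = max(0.0, min(predicted_score, 1.0))
    return base_s + max_extra_s * clamped

print(selectable_element_timeout_s(0.5))  # 20.0 seconds
```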
In yet other embodiments, a method implemented by one or more processors is set forth as including operations such as determining, at a computing device, a prediction that a user is traveling in a vehicle based on data generated using one or more sensors of the computing device, wherein the computing device provides access to an automated assistant and is separate from a vehicle computing device of the vehicle. The method can further include, based on determining the prediction that the user is traveling in the vehicle, causing a selectable assistant graphical user interface (GUI) element to be presented at a display interface of the computing device. The method can further include, upon receiving a selection of the selectable assistant GUI element to activate a driving mode of the automated assistant: in response to receiving the selection of the selectable assistant GUI element, causing an assistant driving mode GUI to be presented at the display interface of the computing device, wherein characteristics of one or more selectable GUI elements presented at the assistant driving mode GUI are selected based on a predicted score that the user is traveling in the vehicle.
In some implementations, causing the selectable assistant GUI element to be presented at the display interface includes: causing the selectable assistant GUI element to be presented above a home screen or lock screen that is presented at the display interface of the computing device. In some embodiments, determining the prediction that the user is traveling in the vehicle includes: generating the predicted score based on whether the user is accessing a navigation application and whether the computing device is in communication with the vehicle computing device. In some embodiments, causing the assistant driving mode GUI to be presented at the display interface includes: determining whether the score meets a score threshold for selecting an area of the display interface to be occupied by the assistant driving mode GUI, wherein the area is larger when the score meets the score threshold than when the score does not meet the score threshold.
In some embodiments, causing the assistant driving mode GUI to be presented at the display interface includes: determining whether the score meets a score threshold for selecting a text size of text corresponding to the assistant driving mode GUI, wherein the text size is larger when the score meets the score threshold than when the score does not meet the score threshold. In some implementations, the method can further include, based on determining the prediction that the user is traveling in the vehicle, causing the automated assistant to operate according to an assistant driving mode, wherein, when the automated assistant operates according to the assistant driving mode, particular user inputs to the automated assistant are processed using local data characterizing a geographic location of the user or a predicted route of the user. In some embodiments, the method can further include, when a selection of the selectable assistant GUI element to activate the driving mode of the automated assistant is not received: causing the display interface or other interface of the computing device to present an indication that the selectable assistant GUI element is to be removed from the display interface after a threshold duration, wherein the threshold duration is based on the predicted score that the user is traveling in the vehicle. In some embodiments, determining the prediction that the user is traveling in the vehicle includes: generating the predicted score based on whether the computing device has received vehicle state data from the vehicle indicating that the user is traveling in the vehicle.
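As a final sketch, the score-dependent GUI characteristics described in this section (display area and text size) can be modeled as a layout lookup; the concrete fractions and sizes below are hypothetical:

```python
def driving_mode_layout(score: float, score_threshold: float = 0.8) -> dict:
    """Pick layout characteristics for the assistant driving mode GUI.

    When the score meets the threshold, the navigation interface occupies
    a larger area and text is rendered larger (values are illustrative).
    """
    if score >= score_threshold:
        return {"nav_area_fraction": 0.75, "text_size_sp": 24}
    return {"nav_area_fraction": 0.50, "text_size_sp": 18}

print(driving_mode_layout(0.9))  # larger area and text
print(driving_mode_layout(0.3))  # smaller area and text
```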

Claims (20)

1. A method implemented by one or more processors, the method comprising:
determining, at a computing device, a prediction that a user of the computing device is traveling in a vehicle,
wherein the computing device provides access to an automated assistant and is separate from a vehicle computing device of the vehicle;
when the prediction that the user is traveling in the vehicle is based on user-initiated access of a navigation interface via the computing device or the vehicle computing device, and the computing device is in communication with the vehicle computing device:
causing an assistant driving mode graphical user interface (GUI) to be automatically presented at a display interface of the computing device,
wherein the assistant driving mode GUI includes the navigation interface and content that the user is predicted to access when the user is traveling in the vehicle; and
when the prediction that the user is traveling in the vehicle is based on the computing device communicating with the vehicle computing device without accessing the navigation interface via the computing device:
causing a selectable assistant GUI element to be presented at the display interface of the computing device,
wherein selection of the selectable assistant GUI element causes the navigation interface to be presented at the display interface of the computing device.
2. The method of claim 1, further comprising:
when the user is predicted to be traveling in the vehicle based on one or more sensors of the computing device indicating that the user is traveling in the vehicle, and the computing device is not in communication with the vehicle computing device:
operating the automated assistant in an assistant driving mode in which particular user inputs to the automated assistant are processed using local data characterizing a geographic location of the user or a predicted route of the vehicle.
3. The method of claim 1 or claim 2, wherein causing the selectable assistant GUI element to be presented at the display interface of the computing device comprises:
causing a countdown timer to be presented and initialized via the computing device,
wherein the selectable assistant GUI element is removed from the display interface in response to the user not selecting the selectable assistant GUI element before the expiration of the countdown timer.
4. The method of claim 1 or claim 2, wherein causing the selectable assistant GUI element to be presented at the display interface of the computing device comprises:
causing a countdown timer to be presented and initialized via the computing device,
wherein, when the user does not select the selectable assistant GUI element before the expiration of the countdown timer, the automated assistant operates according to an assistant driving mode in which particular user inputs to the automated assistant are processed using local data characterizing a geographic location of the user or a predicted route of the vehicle.
5. The method of any preceding claim, wherein the content of the assistant driving mode GUI comprises a first selectable element corresponding to a messaging application and a second selectable element corresponding to a media streaming application.
6. A method implemented by one or more processors, the method comprising:
determining, at a computing device, a predicted score that a user is traveling in a vehicle,
wherein the computing device provides access to an automated assistant and is separate from a vehicle computing device of the vehicle;
when the predicted score meets a score threshold:
causing an assistant driving mode graphical user interface (GUI) to be automatically presented at a display interface of the computing device,
wherein the assistant driving mode GUI includes a navigation interface and content that the user is predicted to access when the user is traveling in the vehicle; and
when the predicted score does not meet the score threshold:
causing a selectable assistant GUI element to be presented at the display interface of the computing device,
wherein selection of the selectable assistant GUI element causes the automated assistant to operate according to an assistant driving mode in which the navigation interface is presented at the display interface.
7. The method of claim 6, wherein causing the selectable assistant GUI element to be presented at the display interface comprises:
causing the selectable assistant GUI element to be presented above a home screen or a lock screen, the home screen or lock screen being presented at the display interface of the computing device.
8. The method of claim 6 or claim 7, wherein determining the predicted score that the user is traveling in the vehicle comprises:
generating the predicted score based on data generated using one or more sensors of the computing device or other data available at another computing device associated with the user.
9. The method of any of claims 6-8, wherein, when the assistant driving mode GUI is presented at the display interface, the navigation interface is presented in a larger area of the display interface when the score meets the score threshold than when the navigation interface is presented and the score does not meet the score threshold.
10. The method of any of claims 6 to 9, wherein when the assistant driving mode GUI is presented at the display interface, the text content of the assistant driving mode GUI is presented larger when the score meets the score threshold than when text content is presented and the score does not meet the score threshold.
11. The method of any of claims 6-10, wherein causing the selectable assistant GUI element to be presented at the display interface of the computing device when the predicted score does not meet the score threshold comprises:
Causing the display interface or other interface of the computing device to present an indication that the selectable assistant GUI element is to be removed from the display interface after a threshold duration,
wherein the threshold duration is based on the predicted score that the user is traveling in the vehicle.
12. The method of any of claims 6-11, wherein the predicted score is based on whether a network antenna of the computing device facilitates wireless communication between the computing device and the vehicle computing device.
13. A method implemented by one or more processors, the method comprising:
determining at a computing device a prediction that a user is traveling in a vehicle based on data generated using one or more sensors of the computing device,
wherein the computing device provides access to an automated assistant and is separate from a vehicle computing device of the vehicle;
based on determining the prediction that the user is traveling in the vehicle, causing a selectable assistant graphical user interface (GUI) element to be presented at a display interface of the computing device;
upon receiving a selection of the selectable assistant GUI element to activate a driving mode of the automated assistant:
in response to receiving the selection of the selectable assistant GUI element, causing an assistant driving mode GUI to be presented at the display interface of the computing device,
wherein characteristics of one or more selectable GUI elements presented at the assistant driving mode GUI are selected based on a predicted score that the user is traveling in the vehicle.
14. The method of claim 13, wherein causing the selectable assistant GUI element to be presented at the display interface comprises:
causing the selectable assistant GUI element to be presented above a home screen or a lock screen that is presented at the display interface of the computing device.
15. The method of claim 13 or claim 14, wherein determining the prediction that the user is traveling in the vehicle comprises:
generating the predicted score based on whether the user is accessing a navigation application and whether the computing device is in communication with the vehicle computing device.
16. The method of any of claims 13-15, wherein causing the assistant driving mode GUI to be presented at the display interface comprises:
determining whether the score meets a score threshold to select an area of the display interface to be occupied by the assistant driving mode GUI,
wherein the region is larger when the score meets the score threshold than when the score does not meet the score threshold.
17. The method of any of claims 13-16, wherein causing the assistant driving mode GUI to be presented at the display interface comprises:
determining whether the score meets a score threshold to select a text size of text corresponding to the assistant driving mode GUI,
wherein the text size is greater when the score meets the score threshold than when the score does not meet the score threshold.
18. The method of any of claims 13 to 17, further comprising:
based on determining the prediction that the user is traveling in the vehicle, causing the automated assistant to operate according to an assistant driving mode,
wherein, when the automated assistant operates according to the assistant driving mode, particular user inputs to the automated assistant are processed using local data characterizing a geographic location of the user or a predicted route of the user.
19. The method of any of claims 13 to 18, further comprising:
when the selection of the selectable assistant GUI element to activate the driving mode of the automated assistant is not received:
causing the display interface or other interface of the computing device to present an indication that the selectable assistant GUI element is to be removed from the display interface after a threshold duration,
wherein the threshold duration is based on the predicted score that the user is traveling in the vehicle.
20. The method of any of claims 13-19, wherein determining the prediction that the user is traveling in the vehicle comprises:
generating the predicted score based on whether the computing device has received vehicle state data from the vehicle indicating that the user is traveling in the vehicle.
CN202180096633.5A 2021-08-24 2021-11-30 Actively activating an auto-assistant driving mode to obtain varying degrees of confidence in travel detection Pending CN117157504A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/236,584 2021-08-24
US17/533,380 US20230062489A1 (en) 2021-08-24 2021-11-23 Proactively activating automated assistant driving modes for varying degrees of travel detection confidence
US17/533,380 2021-11-23
PCT/US2021/061237 WO2023027751A1 (en) 2021-08-24 2021-11-30 Proactively activating automated assistant driving modes for varying degrees of travel detection confidence

Publications (1)

Publication Number Publication Date
CN117157504A 2023-12-01

Family

ID=88885285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180096633.5A Pending CN117157504A (en) 2021-08-24 2021-11-30 Actively activating an auto-assistant driving mode to obtain varying degrees of confidence in travel detection

Country Status (1)

Country Link
CN (1) CN117157504A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination