CN110109730B - Apparatus, method and graphical user interface for providing audiovisual feedback - Google Patents

Apparatus, method and graphical user interface for providing audiovisual feedback

Info

Publication number
CN110109730B
Authority
CN
China
Prior art keywords
user interface
display
interface object
sound output
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910417641.XA
Other languages
Chinese (zh)
Other versions
CN110109730A (en)
Inventor
M·I·布朗
A·E·西普林斯基
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 14/866,570 (granted as US 9,928,029 B2)
Application filed by Apple Inc
Priority to CN201910417641.XA
Publication of CN110109730A
Application granted
Publication of CN110109730B
Legal status: Active

Classifications

    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F 9/451: Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application relates to devices, methods and graphical user interfaces for providing audiovisual feedback. The electronic device provides data for presenting a user interface having a plurality of user interface objects. The current focus is on a first user interface object of the plurality of user interface objects. The device receives an input. In response, based on the direction and/or magnitude of the input, the device provides data for moving the current focus from the first user interface object to the second user interface object, and provides sound information to provide sound output concurrently with movement of the current focus from the first user interface object to the second user interface object. The pitch of the sound output is based on the size of the first user interface object, the type of the first user interface object, the size of the second user interface object, and/or the type of the second user interface object.

Description

Apparatus, method and graphical user interface for providing audiovisual feedback
This application is a divisional application of the patent application entitled "Apparatus, method and graphical user interface for providing audiovisual feedback," filed on August 15, 2016 with application number 201610670699.1.
Technical Field
This document relates generally to electronic devices that provide sound output, and more particularly, to electronic devices that provide sound output in conjunction with a graphical user interface.
Background
Many electronic devices use an audiovisual interface as a way of providing feedback about a user's interactions with the device. Conventional methods for providing audiovisual feedback are limited. For example, simple audiovisual feedback provides only limited information to the user. If an unintended operation is performed based on such simple audiovisual feedback, the user needs to provide additional input to cancel the operation. These methods therefore take longer than necessary, thereby wasting energy.
Disclosure of Invention
Accordingly, there is a need for an electronic device having a more efficient method and interface for providing audiovisual feedback. Such methods and interfaces optionally complement or replace conventional methods for providing audiovisual feedback. Such methods and interfaces reduce the number, extent, and/or nature of inputs from a user and result in a more efficient human-machine interface. Further, such an approach reduces processing power consumed to process touch inputs, saves power, reduces unnecessary/additional/duplicate inputs, and potentially reduces memory usage.
The above drawbacks and other problems associated with user interfaces for electronic devices having touch-sensitive surfaces are reduced or eliminated by the disclosed devices. In some embodiments, the device is a digital media player, such as the Apple TV® from Apple Inc. of Cupertino, California. In some embodiments, the device is a desktop computer. In some embodiments, the device is portable (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the device is a personal electronic device (e.g., a wearable electronic device such as a watch). In some embodiments, the device has a touch pad. In some embodiments, the device has a touch-sensitive display (also referred to as a "touch screen" or "touch screen display"). In some embodiments, the device has a Graphical User Interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing a plurality of functions. In some embodiments, the user interacts with the GUI primarily through a remote control (e.g., one or more buttons of the remote control and/or a touch-sensitive surface of the remote control). Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Alternatively or additionally, executable instructions for performing these functions may be included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
According to some embodiments, a method is performed at an electronic device having one or more processors and memory. The device communicates with a display and an audio system. The method includes providing data to a display for presenting a user interface currently generated by the device. The user interface includes a first user interface object having a first visual characteristic. The user interface further includes a second user interface object having a second visual characteristic different from the first user interface object. The apparatus provides sound information for providing a sound output to an audio system. The sound output includes a first audio component corresponding to a first user interface object. The sound output further includes a second audio component corresponding to the second user interface object and different from the first audio component. When the user interface is presented on the display and the sound output is provided, the device provides data for updating the user interface to the display and provides sound information for updating the sound output to the audio system. Updating the user interface and updating the sound output includes: changing at least one of the first visual characteristics of the first user interface object in conjunction with changing the first audio component corresponding to the first user interface object; and changing at least one of the second visual characteristics of the second user interface object in conjunction with changing the second audio component corresponding to the second user interface object. Providing data for updating the user interface occurs independent of user input.
According to some embodiments, a method is performed at an electronic device having one or more processors and memory. The device communicates with a display and an audio system. The method comprises the following steps: the method includes providing data to a display for presenting a user interface having a plurality of user interface objects including a control user interface object at a first location on the display. The control user interface object is configured to control a respective parameter. The method further comprises the steps of: a first input corresponding to a first interaction with a control user interface object on a display is received. The method further comprises the steps of: when a first input corresponding to a first interaction with a control user interface object on a display is received: providing data to the display for moving the control user interface object from a first location on the display to a second location on the display that is different from the first location on the display in accordance with the first input; and providing first sound information to the audio system for providing a first sound output having one or more characteristics that are different from corresponding parameters controlled by the control user interface object and that change in accordance with movement of the control user interface object from a first position on the display to a second position on the display.
According to some embodiments, a method is performed at an electronic device having one or more processors and memory. The device communicates with a display and an audio system. The method comprises the following steps: data for presenting a first user interface having a plurality of user interface objects is provided to a display, wherein a current focus is on a first user interface object of the plurality of user interface objects. The method further comprises the steps of: when the display is presenting the first user interface, an input corresponding to a request to change a position of a current focus in the first user interface is received, the input having a direction and an amplitude. The method further comprises the steps of: in response to receiving an input corresponding to a request to change a position of a current focus in the first user interface: providing data to the display for moving the current focus from the first user interface object to the second user interface object, wherein the second user interface object is selected for the current focus according to the direction and/or magnitude of the input; and providing first sound information to the audio system for providing a first sound output corresponding to movement of the current focus from the first user interface object to the second user interface object, wherein the first sound output is provided concurrently with display of the current focus moving from the first user interface object to the second user interface object, and a pitch of the first sound output is determined based at least in part on a size of the first user interface object, a type of the first user interface object, a size of the second user interface object, and/or a type of the second user interface object.
According to some embodiments, a method is performed at an electronic device having one or more processors and memory. The device communicates with a display and an audio system. The method comprises the following steps: the display is provided with data for presenting a first video information user interface including descriptive information about the first video. The method further comprises the steps of: the audio system is provided with sound information for providing a first sound output corresponding to the first video during presentation of the first video information user interface by the display. The method further comprises the steps of: an input corresponding to a request for playback of a first video is received while a display is presenting a first video information user interface that includes descriptive information about the first video. The method further comprises the steps of: in response to receiving an input corresponding to a request for playback of the first video, data is provided to the display for replacement of the presentation of the first video information user interface with playback of the first video. The method further comprises the steps of: during playback of a first video, input corresponding to a request to display a second video information user interface with respect to the first video is received. The method further includes, in response to receiving an input corresponding to a request to display a second video information user interface for the first video: the method includes providing data to the display for replacing playback of the first video with a second video information user interface related to the first video, and providing sound information to the audio system for providing a second sound output, different from the first sound output, corresponding to the first video during presentation of the second video information user interface by the display.
According to some embodiments, a method is performed at an electronic device having one or more processors and memory. The device communicates with a display. The method comprises the following steps: data is provided to a display that presents a first video. The method further comprises the steps of: receiving input corresponding to a user request to pause the first video while the display is presenting the first video; and in response to receiving an input corresponding to a user request to pause the first video, pause presentation of the first video at a first playback position in a timeline of the first video. The method further comprises the steps of: after suspending the presentation of the first video at the first playback position in the timeline of the first video and when the presentation of the first video is suspended, data for presenting a plurality of selected still images from the first video is provided to the display. The plurality of selected still images is selected based on a first playback position at which the first video is paused.
According to some embodiments, the electronic device is in communication with a display unit configured to display a user interface and an audio unit configured to provide a sound output. The apparatus includes a processing unit configured to provide the display unit with data for presenting a user interface generated by the device. The user interface includes a first user interface object having a first visual characteristic. The user interface further includes a second user interface object having a second visual characteristic different from the first user interface object. The device is configured to provide sound information to the audio unit for providing a sound output. The sound output includes a first audio component corresponding to the first user interface object. The sound output further includes a second audio component corresponding to the second user interface object and different from the first audio component. When the user interface is presented on the display unit and the sound output is provided by the audio unit, the device provides data for updating the user interface to the display unit and provides sound information for updating the sound output to the audio unit. Updating the user interface and updating the sound output includes: changing at least one of the first visual characteristics of the first user interface object in conjunction with changing the first audio component corresponding to the first user interface object; and changing at least one of the second visual characteristics of the second user interface object in conjunction with changing the second audio component corresponding to the second user interface object. Providing data for updating the user interface occurs independently of user input.
According to some embodiments, an electronic device communicates with a display unit configured to display a user interface, an audio unit configured to provide sound output, and an optional remote control unit (which optionally includes a touch-sensitive surface unit) configured to detect user inputs and send them to the electronic device. The device includes a processing unit configured to provide data to a display unit for presenting a user interface having a plurality of user interface objects including a control user interface object at a first location on the display unit. The control user interface object is configured to control a respective parameter. The processing unit is further configured to receive a first input corresponding to a first interaction on the display unit controlling the user interface object. The processing unit is further configured to: when receiving a first input corresponding to a first interaction of a control user interface object on a display unit: providing data to the display unit for moving the control user interface object from a first location on the display unit to a second location on the display unit that is different from the first location on the display unit in accordance with the first input; and providing first sound information to the audio unit for providing a first sound output having one or more characteristics that are different from corresponding parameters controlled by the control user interface object and that change in accordance with movement of the control user interface object from a first position on the display unit to a second position on the display unit.
According to some embodiments, an electronic device communicates with a display unit configured to display a user interface, an audio unit configured to provide sound output, and an optional remote control unit (which optionally includes a touch-sensitive surface unit) configured to detect user inputs and send them to the electronic device. The apparatus includes a processing unit configured to provide data for presenting a first user interface having a plurality of user interface objects to the display unit, wherein the current focus is on a first user interface object of the plurality of user interface objects. The processing unit is further configured to receive an input corresponding to a request for changing a position of a current focus in the first user interface when the display unit is presenting the first user interface, the input having a direction and an amplitude. The processing unit is further configured to: in response to receiving an input corresponding to a request to change a position of a current focus in the first user interface: providing data for moving the current focus from the first user interface object to a second user interface object to the display unit, wherein the second user interface object is selected for the current focus according to the direction and/or magnitude of the input; and providing first sound information to the audio unit for providing a first sound output corresponding to movement of the current focus from the first user interface object to the second user interface object, wherein the first sound output is provided concurrently with display of the current focus moving from the first user interface object to the second user interface object, and a pitch of the first sound output is determined based at least in part on a size of the first user interface object, a type of the first user interface object, a size of the second user interface object, and/or a type of the second user interface object.
According to some embodiments, an electronic device communicates with a display unit configured to display a user interface, an audio unit configured to provide sound output, and an optional remote control unit (which optionally includes a touch-sensitive surface unit) configured to detect user inputs and send them to the electronic device. The apparatus includes: a processing unit configured to provide data for presenting a first video information user interface comprising descriptive information about the first video to the display unit. The processing unit is further configured to provide sound information to the audio unit for providing a first sound output corresponding to the first video during presentation of the first video information user interface by the display unit. The processing unit is further configured to receive an input corresponding to a request for playback of the first video when the display unit presents a first video information user interface comprising descriptive information about the first video. The processing unit is further configured to provide, to the display unit, data for replacing the presentation of the first video information user interface with playback of the first video in response to receiving an input corresponding to a request for playback of the first video. The processing unit is further configured to receive, during playback of the first video, an input corresponding to a request for displaying a second video information user interface with respect to the first video. The processing unit is further configured to, in response to receiving an input corresponding to a request for displaying a second video information user interface regarding the first video: providing data to the display unit for replacing playback of the first video with a second video information user interface relating to the first video; and providing sound information to the audio unit for providing a second sound output, different from the first sound output, corresponding to the first video during presentation of the second video information user interface by the display unit.
According to some embodiments, an electronic device includes a processing unit. The electronic device communicates with the display unit. The display unit is configured to display video playback information. The processing unit is configured to provide data for presenting the first video to the display unit; receiving an input corresponding to a user request for pausing the first video while the display unit is presenting the first video; in response to receiving an input corresponding to a user request to pause the first video, pausing presentation of the first video at a first playback position in a timeline of the first video; and after suspending the presentation of the first video at the first playback position in the timeline of the first video and when the presentation of the first video is suspended, providing data for presenting a plurality of selected still images from the first video to the display unit, wherein the plurality of selected still images are selected based on the first playback position at which the first video is suspended.
According to some embodiments, an electronic device communicates with a display, an audio system, and optionally a remote control (which optionally includes a touch-sensitive surface). The electronic device includes one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing or causing the performance of the operations of any of the methods described herein. According to some embodiments, a computer-readable storage medium (e.g., a non-transitory computer-readable storage medium or alternatively a transitory computer-readable storage medium) has stored therein instructions that, when executed by an electronic device in communication with a display and an audio system, cause the device to perform or cause to perform the operations of any of the methods described herein. According to some embodiments, a graphical user interface on an electronic device with a display, a touch-sensitive surface, a memory, and one or more processors executing one or more programs stored in the memory includes one or more of the elements displayed in any of the methods described above updated in response to input as described in any of the methods described herein. According to some embodiments, an electronic device communicates with a display and an audio system. The electronic device comprises means for performing or causing to be performed the operations of any one of the methods described herein. According to some embodiments, an information processing apparatus for use in an electronic device in communication with a display and an audio system includes means for performing or causing the performance of the operations of any one of the methods described herein.
Accordingly, electronic devices in communication with displays and audio systems are provided with improved methods and interfaces for providing audiovisual feedback, thereby increasing the effectiveness, efficiency and user satisfaction with such devices. Such methods and interfaces may supplement or replace conventional methods for providing audiovisual feedback.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following description of the embodiments taken in conjunction with the accompanying drawings, in which like reference numerals refer to corresponding parts throughout the drawings.
FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
FIG. 1B is a block diagram illustrating exemplary components for event handling according to some embodiments.
Fig. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device in accordance with some embodiments.
FIG. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display in accordance with some embodiments.
FIG. 4C illustrates an exemplary electronic device in communication with a display and a touch-sensitive surface, where the display and/or the touch-sensitive surface are integrated into the electronic device for at least a subset of the electronic devices, in accordance with some embodiments.
Fig. 5A-5SS illustrate exemplary user interfaces for providing audiovisual feedback in accordance with some embodiments.
Fig. 6A-6C are flowcharts illustrating methods of changing visual characteristics of a user interface object in connection with changing an audio component corresponding to the user interface object, according to some embodiments.
Fig. 7A-7D are flowcharts illustrating methods of providing sound information corresponding to user interactions with user interface objects, according to some embodiments.
Fig. 8A-8C are flowcharts illustrating methods of providing sound information corresponding to user interactions with user interface objects, according to some embodiments.
Fig. 9A-9C are flowcharts illustrating methods of providing sound information for a video information user interface according to some embodiments.
Fig. 10A-10B illustrate flowcharts of methods of providing audiovisual information while a video is in a pause state in accordance with some embodiments.
Fig. 11 is a functional block diagram of an electronic device according to some embodiments.
Fig. 12 is a functional block diagram of an electronic device according to some embodiments.
Fig. 13 is a functional block diagram of an electronic device according to some embodiments.
Detailed Description
Many electronic devices update a graphical user interface and provide audio feedback in response to user input. Conventional approaches provide the same simple audio feedback for many different user inputs. For example, the same audio feedback is provided in response to every user input corresponding to a request to move the current focus. Such simple audio feedback does not convey the context of the device's response. If the user does not fully understand the context of the interaction, the user may perform unintended operations. Unintended operations can be frustrating for the user. In addition, the user must cancel such unintended operations and provide user input again until the desired operation is performed, which can be cumbersome and inefficient.
In some embodiments described below, an improved method for providing audio feedback includes providing data for presenting a user interface with a control user interface object, such as the slider (thumb) of a slider bar. When an input is received, data is provided for moving the control user interface object, and sound information is provided for a sound output having characteristics that change as the control user interface object moves. Thus, the characteristics of the sound output indicate the movement of the control user interface object.
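For illustration only (this sketch is not part of the patent disclosure), the following Swift code shows one way a device could derive sound-output characteristics from the position of a control user interface object such as a slider thumb; the type and function names and the specific pitch/volume mapping are assumptions chosen for the example.

import Foundation

// Parameters of the feedback sound that accompanies movement of a control.
struct SoundOutput {
    var pitch: Double   // frequency in Hz
    var volume: Double  // 0.0 ... 1.0
}

// Maps the slider thumb's normalized position (0.0 ... 1.0 along the slider bar)
// to sound characteristics, so the audio feedback tracks the movement of the
// control itself rather than the parameter that the control adjusts.
func feedbackSound(forThumbPosition position: Double) -> SoundOutput {
    let clamped = min(max(position, 0.0), 1.0)
    // Illustrative mapping: pitch rises from 220 Hz to 880 Hz as the thumb
    // moves from the start to the end of the slider bar.
    let pitch = 220.0 + (880.0 - 220.0) * clamped
    // Illustrative mapping: the sound gets slightly louder near either end.
    let volume = 0.3 + 0.2 * abs(clamped - 0.5) * 2.0
    return SoundOutput(pitch: pitch, volume: volume)
}

A caller would recompute feedbackSound(forThumbPosition:) each time movement data is provided for the control and pass the result to whatever audio engine the device uses.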
Additionally, in some other embodiments described below, an improved method for providing audio feedback includes providing data for presenting a user interface having a plurality of icons, wherein a current focus is on a first icon. In response to receiving an input, data is provided for moving the current focus to a second icon, and sound information is provided for a sound output whose pitch is determined based on a size or type of the first icon and/or a size or type of the second icon.
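Again purely as an illustrative sketch (the icon types, sizes, and pitch values below are assumptions, not values from the disclosure), the pitch selection described above could be expressed in Swift as follows.

import Foundation

// Illustrative user interface object types that can hold the current focus.
enum IconType {
    case applicationIcon
    case moviePoster
    case albumCover
}

struct Icon {
    var type: IconType
    var area: Double   // on-screen size of the icon, e.g. in points squared
}

// Determines the pitch of the sound output that accompanies movement of the
// current focus: the icon type selects a base pitch, and a larger destination
// icon (relative to the source icon) lowers the pitch.
func focusMoveSoundPitch(from first: Icon, to second: Icon) -> Double {
    let basePitch: Double
    switch second.type {
    case .applicationIcon: basePitch = 660.0
    case .moviePoster:     basePitch = 440.0
    case .albumCover:      basePitch = 550.0
    }
    let sizeRatio = second.area / max(first.area, 1.0)
    return basePitch / max(sizeRatio, 0.25).squareRoot()
}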
In addition, conventional methods for pausing a video present a single image of the video at the position where playback was paused. A user who pauses playback of a video and returns at a later time to resume it has only limited information about the context around that position. Thus, after playback of the video resumes, the user may take some time to understand the context of the video.
In some embodiments described below, an improved method for pausing playback of video includes providing data for presenting a plurality of still images from the video when playback of the video is paused. The plurality of still images from the video facilitates a user to understand the context of the video around where playback of the video is paused, even before playback of the video is resumed. Thus, the user can understand the context of the video shortly after playback of the video is resumed.
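A minimal sketch of one possible selection rule, assuming the still images are sampled evenly from a window centered on the pause position (the window length, image count, and function name are illustrative assumptions, not taken from the disclosure):

import Foundation

// Picks timestamps (in seconds) for representative still images from a window
// around the position at which playback was paused, clipped to the timeline.
func stillImageTimestamps(pausedAt position: Double,
                          duration: Double,
                          count: Int = 5,
                          window: Double = 60.0) -> [Double] {
    let start = max(position - window / 2.0, 0.0)
    let end = min(position + window / 2.0, duration)
    guard count > 1, end > start else { return [position] }
    let step = (end - start) / Double(count - 1)
    return (0..<count).map { start + Double($0) * step }
}

// Example: a two-hour video paused 30 minutes in yields five timestamps
// spread over one minute around the pause position.
let timestamps = stillImageTimestamps(pausedAt: 1800, duration: 7200)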
Moreover, conventional methods for presenting a video information user interface include providing a single sound output regardless of whether playback of the video has been initiated (e.g., whether the user has returned to the video information user interface after viewing at least a portion of the video). Thus, the sound output provides only limited fixed information about the video.
In some embodiments described below, an improved method for presenting a video information user interface includes providing, after playback of the video has been initiated, a sound output that is different from the normal (stock) sound output, so that the sound output can convey additional information, such as the mood at the point where playback of the video was interrupted.
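As a hedged sketch of this behavior (the sound names and the notion of deriving a "mood" value from the interruption point are illustrative assumptions), the selection could look like this in Swift:

import Foundation

// The sound output chosen for the video information user interface.
enum VideoInfoSound {
    case standardTheme                        // playback has not yet been initiated
    case resumedTheme(moodIntensity: Double)  // user returned after interrupting playback
}

// Chooses the sound output for the video information user interface. Before
// playback has started, the normal ("stock") sound is used; after the user
// interrupts playback, a different output can reflect the point of interruption.
func soundForVideoInfoUI(playbackWasInitiated: Bool,
                         pausedAtFraction: Double) -> VideoInfoSound {
    guard playbackWasInitiated else { return .standardTheme }
    // Illustrative: use how far into the video the user stopped as a rough
    // "mood" signal, e.g. to pick a more tense variant near the climax.
    return .resumedTheme(moodIntensity: min(max(pausedAtFraction, 0.0), 1.0))
}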
Moreover, conventional methods for presenting screensavers include presenting video. However, the screen saver does not include sound output or includes limited sound output.
In some embodiments described below, an improved method for presenting a screensaver includes providing a sound output that includes an audio component corresponding to a user interface object displayed in the screensaver. Thus, the sound output can audibly convey additional information, such as the visual characteristics of the displayed user interface objects and changes in the state of the screensaver.
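A small Swift sketch of this idea, under the assumption that each displayed screensaver object contributes one audio component whose volume and pitch follow that object's visual state (all names and numeric mappings here are illustrative, not from the disclosure):

import Foundation

struct ScreensaverObject {
    var identifier: String
    var brightness: Double   // 0.0 ... 1.0, a visual characteristic of the object
    var scale: Double        // current zoom factor of the object
}

struct AudioComponent {
    var objectIdentifier: String
    var volume: Double
    var pitch: Double
}

// Builds one audio component per displayed screensaver object, so the combined
// sound output audibly tracks each object's visual state as it changes.
func audioComponents(for objects: [ScreensaverObject]) -> [AudioComponent] {
    objects.map { object in
        AudioComponent(
            objectIdentifier: object.identifier,
            volume: 0.2 + 0.6 * object.brightness,   // brighter objects are louder
            pitch: 330.0 * max(object.scale, 0.1)    // larger objects get a higher pitch
        )
    }
}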
In the following, Figs. 1A-1B, 2, and 3 provide a description of exemplary devices. Figs. 4A-4C and 5A-5SS illustrate user interfaces for providing audio feedback. Figs. 6A-6C illustrate a flowchart of a method of changing visual characteristics of a user interface in conjunction with changing an audio component corresponding to a user interface object, according to some embodiments. Figs. 7A-7D illustrate a flowchart of a method of providing sound output information corresponding to a user's interaction with a user interface object, according to some embodiments. Figs. 8A-8C illustrate a flowchart of a method of providing sound output information corresponding to user interactions with user interface objects, according to some embodiments. Figs. 9A-9C illustrate flowcharts of a method of providing sound output for a video information user interface. Figs. 10A-10B illustrate flowcharts of a method of providing audiovisual information while a video is in a paused state. The user interfaces in Figs. 5A-5SS are used to illustrate the processes in Figs. 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B.
Exemplary apparatus
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. It will be apparent, however, to one skilled in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will be further understood that, although the terms first, second, etc. may be used herein to describe various elements in some examples, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first user interface object may be referred to as a second user interface object, and similarly, a second user interface object may be referred to as a first user interface object without departing from the scope of the various described embodiments. The first user interface object and the second user interface object are both user interface objects, but they are not the same user interface object unless the context clearly indicates otherwise.
The terminology used in the description of the various described embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" may be interpreted to mean "when …" or "once …" or "in response to a determination" or "in response to detection" depending on the context. Similarly, the phrase "if a determination" or "if a [ stated condition or event ] is detected" is optionally interpreted to mean "upon determination" or "in response to determination" or "upon detection of a [ stated condition or event ]" or "in response to detection of a [ stated condition or event ]" depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a digital media player, such as the Apple TV® from Apple Inc. of Cupertino, California. In some embodiments, the device is a portable communication device (such as a mobile phone) that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, but are not limited to, the iPhone®, iPod touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptop or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are optionally used. It should also be appreciated that, in some embodiments, the device is not a portable communication device but a desktop computer. In some embodiments, the desktop computer has a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
In the following discussion, an electronic device is described that communicates with and/or includes a display and a touch-sensitive surface. However, it should be understood that the electronic device may alternatively include one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications, such as one or more of the following: note applications, drawing applications, presentation applications, word processing applications, website creation applications, disc authoring applications, spreadsheet applications, gaming applications, telephony applications, video conferencing applications, email applications, instant messaging applications, workout support applications, photo management applications, digital camera applications, digital video recorder applications, web browsing applications, digital music player applications, and/or digital video player applications.
The various applications executing on the device optionally use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or changed from one application to the next and/or within a given application. In this manner, a common physical architecture of the device (such as the touch-sensitive surface) optionally supports a variety of applications with user interfaces that are intuitive and transparent to the user.
Attention is now directed to an embodiment of a portable device having a touch sensitive display. Fig. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes referred to as a "touch screen" for convenience and is sometimes referred to simply as a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more non-transitory computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external ports 124. The device 100 optionally includes one or more optical sensors 164. The device 100 optionally includes one or more intensity sensors 165 for detecting intensity of a contact on the device 100 (e.g., a touch-sensitive surface, such as the touch-sensitive display system 112 of the device 100). Device 100 optionally includes one or more haptic output generators 167 for generating haptic outputs on device 100 (e.g., on a touch-sensitive surface, such as touch-sensitive display system 112 of device 100 or touch pad 335 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in the specification and claims, the term "haptic output" refers to a previously positioned physical displacement of a device relative to the device, a physical displacement of a component of the device (e.g., a touch sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a centroid of the device that is to be detected by a user with a user's feel. For example, in the case where the device or component of the device is in contact with a surface of a touch-sensitive user (e.g., a finger, palm, or other portion of the user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a haptic sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or touch pad) is optionally interpreted by a user as a "down click" or "up click" of a physical actuator button. In some cases, the user will feel a tactile sensation, such as a "down click" or an "up click", even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement. As another example, movement of the touch-sensitive surface is optionally interpreted or perceived by the user as "roughness" of the touch-sensitive surface even when there is no change in the smoothing of the touch-sensitive surface. While such interpretation of touches by a user is affected by the user's personalized sensory perception, there are many sensory perceptions of touches that are common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "click up", "click down", "roughness"), unless otherwise specified, the haptic output generated corresponds to a physical displacement of the device or component thereof that will generate the described sensory perception for a typical (or average) user.
It should be appreciated that the device 100 is only one example of a portable multifunction device and that the device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of components. The various components shown in fig. 1A are implemented in hardware, software, firmware, or a combination thereof (including one or more signal processing and/or application specific integrated circuits).
Memory 102 optionally includes high-speed random access memory, and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory 102 by other components of the device 100, such as the CPU(s) 120 and the peripheral interface 118, is optionally controlled by a memory controller 122.
The peripheral interface 118 can be used to couple input and output peripheral devices of the device to the CPU(s) 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions for the device 100 and process data.
In some embodiments, peripheral interface 118, CPU(s) 120, and memory controller 122 may be implemented on a single chip (such as chip 104). In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including, but not limited to: an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates by wireless communication with networks, such as the Internet (also known as the World Wide Web (WWW)), an intranet, and/or a wireless network, such as a cellular telephone network, a wireless Local Area Network (LAN), and/or a Metropolitan Area Network (MAN), and with other devices. The wireless communication optionally uses any of a variety of communication standards, protocols, and technologies, including, but not limited to: Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between the user and device 100. Audio circuitry 110 receives audio data from peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to speaker 111. The speaker 111 converts the electrical signal into sound waves audible to humans. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuitry 110 converts the electrical signals into audio data and transmits the audio data to the peripheral interface 118 for processing. The audio data is optionally retrieved from memory 102 and/or RF circuitry 108 and/or transferred to memory 102 and/or RF circuitry 108 through peripheral interface 118. In some embodiments, audio circuit 110 also includes a headphone jack (e.g., 212, fig. 2). The headphone jack provides an interface between the audio circuit 110 and removable audio input/output peripherals such as output-only headphones or headphones that can both output (e.g., monaural or binaural headphones) and input (e.g., microphones).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as the touch screen 112 and other input or control devices 116, to the peripheral device interface 118. I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive electrical signals from other input or control devices 116/send electrical signals to other input or control devices 116. Other input or control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click dials, and the like. In some alternative embodiments, input controller(s) 160 are optionally coupled with any (or none) of: a keyboard, an infrared port, a USB port, a stylus, and a pointing device such as a mouse. One or more buttons (e.g., 208, fig. 2) optionally include up/down buttons for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206, fig. 2).
The touch sensitive display system 112 provides an input interface and an output interface between the device and the user. The display controller 156 receives electrical signals from the touch sensitive display system 112 and/or transmits electrical signals to the touch sensitive display system 112. The touch sensitive display system 112 displays visual output to a user. The visual output optionally includes graphics, text, icons, video, and any combination of the foregoing (collectively, "graphics"). In some embodiments, some or all of the visual output corresponds to a user interface object.
The touch-sensitive display system 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from a user based on haptic and/or tactile contact. The touch-sensitive display system 112 and the display controller 156 (along with any associated modules and/or sets of instructions in the memory 102) detect contact (and any movement or breaking of the contact) on the touch-sensitive display system 112 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on the touch-sensitive display system 112. In some embodiments, the point of contact between the touch-sensitive display system 112 and the user corresponds to a user's finger or stylus.
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking of the contact using any of a variety of touch sensing technologies now known or later developed, including but not limited to: capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch-sensitive display system 112. In some embodiments, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod touch®, and iPad® from Apple Inc. of Cupertino, California.
The touch sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen video resolution exceeds 400dpi (e.g., 500dpi, 800dpi, or greater). The user optionally makes contact with the touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, finger, or the like. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which may be less accurate than stylus-based inputs due to the larger contact area of the finger on the touch screen. In some embodiments, the device translates the coarse finger-based input into a precise pointer/cursor location or command for performing the action desired by the user.
In some embodiments, the device 100 optionally includes a touch pad (not shown) for activating or deactivating particular functions in addition to the touch screen. In some embodiments, the touch pad is a touch sensitive area of the device that does not display visual output unlike a touch screen. The touch pad is optionally an extension of a touch sensitive surface separate from the touch sensitive display system 112 or formed by a touch screen.
The device 100 also includes a power system 162 for powering the various components. The power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in the portable device.
The device 100 optionally further comprises one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The optical sensor(s) 164 optionally include a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor(s) 164 receive light from the environment projected through one or more lenses and convert the light into data representing an image. In conjunction with the imaging module 143 (also referred to as a camera module), the optical sensor(s) 164 optionally capture still images or video. In some embodiments, the optical sensor is located on the back of the device 100, opposite the touch sensitive display system 112 on the front of the device, thereby enabling the touch screen display to function as a viewfinder for still and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device such that an image of the user is acquired (e.g., for self-timer shooting, for while the user is viewing other video conference participants on the touch screen display, etc.).
The device 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The contact intensity sensor(s) 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). The contact intensity sensor(s) 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of the device 100, opposite the touch-sensitive display system 112 located on the front of the device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is coupled to the input controller 160 in the I/O subsystem 106. In some embodiments, the proximity sensor is turned off and the touch-sensitive display system 112 is disabled when the multifunction device is near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally also includes one or more haptic output generators 167. FIG. 1A illustrates a haptic output generator coupled to a haptic feedback controller 161 in I/O subsystem 106. The haptic output generator(s) 167 optionally include one or more electroacoustic devices (such as speakers or other audio components) and/or electromechanical devices that convert energy into linear motion (such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other haptic output generating components (e.g., components that convert electrical signals into haptic outputs on the device)). In some embodiments, haptic output generator(s) 167 receive haptic feedback generation instructions from haptic feedback module 133 and generate haptic outputs on device 100 that can be felt by a user of device 100. In some embodiments, at least one haptic output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates a haptic output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 100) or laterally (back and forth in the same plane as the surface of device 100). In some embodiments, at least one haptic output generator is located on the back of device 100, opposite touch-sensitive display system 112 located on the front of device 100.
The device 100 optionally further includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled to input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. In addition to accelerometer(s) 168, device 100 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information regarding the position and orientation (e.g., portrait or landscape) of device 100.
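As a hedged illustration of the portrait/landscape decision described above, the sketch below picks a display orientation from the accelerometer's gravity components. The threshold value, type names, and hysteresis behavior are assumptions for the example only, not the device's actual logic.

```swift
// Hypothetical sketch: choose portrait vs. landscape from accelerometer data.
enum InterfaceOrientation {
    case portrait
    case landscape
}

/// Decides the display orientation from gravity components (in g) reported by
/// the accelerometer. When neither axis dominates (e.g., the device is lying
/// flat), the current orientation is kept.
func orientation(gravityX: Double, gravityY: Double,
                 current: InterfaceOrientation) -> InterfaceOrientation {
    let threshold = 0.5  // assumed threshold to avoid flip-flopping near 45 degrees
    if abs(gravityY) > abs(gravityX), abs(gravityY) > threshold {
        return .portrait
    }
    if abs(gravityX) > abs(gravityY), abs(gravityX) > threshold {
        return .landscape
    }
    return current
}
```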
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or instruction set) 128, a contact/motion module (or instruction set) 130, a graphics module (or instruction set) 132, a haptic feedback module (or instruction set) 133, a text input module (or instruction set) 134, a Global Positioning System (GPS) module (or instruction set) 135, and an application (or instruction set) 136. Further, as shown in fig. 1A and 3, in some embodiments, memory 102 (fig. 1A) or memory 370 (fig. 3) stores device/global internal state 157. The device/global internal state 157 includes one or more of the following: active application state, indicating the currently active application (if any); display status, indicating which applications, views, or other information occupy various areas of the touch-sensitive display system 112; sensor status, including information obtained from various sensors of the device and other input or control devices 116; and position and/or location information related to the position and/or posture of the device.
Operating system 126 (e.g., iOS, darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.), and facilitates communication between the various hardware and software components.
The communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received through the RF circuitry 108 and/or the external port 124. The external port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly through a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, similar to, and/or compatible with the 30-pin connector used in iPhone, iPod touch, and iPad devices from Apple Inc. of Cupertino, California. In some embodiments, the external port is a Lightning connector that is the same as, similar to, and/or compatible with the Lightning connector used in iPhone, iPod touch, and iPad devices from Apple Inc. of Cupertino, California.
The contact/motion module 130 optionally detects contact with the touch-sensitive display system 112 (in conjunction with the display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). The contact/motion module 130 includes various software components for performing various operations related to detection of a contact (e.g., by a finger or by a stylus), such as determining whether a contact has occurred (e.g., detecting a finger-down event), determining the intensity of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-drag events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the contact (which is represented by a series of contact data) optionally includes determining a speed (magnitude), a velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the contact. These operations are optionally applied to a single contact (e.g., a one-finger contact or a stylus contact) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touchpad.
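The distinction above between speed (magnitude) and velocity (magnitude and direction) can be made concrete with a small sketch. This is not the module's actual implementation; the sample structure, units, and function name are assumed for illustration.

```swift
// A minimal sketch: derive speed and velocity from a series of contact samples.
struct ContactSample {
    let x: Double          // position in points
    let y: Double
    let timestamp: Double  // seconds
}

struct ContactMotion {
    let speed: Double                       // points per second (magnitude only)
    let velocity: (dx: Double, dy: Double)  // points per second along each axis
}

/// Computes motion between the first and last samples of a contact, or nil
/// when fewer than two samples (or no elapsed time) are available.
func motion(of samples: [ContactSample]) -> ContactMotion? {
    guard let first = samples.first, let last = samples.last,
          last.timestamp > first.timestamp else { return nil }
    let dt = last.timestamp - first.timestamp
    let dx = (last.x - first.x) / dt
    let dy = (last.y - first.y) / dt
    return ContactMotion(speed: (dx * dx + dy * dy).squareRoot(),
                         velocity: (dx, dy))
}
```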
The contact/motion module 130 optionally detects gestures input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different movements, timings, and/or intensities of detected contacts). Thus, the gesture is optionally detected by detecting a specific contact pattern. For example, detecting a finger tap gesture includes: a finger down event is detected followed by a finger up (lift) event at the same location (or substantially the same location) as the finger down event (e.g., at the location of the icon). As another example, detecting a finger swipe (swipe) gesture on a touch surface includes: a finger down event is detected, followed by one or more finger drag events and then followed by a finger up (lift) event. Similarly, tap, swipe, drag, and other gestures are optionally detected for the stylus by detecting a particular contact pattern for the stylus.
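To illustrate detecting a gesture from its contact pattern, the sketch below classifies a finished sequence of finger-down, finger-drag, and finger-up events as a tap or a swipe. The event representation and the movement threshold are hypothetical and only meant to mirror the description above, not to reproduce the contact/motion module 130.

```swift
// Hedged illustration: classify a gesture from a finger-event contact pattern.
enum FingerEvent {
    case down(x: Double, y: Double)
    case drag(x: Double, y: Double)
    case up(x: Double, y: Double)
}

enum Gesture {
    case tap
    case swipe
    case none
}

/// Classifies a completed sequence of finger events: little movement between
/// the down and up locations is a tap; movement beyond a threshold is a swipe.
func classify(_ events: [FingerEvent], movementThreshold: Double = 10) -> Gesture {
    guard case let .down(x: startX, y: startY)? = events.first,
          case let .up(x: endX, y: endY)? = events.last else { return .none }
    let distance = ((endX - startX) * (endX - startX) +
                    (endY - startY) * (endY - startY)).squareRoot()
    return distance < movementThreshold ? .tap : .swipe
}
```

A stylus gesture could be classified the same way under these assumptions, simply by feeding the stylus contact pattern through the same function.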
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other displays, including components for changing the visual effects (e.g., brightness, transparency, saturation, contrast, or other visual properties) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including, but not limited to: text, web pages, icons (such as user interface objects including soft keys), digital images, video, animations, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphics module 132 receives one or more codes specifying graphics to be displayed from an application or the like along with (if needed) coordinate data and other graphics attribute data, and then generates screen image data for output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions (e.g., instructions used by haptic feedback controller 161) used by haptic output generator 167 to generate haptic output at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134 (which is optionally a component of graphics module 132) provides a soft keyboard for entering text into various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application requiring text input).
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for use in location-based dialing, to the camera 143 as picture/video metadata, and to applications providing location-based services such as weather widgets, local page widgets, and map/navigation widgets).
The application 136 optionally includes the following modules (or instruction sets) or a subset or superset thereof:
a contacts module 137 (sometimes referred to as an address book or contact list);
a telephone module 138;
video conferencing module 139;
email client module 140;
an Instant Messaging (IM) module 141;
exercise support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
browser module 147;
calendar module 148;
a widget module 149 optionally including one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
a widget creator module 150 for making user-created widgets 149-6;
search module 151;
a video and music player module 152, optionally consisting of a video player module and a music player module;
a memo module 153;
map module 154; and/or
An online video module 155.
Examples of other applications 136 that may optionally be stored in the memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with the touch sensitive display system 112, the display controller 156, the contact/motion module 130, the graphics module 132, and the text input module 134, the contact module 137 includes executable instructions to manage an address book or contact list (e.g., stored in the application internal state 192 of the contact module 137 in memory 102 or memory 370), including: adding names to an address book; deleting names from the address book; associating a telephone number, email address, physical address, or other information with the name; associating the image with a name; classifying and sorting names; a telephone number and/or email address is provided to initiate and/or facilitate communication via telephone 138, video conference 139, email 140, or IM 141, etc.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephony module 138 includes executable instructions to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in the address book 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As described above, wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contact list 137, and telephony module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a videoconference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, touch module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions to create, send, receive, and manage emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send emails with still or video images captured with the camera module 143.
In conjunction with the RF circuitry 108, the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, and the text input module 134, the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit the corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for a phone-based instant message, or using XMPP, SIMPLE, apple push notification service (APN) or IMPS for an internet-based instant message), to receive the instant message, and to view the received instant message. In some embodiments, the transmitted and/or received instant messages optionally include graphics, photographs, audio files, video files, and/or other attachments as supported in an MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephone-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, APN or IMPS).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module 146, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie-burning goals); communicate with workout sensors (in sports devices and smart watches); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
In conjunction with the touch sensitive display system 112, the display controller 156, the optical sensor 164, the optical sensor controller 158, the contact/motion module 130, the graphics module 132, and the image management module 144, the camera module 143 includes executable instructions to capture still images or video (including video streams) and store them in the memory 102, modify characteristics of the still images or video, and/or delete the still images or video from the memory 102.
In conjunction with the touch-sensitive display system 112, the display controller 156, the contact/motion module 130, the graphics module 132, the text input module 134, and the camera module 143, the image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, mark, delete, present (e.g., in a digital slide presentation or album), and store still and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, touch module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the internet (including searching, linking to, receiving, and displaying web pages or portions of web pages, and attachments and other files linked to web pages) according to user instructions.
In conjunction with the RF circuitry 108, the touch-sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the text input module 134, the email client module 140, and the browser module 147, the calendar module 148 includes executable instructions to create, display, modify, and store a calendar and data associated with the calendar (e.g., calendar entries, to-do lists, etc.) according to user instructions.
In conjunction with the RF circuitry 108, the touch-sensitive display system 112, the display system controller 156, the contact/motion module 130, the graphics module 132, the text input module 134, and the browser module 147, the widget modules 149 are mini-applications that are optionally downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, widget creator module 150 includes executable instructions to create a widget (e.g., to convert a user-specified portion of a web page into a widget).
In conjunction with the touch sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, and the text input module 134, the search module 151 includes executable instructions to search text, music, sound, images, video, and/or other files in the memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with the touch-sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the audio circuitry 110, the speaker 111, the RF circuitry 108, and the browser module 147, the video and music player module 152 includes executable instructions that allow a user to download and play back recorded music and other sound files stored in one or more file formats (such as MP3 or AAC files), and includes executable instructions to display, present, or otherwise play back videos (e.g., on the touch-sensitive display system 112 or on a display connected wirelessly or via the external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with the touch-sensitive display system 112, the display controller 156, the contact module 130, the graphics module 132, and the text input module 134, the memo module 153 includes executable instructions to create and manage memos, to-do lists, and the like, in accordance with user instructions.
In conjunction with the RF circuitry 108, the touch-sensitive display circuitry 112, the display controller 156, the contact module 130, the graphics module 132, the text input module 134, the GPS module 135, and the browser module 147, the map module 154 includes executable instructions to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data regarding shops and other points of interest at or near a particular location; and other location-based data) according to user instructions.
In conjunction with the touch sensitive display system 112, the display system controller 156, the contact module 130, the graphics module 132, the audio circuit 110, the speaker 111, the RF circuit 108, the text input module 134, the email client module 140, and the browser module 147, the online video module 155 includes executable instructions that allow a user to access, browse, receive (e.g., through streaming and/or downloading), play back a particular online video (e.g., on a touch screen or on a display connected wirelessly or externally via the external port 124), send an email with a link to the particular online video, and otherwise manage online video in one or more file formats such as h.264. In some embodiments, the instant messaging module 141 is used to send links to particular online videos instead of the email client module 140.
Each of the above-described modules and applications corresponds to a set of instructions for performing one or more of the functions described above, as well as the methods described in the present application (e.g., the computer-implemented methods described herein, as well as other information processing methods). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may optionally be combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, the memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which operations of a predetermined set of functions on the device are performed exclusively through a touch screen and/or touch pad. By using a touch screen and/or touch pad as the primary input control device for operation of the device 100, the number of physical input control devices (such as push buttons, dials, etc.) on the device 100 is optionally reduced.
The predetermined set of functions performed exclusively through the touch screen and/or the touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates the device 100 to a home screen or root menu from any user interface that is displayed on the device 100. In such embodiments, a "menu button" is implemented using the touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
FIG. 1B is a block diagram illustrating exemplary components for event handling in accordance with some embodiments. In some embodiments, memory 102 (in FIG. 1A) or memory 370 (FIG. 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).
The event classifier 170 receives event information and determines the application 136-1 and the application view 191 to which the application 136-1 delivers the event information. Event classifier 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, the application 136-1 includes an application internal state 192 that indicates the current application view(s) displayed on the touch-sensitive display system 112 when the application is active or executing. In some embodiments, the device/global internal state 157 is used by the event classifier 170 to determine those applications that are currently active, and the application internal state 192 is used by the event classifier 170 to determine the application view 191 to which to deliver event information.
In some embodiments, the application internal state 192 includes additional information, such as one or more of the following: restoration information to be used when the application 136-1 resumes execution, user interface state information indicating information that the application 136-1 is displaying or ready for display, and a state queue for previous states or views of the application 136-1 and a redo/undo queue for previous actions performed by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about sub-events (e.g., user touches on the touch sensitive display system 112 as part of a multi-touch gesture). Peripheral interface 118 transmits information it receives from I/O subsystem 106 or sensors, such as proximity sensor 166, accelerometer 168, and/or microphone 113 (via audio circuitry 110). The information received by the peripheral interface 118 from the I/O subsystem 106 includes information from the touch-sensitive display system 112 or touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, the peripheral interface 118 transmits event information. In other embodiments, the peripheral interface 118 transmits event information only if there is an important event (e.g., an input is received that exceeds a predetermined noise threshold and/or is longer than a predetermined duration).
In some embodiments, event classifier 170 also includes hit view determination module 172 and/or active event recognizer determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display system 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of a respective application) in which a touch is detected optionally corresponds to a programmatic level within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is optionally called the hit view, and the set of events recognized as proper inputs is optionally determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information regarding sub-events of touch-based gestures. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies the lowest level view in the hierarchy that should handle the sub-event as the hit view. In most cases, a hit view is the view of the lowest level in which the initiating sub-event (i.e., the first sub-event in the sequence of sub-events that forms an event or potential event) occurs. Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source that caused it to be identified as the hit view.
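A minimal model of hit-view determination, under the assumption that each view stores an absolute frame and a list of subviews, might look like the following. The View type and the containment test are illustrative only and are not the data structures used by hit view determination module 172.

```swift
// Simplified, hypothetical model: the hit view is the deepest view in the
// hierarchy whose frame contains the point of the initiating sub-event.
final class View {
    let name: String
    let frame: (x: Double, y: Double, width: Double, height: Double)
    var subviews: [View] = []

    init(name: String, frame: (x: Double, y: Double, width: Double, height: Double)) {
        self.name = name
        self.frame = frame
    }

    func contains(x: Double, y: Double) -> Bool {
        return x >= frame.x && x <= frame.x + frame.width &&
               y >= frame.y && y <= frame.y + frame.height
    }
}

/// Returns the deepest view rooted at `root` that contains the touch location
/// (the hit view), or nil when the touch falls outside `root` entirely.
func hitView(in root: View, x: Double, y: Double) -> View? {
    guard root.contains(x: x, y: y) else { return nil }
    // Prefer a subview that also contains the point; otherwise root is the hit view.
    for subview in root.subviews {
        if let hit = hitView(in: subview, x: x, y: y) {
            return hit
        }
    }
    return root
}
```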
The active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, the active event recognizer determination module 173 determines that only the hit view should receive the particular sequence of sub-events. In other embodiments, the active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive the particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
Event dispatcher module 174 dispatches event information to an event recognizer (e.g., event recognizer 180). In embodiments that include an active event recognizer determination module 173, the event dispatcher module 174 delivers event information to the event recognizers determined by the active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue, which is retrieved by the corresponding event receiver module 182.
In some embodiments, operating system 126 includes event classifier 170. Alternatively, application 136-1 includes event classifier 170. In still other embodiments, the event sorter 170 is a separate module or is part of another module stored in the memory 102 (such as the contact/motion module 130).
In some embodiments, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of a user interface of the application. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the corresponding application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface suite (not shown), or a higher-level object from which the application 136-1 inherits methods and other attributes. In some embodiments, each event handler 190 includes one or more of the following: the data updater 176, the object updater 177, the GUI updater 178, and/or event data 179 received from the event sorter 170. Event handler 190 optionally utilizes or invokes data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of application views 191 include one or more corresponding event handlers 190. Also, in some embodiments, one or more of the data updater 176, the object updater 177, and the GUI updater 178 are included in a respective application view 191.
The corresponding event identifier 180 receives event information (e.g., event data 179) from the event classifier 170 and identifies events based on the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event identifier 180 further includes at least a subset of: metadata 183 and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about sub-events (e.g., touches or touch movements). Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When the sub-event relates to movement of a touch, the event information also optionally includes the speed and direction of the sub-event. In some embodiments, the event includes a rotation of the device from one orientation to another (e.g., a rotation from portrait to landscape, and vice versa), and the event information includes corresponding information about a current orientation of the device (also referred to as a device pose).
The event comparator 184 compares the event information with predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event 187 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touches. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a drag on a displayed object. The drag, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across the touch-sensitive display system 112, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
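As a sketch of how an event comparator might test a sub-event sequence against the double-tap definition given above, the following checks for a begin/end/begin/end pattern in which each phase completes within a time limit. The sub-event representation and the timing threshold are assumptions, not taken from event definitions 186.

```swift
// Hedged sketch of matching a sub-event sequence against a "double tap" definition.
enum SubEvent {
    case touchBegin(time: Double)
    case touchEnd(time: Double)
}

/// Returns true when the sub-event sequence is begin/end/begin/end and each
/// phase completes within `maxPhaseDuration` seconds of the previous one.
func matchesDoubleTap(_ subEvents: [SubEvent], maxPhaseDuration: Double = 0.3) -> Bool {
    guard subEvents.count == 4,
          case let .touchBegin(time: t0) = subEvents[0],
          case let .touchEnd(time: t1) = subEvents[1],
          case let .touchBegin(time: t2) = subEvents[2],
          case let .touchEnd(time: t3) = subEvents[3] else { return false }
    let phases = [t1 - t0, t2 - t1, t3 - t2]
    return phases.allSatisfy { $0 >= 0 && $0 <= maxPhaseDuration }
}
```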
In some embodiments, the event definitions 187 include definitions of events for respective user interface objects. In some embodiments, event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view in which three user interface objects are displayed on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit test to determine which of the three user interface objects (if any) is associated with the touch (sub-event). If each display object is associated with a respective event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and object that triggered the hit test.
In some embodiments, the definition for each event 187 also includes a delayed action that delays delivery of the event information until it has been determined whether the sequence of sub-events corresponds to the event type of the event recognizer.
When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in the event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, each event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to the effectively involved event recognizer. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how or how event identifiers can interact with each other. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate whether sub-events are delivered to different levels in a view or program hierarchy.
In some embodiments, a respective event recognizer 180 activates the event handler 190 associated with an event when one or more particular sub-events of the event are recognized. In some embodiments, the respective event recognizer 180 delivers event information associated with the event to the event handler 190. Activating an event handler 190 is distinct from sending (or deferring the sending of) sub-events to a respective hit view. In some embodiments, the event recognizer 180 throws a flag associated with the recognized event, and the event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, the event delivery instructions 188 include sub-event delivery instructions that deliver event information about sub-events without activating the event handler. Instead, the sub-event delivery instruction delivers event information to an event handler associated with a series of sub-events or views that are effectively involved. An event handler associated with a series of sub-events or with a view that is effectively involved receives the event information and performs a predetermined procedure.
In some embodiments, the data updater 176 creates and updates data used in the application 136-1. For example, the data updater 176 updates a telephone number used in the contacts module 137, or stores a video file used in the video player module 145. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, the object updater 177 creates a new user interface object or updates the position of a user interface object. GUI updater 178 updates the GUI. For example, the GUI updater 178 prepares display information and sends it to the graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, the data updater 176, the object updater 177, and the GUI updater 178 are included in a single module of the respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs used to operate multifunction device 100 with input devices, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.
Fig. 2 illustrates a portable multifunction device 100 with a touch screen 112 (e.g., touch-sensitive display system 112, fig. 1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within a user interface (UI) 200. In these embodiments, as well as in other embodiments described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics (e.g., with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses (not drawn to scale in the figure)). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with the device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application.
The device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As previously described, menu button 204 is optionally used to navigate to any application 136 in the set of applications that are optionally executing on device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on a touch screen display.
In some embodiments, device 100 includes a touch screen display, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, Subscriber Identity Module (SIM) card slot 210, headphone jack 212, and docking/charging external port 124. The push button 206 is optionally used to turn the device power on/off by depressing the button and holding the button in the depressed state for a predetermined time interval; to lock the device by depressing the button and releasing the button before the predetermined time interval has elapsed; and/or to unlock the device or initiate an unlocking process. In some embodiments, the device 100 also accepts verbal input through the microphone 113 for activating or deactivating certain functions. The device 100 optionally further comprises one or more contact intensity sensors 165 for detecting the intensity of contacts on the touch-sensitive display system 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of the device 100.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming device, or a control device (e.g., a home or industrial controller). The device 300 generally includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components. The device 300 includes an input/output (I/O) interface 330 that includes a display 340, which is typically a touch screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355, a haptic output generator 357 (e.g., similar to haptic output generator 167 described above with reference to FIG. 1A) for generating haptic outputs on device 300, and sensors 359 (e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor 165 described above with reference to FIG. 1A). Memory 370 includes high-speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices remote from CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to those stored in memory 102 of portable multifunction device 100 (fig. 1A), or a subset thereof. Further, the memory 370 optionally stores additional programs, modules, and data structures that are not present in the memory 102 of the portable multifunction device 100. For example, the memory 370 of the device 300 optionally stores the drawing module 380, the presentation module 382, the word processing module 384, the website creation module 386, the disk authoring module 388, and/or the spreadsheet module 390, while the memory 102 of the portable multifunction device 100 (FIG. 1A) optionally does not store these modules.
Each of the above elements in fig. 3 may optionally be stored in one or more of the aforementioned memory devices. Each of the above-described modules corresponds to a set of instructions for performing the functions described above. The above-described modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may optionally be combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to an embodiment of a user interface ("UI") that may optionally be implemented on the portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface for a menu of applications on the portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
signal strength indicator(s) 402 for wireless communication(s), such as cellular signals and Wi-Fi signals;
Time 404;
a Bluetooth indicator;
a battery status indicator 406;
a tray 408 with icons for frequently used applications, such as:
icon 416 of phone module 138, labeled "phone," optionally includes an indicator 414 of the number of missed calls or voicemail messages;
an icon 418 of the email client module 140, labeled "mail," optionally including an indicator 410 of the number of unread emails;
icon 420 of browser module 147, labeled "browser"; and
the icon 422 of the video and music player module 152, also referred to as the iPod (trademark of Apple Inc.) module 152, is labeled "iPod"; and
icons of other applications, such as:
icon 424 of IM module 141, labeled "message";
icon 426 of calendar module 148, labeled "calendar";
icon 428 of image management module 144, labeled "photo";
icon 430 of camera module 143, labeled "camera";
icon 432 of online video module 155, labeled "online video";
icon 434 of stock widget 149-2, labeled "stock";
icon 436 of map module 154 is labeled "map";
Icon 438 of weather widget 149-1, labeled "weather";
icon 440 of alarm clock widget 149-4, labeled "clock";
icon 442 of exercise support module 142 is labeled "exercise support";
icon 444 of memo module 153, labeled "memo"; and
an icon 446 for a settings application or module, which provides access to the settings of the device 100 and its various applications 136.
It should be noted that the icon labels illustrated in fig. 4A are merely exemplary. For example, in some embodiments, the icon 422 of the video and music player module 152 is labeled "music" or "music player". Other labels are optionally used for various application icons. In some embodiments, the label of a respective application icon includes the name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is distinct from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 of fig. 3) having a touch-sensitive surface 451 (e.g., tablet computer or touchpad 355 of fig. 3) separate from display 450. The device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 357) for detecting contact intensity on the touch-sensitive surface 451 and/or one or more tactile output generators 359 for generating tactile outputs for a user of the device 300.
Many of the examples below will be given with reference to a device that detects an input on a touch-sensitive surface separate from a display, as shown in fig. 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to the primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). According to these embodiments, the device detects contact (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at a location corresponding to a respective location on the display (e.g., 460 corresponds to 468 and 462 corresponds to 470 in fig. 4B). In this way, when the touch-sensitive surface is separated from the display, user inputs (e.g., contacts 460 and 462 and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in FIG. 4B) are used by the device to manipulate a user interface on the display (e.g., 450 in FIG. 4B) of the multifunction device. It should be appreciated that similar approaches may alternatively be used for other user interfaces described herein.
Furthermore, while the following examples are primarily presented with respect to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures, etc.), it should be understood that in some embodiments one or more of the finger inputs may be replaced with an input from another input device (e.g., mouse-based input or stylus input) or another type of input (button press) on the same device. For example, the swipe gesture is optionally replaced with a mouse click (e.g., rather than a contact) followed by movement of the cursor along the swipe path (e.g., rather than movement of the contact). As another example, the tap gesture is optionally replaced with a mouse click when the cursor is located at the location of the tap gesture (e.g., instead of detecting a contact and then ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be appreciated that multiple computer mice may alternatively be used simultaneously, or that the mice and finger contacts may alternatively be used simultaneously.
Fig. 4C illustrates an exemplary electronic device in communication with display 450 and touch-sensitive surface 451. According to some embodiments, for at least a subset of the electronic devices, the display 450 and/or the touch-sensitive surface 451 are integrated into the electronic devices. While the examples described in more detail below are described with reference to touch-sensitive surface 451 and display 450 in communication with an electronic device (e.g., portable multifunction device 100 in fig. 1A-1B or device 300 in fig. 3), it should be understood that, according to some embodiments, the touch-sensitive surface and/or display are integrated with the electronic device, while in other embodiments, one or more of the touch-sensitive surface and display are separate from the electronic device. Further, in some embodiments, the electronic device has an integrated display and/or an integrated touch-sensitive surface and communicates with one or more additional displays and/or touch-sensitive surfaces separate from the electronic device.
In some embodiments, all of the operations described below with reference to fig. 5A-5SS, 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B are performed on a single electronic device (e.g., computing device A described below with reference to fig. 4C) having user interface navigation logic 480. However, it should be understood that multiple different electronic devices are often linked together to perform the operations described below with reference to fig. 5A-5SS, 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B (e.g., an electronic device with user interface navigation logic 480 communicates with a separate electronic device with display 450 and/or a separate electronic device with touch-sensitive surface 451). In any of these embodiments, the electronic device described below with reference to fig. 5A-5SS, 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B is the electronic device (or devices) containing the user interface navigation logic 480. Further, it should be appreciated that in various embodiments the user interface navigation logic 480 may be divided among a plurality of different modules or electronic devices; for purposes of the description herein, however, the user interface navigation logic 480 will be referred to primarily as residing in a single electronic device so as not to unnecessarily obscure other aspects of the embodiments.
In some embodiments, user interface navigation logic 480 includes one or more modules (e.g., one or more event handlers 190, including one or more object updaters 177 and one or more GUI updaters 178, as described in more detail above with reference to fig. 1B) that receive interpreted inputs and, in response to these interpreted inputs, generate instructions for updating a graphical user interface in accordance with the interpreted inputs, which are subsequently used to update the graphical user interface on the display. In some embodiments, an interpreted input is an input that has been detected (e.g., by the contact/motion module 130 in fig. 1A-1B and 3), recognized (e.g., by the event recognizer 180 in fig. 1B), and/or distributed (e.g., by the event sorter 170 in fig. 1B). In some embodiments, the interpreted inputs are generated by modules at the electronic device (e.g., the electronic device receives raw contact input data so as to identify gestures from the raw contact input data). In some embodiments, some or all of the interpreted inputs are received by the electronic device as interpreted inputs (e.g., an electronic device that includes the touch-sensitive surface 451 processes the raw contact input data so as to identify gestures from the raw contact input data and sends information indicative of the gestures to the electronic device that includes the user interface navigation logic 480).
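The separation described above, in which navigation logic consumes already-interpreted inputs and emits instructions that are then used to update the graphical user interface, can be sketched as a simple mapping. The types below are hypothetical and stand in for user interface navigation logic 480 only for illustration; they are not the patent's implementation.

```swift
// Illustrative sketch of the interpreted-input / UI-update-instruction split.
enum InterpretedInput {
    case swipeLeft
    case swipeRight
    case select
}

enum UIUpdateInstruction {
    case moveFocus(by: Int)     // shift the current focus left or right
    case activateFocusedItem
}

struct NavigationLogic {
    /// Maps an interpreted input (e.g., a gesture reported by a remote device)
    /// to the instruction used to update the GUI on the display.
    func instruction(for input: InterpretedInput) -> UIUpdateInstruction {
        switch input {
        case .swipeLeft:  return .moveFocus(by: -1)
        case .swipeRight: return .moveFocus(by: 1)
        case .select:     return .activateFocusedItem
        }
    }
}
```

Under these assumptions, the same NavigationLogic could run on computing device A through D of fig. 4C regardless of where the touch-sensitive surface and display physically reside, since it depends only on interpreted inputs.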
In some embodiments, both the display 450 and the touch-sensitive surface 451 are integrated with the electronic device (e.g., computing device A in fig. 4C) that contains the user interface navigation logic 480. For example, the electronic device may be a desktop or laptop computer with an integrated display (e.g., 340 in fig. 3) and touchpad (e.g., 355 in fig. 3). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, PDA, tablet computer, etc.) with a touch screen (e.g., 112 in fig. 2).
In some embodiments, touch-sensitive surface 451 is integrated with the electronic device, while display 450 is not integrated with the electronic device (e.g., computing device B in fig. 4C) that contains the user interface navigation logic 480. For example, the electronic device may be a device 300 (e.g., a desktop computer or laptop computer) with an integrated touchpad (e.g., 355 in fig. 3) connected (via a wired or wireless connection) to a separate display (e.g., a computer monitor, television, etc.). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, PDA, tablet computer, etc.) with a touch screen (e.g., 112 in fig. 2) connected (via a wired or wireless connection) to a separate display (e.g., a computer monitor, television, etc.).
In some embodiments, the display 450 is integrated with an electronic device, while the touch-sensitive surface 451 is not integrated with an electronic device (e.g., computing device C in fig. 4C) that contains the user interface navigation logic 480. For example, the electronic device may be a device 300 (e.g., desktop computer, laptop computer, television with integrated set-top box) with an integrated display (e.g., 340 in fig. 3) connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., remote touchpad, portable multifunction device, etc.). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smart phone, PDA, tablet, etc.) having a touch screen (e.g., 112 in fig. 2) connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touch pad, another portable multifunction device having a touch screen that serves as a remote touch pad, etc.).
In some embodiments, neither the display 450 nor the touch-sensitive surface 451 is integrated with an electronic device (e.g., computing device D in fig. 4C) that contains the user interface navigation logic 480. For example, the electronic device may be a separate electronic device 300 (e.g., a desktop computer, a laptop computer, a console, a set-top box, etc.) connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touch pad, a portable multifunction device, etc.) and a separate display (e.g., a computer monitor, a television, etc.). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smart phone, PDA, tablet, etc.) having a touch screen (e.g., 112 in fig. 2) connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touch pad, another portable multifunction device having a touch screen that serves as a remote touch pad, etc.).
In some embodiments, the computing device has an integrated audio system. In some embodiments, the computing device communicates with an audio system that is separate from the computing device. In some embodiments, an audio system (e.g., an audio system integrated in a television unit) is integrated with a separate display 450. In some embodiments, the audio system (e.g., stereo) is a separate system from the computing device and display 450.
User interface and associated process
Attention is now directed to embodiments of a user interface ("UI") and associated processes that may be implemented with and/or include an electronic device (such as one of computing devices a-D in fig. 4C) in communication with a display and a touch-sensitive surface.
Fig. 5A-5 SS illustrate exemplary user interfaces for providing audio feedback according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B. While some of the examples below will be given with reference to inputs on a touch-sensitive surface 451 separate from display 450, in some embodiments, the device detects inputs on a touch screen display (where the touch-sensitive surface and display are combined), as shown in fig. 4A.
Attention is now directed to embodiments of a user interface ("UI") and associated processes that may be implemented on an electronic device in communication with a display and an audio system, such as portable multifunction device 100 or device 300, as shown in fig. 4C. In some embodiments, the electronic device includes a display. In some embodiments, the electronic device includes an audio system. In some embodiments, the electronic device includes neither a display nor an audio system. In some embodiments, the display includes an audio system (e.g., the display and audio system are components of a television). In some embodiments, some components of the audio system and the display are separate (e.g., the display is a component of a television, and the audio system includes a bar stereo separate from the television). In some embodiments, the electronic device communicates with a separate remote control through which the electronic device receives user input (e.g., the remote control includes a touch-sensitive surface or touch screen through which a user interacts with the electronic device). In some embodiments, the remote control includes a motion sensor (e.g., an accelerometer and/or a gyroscope) that detects motion of the remote control (e.g., a user picks up the remote control).
Fig. 5A-5G illustrate exemplary user interfaces for changing visual characteristics of a user interface in connection with changing audio components corresponding to user interface objects, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 6A-6C.
Fig. 5A illustrates the display of a user interface 517 generated by the device on the display 450. In some embodiments, the visual characteristics of the various user interface objects described with reference to fig. 5A-5G are determined independent of user input (e.g., the visual characteristics of the user interface objects are determined in the absence of user input). In some embodiments, user interface 517 is a screen saver user interface.
The user interface 517 includes a first user interface object 501-a (e.g., a first bubble). The first user interface object 501-a has various visual characteristics including shape (e.g., circular), size, and location on the display 450. The device also provides (e.g., concurrently with providing data to the display 450) a first audio component 503 corresponding to the sound output of the first user interface object 501-a to an audio system (e.g., a speaker system on the display 450 or a separate audio system).
In some embodiments, the one or more characteristics of the first audio component 503 associated with the first user interface object 501-a correspond to visual characteristics of the first user interface object 501-a. For example, as shown in the audio map 516, the pitch of the first audio component 503 corresponds to the initial size of the first user interface object 501-a (the pitch of the first audio component 503 is represented by the vertical position of the circle representing the first audio component 503 in the audio map 516). As another example, the stereo balance of the first audio component 503 (e.g., left/right distribution in the audio map 516) corresponds to the horizontal position of the first user interface object 501-a on the display 450. In some embodiments, one or more characteristics of the first audio component 503 corresponding to the first user interface object 501-a are determined from one or more visual characteristics of the first user interface object 501-a. Alternatively, in some embodiments, one or more visual characteristics of the first user interface object 501-a are determined from one or more characteristics of the first audio component 503.
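One way to realize the correspondence just described — pitch derived from the bubble's size and stereo balance derived from its horizontal position — is sketched below in Swift. This is an illustrative assumption, not the implementation of the embodiments; the value ranges are arbitrary.

```swift
// Hypothetical mapping from a bubble's visual characteristics to the
// characteristics of its audio component (pitch, balance, volume).

struct BubbleObject {
    var size: Double   // diameter in points
    var x: Double      // horizontal center, 0.0 (left edge) ... 1.0 (right edge)
}

struct AudioComponent {
    var frequency: Double   // Hz; larger bubbles -> lower pitch
    var balance: Double     // -1.0 (left) ... +1.0 (right)
    var volume: Double      // 0.0 ... 1.0; larger bubbles -> quieter
}

func audioComponent(for bubble: BubbleObject) -> AudioComponent {
    let normalizedSize = min(max(bubble.size / 300.0, 0.0), 1.0)
    return AudioComponent(
        frequency: 880.0 - 440.0 * normalizedSize,   // 880 Hz down to 440 Hz
        balance: bubble.x * 2.0 - 1.0,               // follows horizontal position
        volume: 1.0 - 0.7 * normalizedSize           // quieter as the bubble grows
    )
}
```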
When the user interface 517 is presented on the display 450 and the sound output is provided by the audio system, the device provides data to the display 450 for updating the user interface 517 (e.g., the first user interface object 501-a moves across the display 450 and the first user interface object 501-a increases in size, as shown in fig. 5B). Providing data for updating the user interface 517 occurs independent of user input (e.g., no user input is detected on the remote control 5001 in fig. 5A). The device also provides sound information to the audio system for updating the sound output as illustrated in the audio map 516 in fig. 5B (e.g., the stereo balance of the audio component 503 shifts to the right as represented by the movement of the graphical representation of the audio component 503 to the right in the audio map 516 in fig. 5B, and the volume of the audio component 503 decreases as represented by the decreasing size of the graphical representation of the audio component 503 in the audio map 516 in fig. 5B).
Fig. 5B shows user interface 517 at a time shortly after fig. 5A. In FIG. 5B, the user interface 517 includes a second user interface object 501-B (e.g., a second bubble) having a visual characteristic that is optionally different from the visual characteristic of the first user interface object 501-a (e.g., the second user interface object 501-B is different in location and size from the first user interface object 501-a). The device also provides (e.g., concurrently with providing data to the display 450) a second audio component 505 of the sound output corresponding to the second user interface object 501-b to the audio system. For example, because the initial size of the second user interface object 501-B (FIG. 5B) is larger than the initial size of the first user interface object 501-a (FIG. 5A), the audio component 505 is pitched lower than the first audio component 503 (represented by the lower position of the second audio component 505 in the audio map 516 in FIG. 5B). In some embodiments, the second audio component 505 is selected based at least in part on the first audio component 503. For example, in some embodiments, the first audio component 503 and the second audio component 505 have respective pitches of two pitches (e.g., notes) that make up a chord (e.g., a minor chord).
As shown in fig. 5B, updating the user interface 517 and updating the sound output includes changing at least one of the visual characteristics of the first user interface object 501-a in conjunction with changing the first audio component 503 in a manner corresponding to the changed visual characteristics of the first user interface object 501-a. For example, the first user interface object 501-a in FIG. 5B has been enlarged and correspondingly the volume of the first audio component 503 has been reduced in FIG. 5B as compared to the first user interface object 501-a in FIG. 5A.
Fig. 5C shows user interface 517 at a time shortly after fig. 5B. In FIG. 5C, the user interface 517 includes a third user interface object 501-C (e.g., a third bubble) having a visual characteristic that is optionally different from the visual characteristics of the first user interface object 501-a and the second user interface object 501-b (e.g., the location and size of the third user interface object 501-C is different from the location and size of the first user interface object 501-a and the location and size of the second user interface object 501 b). The device also provides (e.g., concurrently with providing data to the display 450) a third audio component 507 of the sound output corresponding to the third user interface object 501-c to the audio system. In some embodiments, because the initial size of the third user interface object 501-C (FIG. 5C) is smaller than the initial size of the second user interface object 501-B (shown in FIG. 5B) or the first user interface object 501-a (shown in FIG. 5A), the third audio component 507 is higher pitch than the first audio component 503 or the second audio component 505 (represented by the higher vertical position of the audio component 507 in the audio map 516 in FIG. 5C), as depicted in FIG. 5C. In some embodiments, the third audio component 507 is selected based at least in part on the first audio component 503. For example, in some embodiments, the first audio component 503, the second audio component 505, and the third audio component 507 have respective pitches of three pitches (e.g., notes) that make up a chord (e.g., an a-key chord).
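The chord relationship mentioned above could be realized, for example, by drawing each new audio component's pitch from a fixed set of chord tones, as in the following hedged sketch (the chord and its frequencies are chosen arbitrarily for illustration).

```swift
// Hypothetical selection of the next audio component's pitch so that all
// concurrently playing components sound as notes of a single chord.

let aMinorChordHz: [Double] = [220.0, 261.63, 329.63, 440.0]   // A3, C4, E4, A4

func nextChordPitch(existingPitches: [Double]) -> Double {
    // Prefer a chord tone that is not already in use; otherwise reuse one.
    for candidate in aMinorChordHz where !existingPitches.contains(candidate) {
        return candidate
    }
    return aMinorChordHz[existingPitches.count % aMinorChordHz.count]
}

// Example: the first bubble's component gets 220 Hz, the second 261.63 Hz,
// and so on, so the components always form notes of the same chord.
```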
As shown in fig. 5C, updating the user interface 517 and updating the sound output includes changing at least one of the visual characteristics of the second user interface object 501-b in conjunction with changing the second audio component 505 in a manner corresponding to the changed visual characteristics of the second user interface object 501-b. For example, fig. 5C shows that the second user interface object 501-B has been enlarged compared to fig. 5B and correspondingly the volume of the second audio component 505 has been reduced in fig. 5C (e.g., as represented by the reduced size of the graphical representation of the audio component 505 in the audio map 516 in fig. 5C). In addition, the visual characteristics of the first user interface object 501-a and the corresponding first audio component 503 are similarly updated between fig. 5B and 5C.
Fig. 5D illustrates another update to the sound output and user interface 517. In this example, the second user interface object 501-b becomes larger and the volume of the corresponding second audio component 505 decreases, and the third user interface object 501-c becomes larger and the volume of the corresponding third audio component 507 decreases. In addition, the first user interface object 501-a becomes larger and moves to the right, so the volume of the corresponding first audio component 503 decreases and the balance of the first audio component 503 shifts to the right.
Fig. 5E illustrates another update to the sound output and user interface 517. In this example, the second user interface object 501-b becomes larger and the volume of the corresponding second audio component 505 decreases, and the third user interface object 501-c becomes larger and the volume of the corresponding third audio component 507 decreases. However, the device has provided data to the display 450 to update the user interface 517, including ceasing to display the first user interface object 501-a (e.g., by causing the first user interface object 501-a to move/slide out of the display 450 and/or fade out). In combination, the device has provided data to the audio system to update the sound output, including ceasing to provide the first audio component 503 corresponding to the first user interface object 501-a.
Fig. 5F illustrates the user interface 517 at a later time. In FIG. 5F, the fourth user interface object 501-d and the fifth user interface object 501-e are moving. In combination, the audio component 509 and the audio component 511 are shifted in their respective directions in accordance with the movements of the fourth user interface object 501-d and the fifth user interface object 501-e. In fig. 5F, the device also detects a user input 513 on a corresponding button of the remote control 5001 (e.g., on menu button 5002). In response to detecting the user input 513, the device provides sound information to the audio system for changing the audio component 509 and the audio component 511 (e.g., by halting the audio component 509 and the audio component 511), as shown in fig. 5G. The device also provides data to the display 450 for updating the user interface 517 and displaying one or more control user interface objects (e.g., application icons 532-a through 532-e and movie icons 534-a through 534-c). In some embodiments, the fourth user interface object 501-d and the fifth user interface object 501-e continue to be displayed with the control user interface objects. For example, the fourth user interface object 501-d and the fifth user interface object 501-e are displayed lower in the z-direction than the control user interface objects such that the fourth user interface object 501-d and the fifth user interface object 501-e are overlapped by the control user interface objects, as shown in FIG. 5G.
Fig. 5H illustrates an audio envelope 515 in accordance with some embodiments. The vertical axis of the audio envelope 515 represents amplitude (volume) and the horizontal axis represents time, where t0 is the time at which the user input starts. The audio envelope 515 includes an attack period A between times t0 and t1 (in which the amplitude increases with time), a decay period D between t1 and t2 (in which the amplitude decreases with time), a sustain period S between t2 and t3 (in which the amplitude remains constant over time), and a release period R between t3 and t4 (in which the amplitude decreases exponentially/asymptotically with time). After time t4, the sound output corresponding to the user input ceases. In some embodiments, the audio envelope 515 does not include the decay period D and/or the sustain period S.
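The attack/decay/sustain/release behavior of audio envelope 515 can be expressed as a simple piecewise amplitude function. The sketch below is illustrative only; it assumes linear attack, decay, and sustain segments and an exponential release, with the time constants left as parameters.

```swift
import Foundation

// Hypothetical amplitude of audio envelope 515 as a function of the time t
// (in seconds) since the user input started at t0 = 0.

struct Envelope {
    var attack: Double        // t0..t1: amplitude rises from 0 to peak
    var decay: Double         // t1..t2: amplitude falls from peak to sustainLevel
    var sustain: Double       // t2..t3: amplitude held at sustainLevel
    var release: Double       // t3..t4: amplitude decays asymptotically toward 0
    var peak: Double = 1.0
    var sustainLevel: Double = 0.6
}

func amplitude(of env: Envelope, at t: Double) -> Double {
    let t1 = env.attack
    let t2 = t1 + env.decay
    let t3 = t2 + env.sustain
    if t < 0 { return 0 }
    if t < t1 { return env.peak * (t / env.attack) }                                   // attack A
    if t < t2 { return env.peak - (env.peak - env.sustainLevel) * (t - t1) / env.decay } // decay D
    if t < t3 { return env.sustainLevel }                                              // sustain S
    return env.sustainLevel * exp(-5.0 * (t - t3) / env.release)                       // release R
}
```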
In some embodiments, the corresponding audio components provided by the audio system have an audio envelope similar to the audio envelope 515 shown in fig. 5H. In response to detecting a user input (e.g., user input 513 in fig. 5F), the electronic device provides sound information to the audio system for changing the corresponding audio component. In some embodiments, one or more aspects of the audio envelope are modified (e.g., the onset of the corresponding audio component is increased) in response to detecting the user input.
Fig. 5I-5S illustrate user interfaces that provide audio feedback when a user manipulates a control object (e.g., a slider on a slider bar or knob) in the user interface, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 7A-7D.
Fig. 5I illustrates a display 450 and a remote control 5001, both of which are in communication with an electronic device that performs certain operations described below. In some embodiments, remote control 5001 has a touch-sensitive surface 451. In some embodiments, remote control 5001 also has one or more buttons or affordances, such as menu button 5002, microphone button 5003, play/pause button 5004, watch list button 5005, volume up button 5006, and/or volume down button 5007. In some embodiments, menu button 5002 or a similar affordance allows a home screen user interface to be displayed on display 450. In some embodiments, a microphone button 5003 or similar affordance allows a user to provide verbal commands or voice entries to the electronic device. In some embodiments, the play/pause button 5004 is used to play or pause audio or video media depicted on the display 450. In some embodiments, the watch list button 5005 allows a watch list user interface to be displayed on the display 450. In some embodiments, the watch list user interface provides a user with a plurality of audio/video media items that can be played using the electronic device.
Fig. 5I illustrates a video playback view 500 displayed on a display 450. Video playback view 500 is a user interface that provides for the display of media items (e.g., movies or television programs). In some cases, the display of the media item is in a paused or playing state. In some embodiments, the video playback view 500 provides for the display of video information associated with navigation of media items. Fig. 5I illustrates the opening credits of a movie displayed during normal playback. In some embodiments, while the media item is in a paused or playing state, user input 502 (e.g., a light touch contact) is detected on the touch-sensitive surface 451.
Fig. 5J illustrates that in some embodiments, in response to receiving user input 502, the electronic device provides data (e.g., a video playback view user interface) to the display 450 for providing a plurality of user interface objects on the video playback view 500. The plurality of user interface objects includes a slider 504 (also sometimes referred to as a playhead) on a navigation slider 506 (also sometimes referred to as a rub bar). Slider 504 is an example of a control user interface object configured to control a parameter (e.g., current position/time within navigation slider 506 of a timeline representing the total duration of a displayed media item). The plurality of user interface objects also includes a volume control user interface object 508 (e.g., an audio control user interface object that indicates the volume of sound output by the audio system).
In fig. 5J, slider 504 is represented as a square, which provides a visual indication that the user's current focus of interaction with video playback view 500 is not on slider 504. For comparison, in FIG. 5K, slider 504 is represented as a circle with video preview 510, which provides a visual indication of the user's current focus of interaction with video playback view 500 on slider 504. In some embodiments, video preview 510 displays a preview image of a position within the media item that corresponds to the position of slider 504 within slider bar 506. As shown in subsequent figures (e.g., fig. 5L), in some embodiments, slider 504 (having a circular shape) deforms as the user interacts with slider 504.
FIG. 5K illustrates remote control 5001 detecting user input 512 beginning at location 512-1 and ending at location 512-2 (FIG. 5L), which is an interaction for dragging the position of slider 504 within slider 506.
In some embodiments, remote control 5001 detects user input described herein and communicates information about the user input to the electronic device. When information about the user input is communicated to the electronic device, the electronic device receives the user input. In some embodiments, the electronic device directly receives user input (e.g., detects user input on a touch-sensitive surface integrated with the electronic device).
In some embodiments, the electronic device determines that the user input 512 is an interaction for adjusting the position of slider 504 within slider bar 506 when user input 512 meets a predetermined criterion, such as when an increase in the contact intensity of the user input is detected by remote control 5001 while the current focus is on slider 504. For example, in FIG. 5K, user input 512 is a drag gesture, detected while the current focus is on slider 504, whose contact intensity exceeds a tap threshold ITL.
User input 512 drags slider 504 from position 504-1 (fig. 5K) to position 504-2 (fig. 5L) on display 450. Thus, when the electronic device receives user input 512 (e.g., concurrently with user input 512, continuously with user input 512, and/or in response to user input 512), the electronic device provides data to display 450 to move slider 504 so that the slider 504 appears to be dragged by the user in real time.
Upon receiving user input 512 (e.g., concurrently with user input 512, continuously with user input 512, and/or in response to user input 512), the electronic device also provides sound information (represented in audio diagrams 516 of fig. 5K-5L) for providing sound output 514. In some embodiments, the sound output 514 is an audio feedback corresponding to the drag of the slider 504 (e.g., the sound output 514 has one or more characteristics that vary according to the drag of the slider 504 from position 504-1 to position 504-2). For example, an arrow drawn from the sound output 514 in the audio map 516 corresponds to the drag of the slider 504 from position 504-1 to position 504-2 and indicates that the sound output 514 is provided concurrently and/or continuously with the user input 512. In addition, the arrow drawn according to the sound output 514 indicates the manner in which the sound output 514 changes according to the movement of the slider 504 (e.g., the stereo balance of the sound output 514 shifts to the right), as described below.
In this example, the audio system includes two or more speakers, including a left speaker and a right speaker. One or more characteristics of the sound output 514 include a balance (e.g., a ratio of sound output intensities) between the left speaker and the right speaker (represented on the horizontal axis of the audio map 516). In some embodiments, the one or more characteristics further include a pitch of the sound output 514 (represented in the audio map 516 in a vertical position of the sound output 514). In some embodiments, the sound output 514 has only a single characteristic (e.g., such as pitch or balance) based on the position or movement of the user input 512. In this example, the direction and magnitude of the arrow drawn from the sound output 514 in the audio map 516 indicates how the pitch and balance change according to the drag of the slider 504 from position 504-1 to position 504-2. Thus, as slider 504 moves rightward from position 504-1 to position 504-2, the balance of sound output 514 shifts rightward, which gives the audio impression of the user moving rightward. The pitch of the sound output 514 also shifts higher during the rightward movement of the slider 504, intuitively giving the impression that the user moves "higher" in the time represented by the slider bar 506. Alternatively, in some embodiments, the pitch shifts lower during the rightward movement of slider 504.
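A hedged sketch of the continuous feedback described above — stereo balance and pitch tracking the slider position — is given below; the specific ranges are assumptions for illustration only.

```swift
// Hypothetical continuous feedback while slider 504 is dragged: balance and
// pitch are derived from the slider's normalized position on the slider bar.

func scrubberFeedback(position: Double) -> (balance: Double, frequency: Double) {
    let p = min(max(position, 0.0), 1.0)   // 0.0 = start of slider bar, 1.0 = end
    let balance = p * 2.0 - 1.0            // stereo balance shifts left/right with the slider
    let frequency = 300.0 + 300.0 * p      // pitch rises as the slider moves right
    return (balance, frequency)
}
```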
Fig. 5M to 5N are similar to fig. 5K to 5L. However, in fig. 5M-5N, remote control 5001 detects user input 518 that is otherwise similar to user input 512 but with a greater speed. Like user input 512, user input 518 begins dragging slider 504 at location 504-1. But because user input 518 has a greater velocity, it drags slider 504 farther to position 504-3 than user input 512. Upon receiving user input 518 (e.g., concurrently with user input 518, continuously with user input 518, and/or in response to user input 518), the electronic device provides sound information (represented in audio map 516 of fig. 5M-5N) for providing sound output 520.
In some embodiments, the electronic device provides various audio and visual indicia to the user indicating the speed of the corresponding user input. For example, the volume of sound output 514 is based on the speed of movement of slider 504 from position 504-1 to position 504-2 (or the speed of user input 512 from positions 512-1 to 512-2), as shown in FIGS. 5K-5L; and the volume of sound output 520 is based on the speed of movement of slider 504 from position 504-1 to position 504-3 (or the speed of user input 518 from positions 518-1 to 518-2). In the audio map 516 (fig. 5K-5N), the volume of each respective sound output is depicted by the size of the circle representing the respective sound output. As can be seen from a comparison of the sound output 514 (fig. 5K-5L) and the sound output 520 (fig. 5M-5N), the faster user input 518 (fig. 5M-5N) results in a louder sound output 520, whereas the slower user input 512 (fig. 5K-5L) results in the quieter sound output 514.
In some embodiments, the electronic device visually distinguishes between sliders 504 based on movement of sliders 504 (e.g., speed or position) or based on movement of user inputs 512/518 (e.g., speed and/or position). For example, as shown in fig. 5L and 5N, slider 504 is shown with a tail (e.g., slider 504 is lengthened/stretched) based on the speed and/or direction of user input. Since both user input 512 and user input 518 drag slider 504 to the right, slider 504 stretches to the left in both examples (e.g., to resemble a comet moving to the right). But because user input 518 is faster than user input 512, slider 504 stretches more as a result of user input 518 (FIG. 5N) than as a result of user input 512 (FIG. 5L).
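The speed-dependent behavior described above (a louder sound output and a longer slider "tail" for faster drags) could be approximated as in the following sketch; the scaling constants are assumed values.

```swift
// Hypothetical mapping from drag speed to the sound-output volume and to the
// length of the slider's tail (how far it visually stretches).

func feedback(forSpeed pointsPerSecond: Double) -> (volume: Double, tailLength: Double) {
    let speed = max(pointsPerSecond, 0.0)
    let volume = min(0.2 + speed / 2000.0, 1.0)   // faster drag -> louder sound output
    let tailLength = min(speed / 50.0, 40.0)      // faster drag -> longer tail, capped
    return (volume, tailLength)
}
```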
Fig. 5O-5P illustrate the continuation of user input 518 from position 518-2 (fig. 5O) to 518-3 (fig. 5P), which drags slider 504 from position 504-3 (fig. 5O) near the middle of slider bar 506 to position 504-4 (fig. 5P) corresponding to the end of slider bar 506. As described above, when a continuation of the user input 518 is received (e.g., concurrently with the user input 518, continuously with the user input 518, and/or in response to the user input 518), the electronic device provides sound information (represented in the audio map 516 in fig. 5O-5P) for providing a continuation of the sound output 520 until the slider 504 reaches the position 504-4 (or shortly before the slider 504 reaches the position 504-4). In some embodiments, the electronic device provides sound information to the audio system to provide a sound output 522 to indicate that slider 504 is located at the end of slider bar 506 (e.g., sound output 522 is a reverberant "Bo" sound that indicates that slider 504 has "collided" with the end of slider bar 506). The sound output 522 is different (e.g., temporally or audibly) from the sound outputs 514 and 520. In some embodiments, sound output 522 does not have one or more characteristics based on user input 518 (e.g., whenever slider 504 collides with the end point of slider bar 506, the audio system provides the same sound regardless of the characteristics (such as speed) of the user input that caused slider 504 to collide with the end point of slider bar 506). Alternatively, in some embodiments, the volume of sound output 522 is based on the speed of user input 518 when user input 518 reaches the end of slider bar 506 (e.g., a faster collision with the end of slider bar 506 results in a loud reverberant "bang" sound). In some embodiments, once the end of the slider bar 506 is reached, an animation of slider 504 squishing against the end of slider bar 506 is displayed. Thus, in some embodiments, the electronic device provides discrete (e.g., not continuous) audio and visual feedback regarding certain user interface navigation events (e.g., a control user interface object, such as a slider, reaching the end of its control range, such as the end of a slider bar).
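The discrete end-of-range feedback could be sketched as follows. Whether the collision sound scales with the speed of the input is the design choice noted above; both variants are shown as assumptions.

```swift
// Hypothetical discrete feedback when slider 504 reaches an end of the slider
// bar: return the volume of the "collision" sound, or nil if no end was reached.

func endOfRangeFeedback(position: Double, speed: Double,
                        speedDependentVolume: Bool) -> Double? {
    // No collision sound unless the slider is at either end of the slider bar.
    guard position <= 0.0 || position >= 1.0 else { return nil }
    if speedDependentVolume {
        return min(0.4 + abs(speed) / 1500.0, 1.0)   // faster impact -> louder sound
    }
    return 0.6                                        // same sound regardless of speed
}
```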
Fig. 5Q shows a graphic 524 illustrating how the electronic device dynamically and smoothly provides audio feedback to a user to assist in manipulation of a control user interface object (e.g., a slider on a slider bar). In some embodiments, one or more characteristics (e.g., balance, pitch, and/or volume) of the sound output 514/520 are updated multiple times per second (e.g., 10, 20, 30, or 60 times per second). For example, in some embodiments, the speed of the user input is calculated 60 times per second based on the difference between the current location of the user input and the previous location of the user input (e.g., 1/60 second previously measured), and the volume of the corresponding sound output is determined 60 times per second based on the speed. Thus, the diagram 524 illustrates providing the sound output 514/520 continuously and concurrently with the user input 512/518. Based on the position of the user input 512/518 (or the position of the slider 504, as described above), the pitch and balance of the sound output 514/520 is determined perceptually in real time. The volume of the sound output 514/520 is determined perceptually in real time as the position (e.g., speed) of the user input 512/518 changes.
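The per-frame update described above — the input speed estimated from successive positions at, e.g., 60 updates per second — can be sketched as follows; this is illustrative only.

```swift
// Hypothetical 60 Hz update: estimate the input speed from the difference
// between the current and previous positions, then refresh the feedback.

struct FrameState { var previousPosition: Double }

func perFrameUpdate(state: inout FrameState, currentPosition: Double,
                    framesPerSecond: Double = 60.0) -> Double {
    let speed = (currentPosition - state.previousPosition) * framesPerSecond
    state.previousPosition = currentPosition
    return speed   // used to set the sound output's volume and the slider's tail length
}
```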
In some embodiments, the pitch and balance of the sound output 514/520 is determined based on a change in the position (e.g., speed) of the user input 512/518. In some embodiments, the volume of sound output 514/520 is based on the position of user input 512/518 (or the position of slider 504, as described above).
Similarly, in some embodiments, the visual characteristics (e.g., elongation/stretch) of slider 504 are updated multiple times per second (e.g., 10 times per second, 20 times per second, 30 times per second, or 60 times per second). Thus, for example, based on the speed of the user input, the length of the tail of slider 504 is updated 60 times per second, as described above.
Fig. 5R-5S are largely similar to fig. 5K-5L, but illustrate an embodiment in which the pitch of a continuously provided sound output is proportional to a change or movement of a user input. Fig. 5R-5S illustrate that remote control 5001 detects user input 526 beginning at location 526-1 and ending at location 526-2 (fig. 5S), which is an interaction for dragging the position of slider 504 from location 504-1 to location 504-2 within slider bar 506. Thus, when the remote control 5001 detects the user input 526 (e.g., concurrently with the user input 526, continuously with the user input 526, and/or in response to the user input 526), the electronic device provides data to the display 450 for moving the slider 504 so that the slider 504 appears to be dragged by the user in real time. Upon receiving the user input 526 (e.g., concurrently with the user input 526, continuously with the user input 526, and/or in response to the user input 526), the electronic device also provides sound information (depicted in the audio maps 516 in fig. 5R-5S) to the audio system for providing the sound output 528. The difference between the sound output 528 (fig. 5R-5S) and the sound output 514 (fig. 5K-5L) is that the pitch of the sound output 528 is based on the movement (e.g., speed, or change in position) of the user input 526 rather than on its position, while the pitch of the sound output 514 varies with the position of the user input 512. In some embodiments, the respective sound outputs have a balance based on the direction of movement of the user input (or the direction of movement of the slider 504) (e.g., movement to the left has a left balance and movement to the right has a right balance, regardless of the position of the slider 504).
Fig. 5T-5 HH illustrate user interfaces that provide audio feedback when a user navigates over discrete user interface objects (e.g., icons) in the user interface, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 8A-8C.
Fig. 5T illustrates a home screen user interface 530 displayed on display 450. The home screen user interface 530 includes a plurality of user interface objects, which in this example include application icons 532 (e.g., application icons 532-a through 532-e, each of which is a first type of user interface object) and movie icons 534 (e.g., movie icons 534-a through 534-c, each of which is a second type of user interface object). Also, in FIG. 5T, the current focus of home screen user interface 530 is on application icon 532-e, and application icon 532-e is visually distinguished from other user interface objects of the plurality of user interface objects (e.g., application icon 532-e is slightly larger than other application icons 532 and has a highlighted border) to indicate that the current focus is on application 532-e.
In fig. 5T, when home screen user interface 530 is displayed, the electronic device receives user input 536 on remote control 5001. The user input 536 (e.g., a swipe gesture input) has a magnitude (e.g., a speed and/or distance, represented by the length of the arrow extending from user input 536 in fig. 5T) and a direction (e.g., the direction in which the user drags a finger over the touch-sensitive surface 451, represented by the direction of the arrow extending from user input 536 in fig. 5T). The user input 536 is a request to move the current focus of the home screen user interface 530 from the application icon 532-e to the application icon 532-d.
FIG. 5U illustrates that the current focus has been moved from application icon 532-e to application icon 532-d in response to user input 536 (FIG. 5T). In FIG. 5U, the application icon 532-d is visually distinguished from other user interface objects of the plurality of user interface objects to indicate that the current focus is on the application icon 532-d.
FIG. 5U also illustrates an audio map 516 that shows representations of sound outputs (e.g., sound output 538-1 and optional sound output 540-1) provided by the audio system that correspond to movement of the current focus from the application icon 532-e to the application icon 532-d. The horizontal axis on the audio map 516 represents stereo balance of the audio components (e.g., left/right distribution in the audio map 516). The sound output 538-1 indicates that the current focus has moved to the application icon 532-d. Optionally, the audio system provides a sound output 540-1 indicating that the current focus has moved from the application icon 532-e. In some embodiments, the audio system provides the sound output 540-1 before providing the sound output 538-1. In some embodiments, the audio system provides the sound output 538-1 without providing the sound output 540-1.
The vertical axis of the audio map 516 represents the pitch of the sound outputs 538 and 540. In some embodiments, the pitch of the respective sound output (e.g., sound output 538-1 and/or sound output 540-1) is based on the size of the user interface object associated with the respective sound output (e.g., the user interface object on which the current focus is located). For example, the sound output 538-1 has a pitch based on the size of the application icon 532-d. As discussed below, in some embodiments, the sound output associated with the large user interface object (e.g., movie icon 534) has a lower pitch than the sound output associated with the small user interface object (e.g., application icon 532).
In some embodiments, the pitch of the corresponding sound output is based on the type of user interface object on which the current focus is located. For example, the sound output associated with the movie icon 534 has a low pitch and the sound output associated with the application icon 532 has a high pitch, regardless of the respective sizes of the application icon 532 and the movie icon 534.
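The pitch selection for these focus-movement sounds — lower pitch for larger objects, or alternatively a fixed pitch per object type — could be sketched as below; the categories and frequencies are assumptions.

```swift
// Hypothetical pitch selection for the sound output played when the current
// focus moves onto a user interface object.

enum IconKind { case applicationIcon, movieIcon }

func focusSoundFrequency(kind: IconKind, sizeBasedPitch: Bool,
                         iconArea: Double) -> Double {
    if sizeBasedPitch {
        // Larger objects -> lower pitch (area normalized to an assumed maximum).
        let normalized = min(max(iconArea / 40_000.0, 0.0), 1.0)
        return 900.0 - 500.0 * normalized
    }
    // Type-based pitch, independent of the object's actual size.
    switch kind {
    case .applicationIcon: return 800.0   // smaller icons -> higher pitch
    case .movieIcon:       return 400.0   // larger icons -> lower pitch
    }
}
```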
In fig. 5U, the electronic device receives user input 542 on remote control 5001.
FIG. 5V shows that the current focus has been moved from application icon 532-d to application icon 532-c in response to user input 542 (FIG. 5U). In FIG. 5V, the application icon 532-c is visually distinguished from other user interface objects of the plurality of user interface objects to indicate that the current focus is on the application icon 532-c.
The audio map 516 in fig. 5V includes representations of sound outputs (e.g., sound output 538-2 and optional sound output 540-2) provided by the audio system corresponding to movement of the current focus from the application icon 532-d to the application icon 532-c. In some embodiments, in addition to the sound output 538-2, the audio system provides a sound output 540-2 that indicates that the current focus has moved from the application icon 532-d. In some embodiments, the audio system provides the sound output 540-2 before providing the sound output 538-2. In some embodiments, the audio system provides the sound output 538-2 without providing the sound output 540-2.
In fig. 5V, the electronic device receives user input 544 on remote control 5001.
FIG. 5W shows that the current focus has been moved from application icon 532-c to application icon 532-b in response to user input 544 (FIG. 5V). In FIG. 5W, the application icon 532-b is visually distinguished from other user interface objects of the plurality of user interface objects to indicate that the current focus is on the application icon 532-b.
The audio map 516 in fig. 5W includes representations of sound outputs (e.g., sound output 538-3 and optional sound output 540-3) provided by the audio system corresponding to movement of the current focus from the application icon 532-c to the application icon 532-b.
In fig. 5W, the electronic device receives user input 546 on remote control 5001. User input 546 has a higher magnitude (e.g., speed and/or distance) than user inputs 536 (fig. 5T), 542 (fig. 5U), and 544 (fig. 5V).
FIG. 5X illustrates that the current focus has been moved from application icon 532-b to application icon 532-e (via application icons 532-c and 532-d) in response to user input 546 (FIG. 5W).
The audio map 516 in FIG. 5X includes representations of sound outputs 538-4, 538-5, and 538-6 provided by the audio system corresponding to movement of a current focus from application icon 532-b to application icon 532-e through application icons 532-c and 532-d (e.g., sound output 538-4 corresponds to application icon 532-c, sound output 538-5 corresponds to application icon 532-d, and sound output 538-6 corresponds to application icon 532-e). Although sound outputs 538-4, 538-5, and 538-6 are shown together in the audio map 516, sound outputs 538-4, 538-5, and 538-6 are provided sequentially (e.g., sound output 538-4 is followed by sound output 538-5, which sound output 538-5 is followed by sound output 538-6).
The sound outputs 538-4, 538-5, and 538-6 have reduced volume (as represented by the smaller sized representation in the audio map 516 as compared to the representation of the sound output 538-3 in fig. 5W) to avoid loud repetitive sounds that reduce the user experience.
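The attenuation of sound outputs when the current focus moves across several icons in quick succession could be sketched as follows; the time window and attenuation factor are assumed values.

```swift
// Hypothetical volume attenuation: when focus-movement sounds are triggered
// in rapid succession, each subsequent sound is played more quietly.

func focusSoundVolume(timeSinceLastSound: Double, baseVolume: Double = 0.8) -> Double {
    // Sounds that follow the previous one within 0.25 s are attenuated.
    if timeSinceLastSound >= 0.25 { return baseVolume }
    let attenuation = 0.3 + 0.7 * (timeSinceLastSound / 0.25)
    return baseVolume * attenuation
}
```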
In fig. 5X, the electronic device receives user input 548 on remote control 5001. The user input 548 corresponds to a request to move the current focus from the application icon 532-e to an icon in a next row (e.g., a row of icons below the application icon 532-e).
FIG. 5Y illustrates that home screen user interface 530 has been scrolled in response to user input 548, revealing icons 550-a through 550-d. In addition, in response to user input 548, the current focus has been moved from application icon 532-e to icon 550-d.
The audio map 516 in FIG. 5Y includes a representation of the sound output 538-7 provided by the audio system corresponding to the movement of the current focus from the application icon 532-e to the icon 550-d. The sound output 538-7 has a lower pitch than the sound outputs associated with the application icons 532 (e.g., sound outputs 538-1 through 538-6).
In fig. 5Y, the electronic device receives user input 552 on remote control 5001. The user input 552 corresponds to a request to move the current focus from the icon 550-d to an icon in a row of icons above the icon 550-d (e.g., a row of application icons 532).
Fig. 5Z illustrates that home screen user interface 530 has been scrolled back in response to user input 552. In addition, in response to user input 552, the current focus has been moved from icon 550-d to application icon 532-e.
The audio map 516 in FIG. 5Z includes a representation of the sound output 538-8 provided by the audio system corresponding to the movement of the current focus from the icon 550-d to the application icon 532-e.
In fig. 5Z, the electronic device receives user input 554 (e.g., a tap gesture) on the remote control 5001. The user input 554 corresponds to a request for activating the application icon 532-e (or a corresponding application).
Fig. 5AA illustrates a user interface 594 displaying a gaming application (e.g., a table tennis game application) in response to user input 554.
The audio map 516 in FIG. 5AA includes a representation of the sound output 556-1 indicating that the application icon 532-e (FIG. 5Z) has been activated.
Fig. 5AA also illustrates the electronic device receiving user input 558 (e.g., button presses) on menu button 5002 of remote control 5001.
Fig. 5BB illustrates displaying home screen user interface 530 in response to user input 558 (fig. 5 AA).
The audio map 516 in FIG. 5BB includes a representation of the sound output 560-1, which indicates that the user interface of the gaming application is replaced with the home screen user interface 530.
Fig. 5BB also illustrates that the electronic device receives user input 562 on the remote control 5001. The user input 562 corresponds to a request for moving the current focus from the application icon 532-e to an icon in a row of icons above the application icon 532-e (e.g., a row of movie icons 534).
FIG. 5CC illustrates that the current focus has been moved from application icon 532-e to movie icon 534-c in response to user input 562 (FIG. 5 BB).
The audio map 516 in FIG. 5CC includes a representation of the sound output 538-9 corresponding to the movement of the current focus from the application icon 532-e to the movie icon 534-c.
Fig. 5CC also illustrates that the electronic device receives user input 564 on the remote 5001.
FIG. 5DD illustrates that the current focus has been moved from movie icon 534-c to movie icon 534-b in response to user input 564 (FIG. 5 CC).
The audio map 516 in fig. 5DD includes a representation of the sound output 538-10 corresponding to the movement of the current focus from movie icon 534-c to movie icon 534-b.
Fig. 5DD also illustrates the electronic device receiving user input 566 on remote 5001.
Fig. 5EE illustrates that the current focus has been moved from movie icon 534-b to movie icon 534-a in response to user input 566 (fig. 5 DD).
The audio map 516 in fig. 5EE includes a representation of the sound output 538-11 corresponding to the movement of the current focus from movie icon 534-b to movie icon 534-a.
The figure 5EE also illustrates that the electronic device receives user input 568 (e.g., a tap gesture) on the remote control 5001.
FIG. 5FF illustrates a display of a product page view 572 in response to user input 568 (FIG. 5 EE).
The audio map 516 in fig. 5FF includes a representation of the sound output 556-2 indicating that the movie icon 534-a (fig. 5 EE) has been activated.
Fig. 5FF also illustrates that the electronic device receives user input 570 (e.g., button presses) on menu button 5002 of remote control 5001.
Fig. 5GG illustrates a home screen user interface 530 displayed in response to user input 570 (fig. 5 FF).
The audio map 516 in fig. 5GG includes a representation of the sound output 560-2 that indicates that the user interface of the product page view 572 is replaced with a home screen user interface 530.
Fig. 5GG also illustrates that the electronic device receives user input 574 (e.g., button presses) on a menu button 5002 of the remote control 5001.
Fig. 5HH illustrates the display of a screen saver user interface 517 in response to user input 574 (fig. 5 GG).
The audio map 516 in FIG. 5HH includes a representation of the sound output 560-3, which indicates that the home screen user interface 530 is replaced with the screensaver user interface 517. In some embodiments, the screensaver user interface 517 is subsequently updated in the absence of user input, as illustrated in fig. 5A-5E.
In some embodiments, when the screensaver user interface 517 is displayed on the display 450, a user input (e.g., a button press on a button of the remote control 5001 or a tap gesture on the touch-sensitive surface 451) initiates replacement of the screensaver user interface 517 with the home screen user interface 530.
In some embodiments, as illustrated in fig. 5T-5Z and 5 BB-5 GG, the home screen user interface 530 is a video selection user interface that includes, among other things, a representation of a plurality of media items (e.g., movie icons 534). In some embodiments, user input selecting a particular movie item (e.g., user input 568 in FIG. 5 EE) results in the display of a product page view 572 (FIG. 5 II) that includes descriptive information 576 about the corresponding movie. Thus, in some embodiments, fig. 5II is a starting point for the functionality described below with reference to fig. 5 JJ-5 MM.
Fig. 5 II-5 MM illustrate operations associated with a product page view according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 9A-9C.
Fig. 5II illustrates the display of a product page view 572. The product page view 572 includes descriptive information 576 about the media item (e.g., video corresponding to movie icon 534-a of FIG. 5 AA), such as title 576-a, runtime 576-b, episode summary 576-c, ratings 576-d, and availability for playing the media item. While the product page view 572 is shown, the electronic device also provides sound information to the audio system to provide a first sound output (e.g., sound output) based on the media item. In some embodiments, the first sound output is based on a category of the media item. For example, "The Great Climbs" is classified as an exciting documentary, and the first sound output includes a triumphant orchestral piece. In some embodiments, the first sound output includes a track from a sound track of the media item. For example, when the user has not yet started watching the media item, the first sound output includes a representative track preselected to represent the overall feel of the movie. In some embodiments, when the user has not yet started viewing the media item, the first sound output includes a track that is also used in a trailer for the media item. In some embodiments, when the user has not yet started viewing the media item, the first sound output corresponds to the tone of the first scene or to a sound track used for the opening credits.
Fig. 5II illustrates remote control 5001 detecting an input 578 (e.g., user input such as a button press on play/pause button 5004) corresponding to a request for playback of a media item. Fig. 5JJ illustrates an electronic device responding to a request for playback of a media item by providing information to a display to play back the media item (e.g., by displaying a video playback user interface 500 described below).
Fig. 5JJ illustrates that in response to receiving a user input 578 corresponding to a request for playback of a media item, the electronic device provides data for playback of the media item to a display.
Fig. 5JJ also illustrates the electronic device receiving user input 580 (e.g., button presses) on menu button 5002 of remote control 5001.
Fig. 5KK illustrates the display of a product page view 572 in response to user input 580 (fig. 5 JJ).
In some embodiments, the product page view 572 displayed in response to the user input 580 (fig. 5 KK) is different from the product page view 572 displayed prior to playback of the media item (fig. 5 II). For example, the product page view 572 in FIG. 5KK includes one or more selected still images 582 from the media item. In some embodiments, the one or more selected still images 582 are based on playback position. In some embodiments, the one or more selected still images 582 are different from the frame or pause image of the playback position. For example, as can be seen from fig. 5JJ, the user pauses the media item just before the goat reaches the mountain top. However, as shown in fig. 5KK, the selected still image 582 is an image of a goat at the top of the mountain. In some embodiments, the still image 582 is a pre-selected image of the scene corresponding to the playback position. In this way, the still image 582 may be selected to show a more representative scene and may avoid, for example, showing an actor's face with an awkward expression when the media item is paused at an inopportune moment. Alternatively, in some embodiments, the product page view 572 displayed in response to the user input 580 is the same as the product page view 572 displayed prior to playback of the media item (fig. 5 II).
The electronic device also provides sound information to the audio system to provide a sound output corresponding to the media item during presentation of the media item information user interface by the display. For example, when the display shows the product page view 572 (fig. 5 KK), the audio system plays a track from a different sound track than the track that was played when the product page view 572 was previously shown (e.g., in fig. 5II, before the user began viewing the media item). In some embodiments, the track is a track that corresponds to a playback position (e.g., a position where a user pauses the media item or stops viewing the media item). In some embodiments, the sound output is not directed to a portion of the sound track of the media item, but rather is based on one or more characteristics of the media item at the playback location. For example, when the playback position is in a dark scene in a movie (e.g., based on color analysis of the displayed colors in the scene), the second sound output is "dark" music.
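The choice of sound output for the product page view — a representative track before the user has started watching, and a track tied to the playback position afterwards — could be sketched as below; the data structure and track identifiers are hypothetical.

```swift
// Hypothetical track selection for the product page view's sound output.

struct MediaItemAudio {
    var representativeTrack: String                           // conveys the overall feel
    var tracksByPosition: [(start: Double, track: String)]    // sorted by start time (seconds)
}

func productPageTrack(for item: MediaItemAudio, playbackPosition: Double?) -> String {
    guard let position = playbackPosition else {
        // The user has not started watching: use the representative track.
        return item.representativeTrack
    }
    // Otherwise pick the track corresponding to the current playback position.
    let candidates = item.tracksByPosition.filter { $0.start <= position }
    return candidates.last?.track ?? item.representativeTrack
}
```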
Fig. 5LL illustrates displaying the end credits of a media item in the video playback user interface 500. Fig. 5LL also illustrates that the electronic device receives user input 582 (e.g., a button press) on menu button 5002 of the remote control 5001 while the end credits are displayed.
Fig. 5MM illustrates the display of a product page view 572 in response to user input 582 (fig. 5 LL). In some embodiments, the product page view 572 in fig. 5MM includes one or more selected still images 584 from the media item (e.g., a still image showing the end credits).
Fig. 5 NN-5 SS illustrate operations associated with a pause state of a video according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 10A-10B.
Fig. 5NN illustrates a video playback view 500 during playback of a media item. Fig. 5NN also illustrates that user input 586 (e.g., button presses) is detected on play/pause button 5004.
Fig. 5OO illustrates a media item displayed in video playback view 500 during an exemplary pause mode or pause state (e.g., in response to detecting user input 586). In some embodiments, during this exemplary pause mode, a countdown clock 588 is displayed over the media item displayed in the video playback view 500 (e.g., over a still image or frame representing the point in the video where the video was paused). In some embodiments, the countdown clock 588 is translucent or partially transparent. In some embodiments, while the media item is displayed during the pause mode, one or more still images 590 are displayed superimposed over the media item. In some embodiments, the still images 590 include representative frames selected from a predetermined time interval prior to the point at which the media item was paused. For example, the still images 590 include four frames from dramatic or interesting scenes within the five minutes of the movie preceding the current pause point in playback.
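Selecting the still images shown during the pause state — representative frames drawn from a window immediately before the pause point — could be sketched as follows; the window length and frame count follow the example above but are otherwise assumptions.

```swift
// Hypothetical selection of still images for the pause state: choose up to
// four representative frames from the five minutes preceding the pause point.

struct RepresentativeFrame { var time: Double; var imageName: String }

func pauseStateFrames(allFrames: [RepresentativeFrame],
                      pausePoint: Double,
                      window: Double = 300.0,     // seconds before the pause point
                      maxCount: Int = 4) -> [RepresentativeFrame] {
    let inWindow = allFrames.filter { $0.time >= pausePoint - window && $0.time <= pausePoint }
    return Array(inWindow.suffix(maxCount))
}
```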
Fig. 5PP illustrates that in some embodiments, the display of the countdown clock 588 includes a display of an animation (e.g., a screen saver or a slideshow) corresponding to a predetermined time interval before another exemplary pause state or another exemplary representation of a pause state is displayed. In some embodiments, if a user input is detected before the predetermined time interval represented by the countdown clock 588 has elapsed, playback of the media item resumes. In other embodiments, advancement of the countdown clock 588 is instead paused (e.g., for a predetermined time or indefinitely) until another user input corresponding to a request to resume playback of the media item is detected.
Fig. 5QQ illustrates an exemplary representation of the countdown clock 588 after its associated predetermined time interval has completely elapsed (e.g., the ring is full, the countdown clock 588 has reached 100% opacity, the countdown clock 588 has grown to a certain size, etc.). In some embodiments, after the predetermined time interval has elapsed, an animation or transition is displayed before another exemplary representation of the paused state of the media item is displayed.
Fig. 5RR illustrates another exemplary representation of a pause state of a media item. In some embodiments, a slideshow or screen saver of still images is displayed, corresponding to the point in the media item where the media item was paused. For example, a slideshow or screen saver that includes ten still images from the three to five minutes before the pause point in the movie is displayed (e.g., in random or cyclic order). In some embodiments, when the slideshow or screen saver is displayed, one or more pause state elements are displayed, such as a current time, a state indicator (e.g., a blinking pause symbol), media information, and/or an end time indicator. In some embodiments, the still images are representative frames of the media item pre-selected for the slideshow or screen saver (e.g., pre-selected by the movie's director). In some embodiments, the still images are frames automatically extracted from the media item.
Fig. 5SS illustrates that, in some embodiments, one or more pause state elements are modified or updated over time. For example, the current time shows that the time is now 8:00 PM (instead of 7:41 PM, as shown in fig. 5RR), and the end time indicator has also been updated to indicate that the paused media item will end playback at 8:53 PM (instead of at 8:34 PM, as shown in fig. 5RR). Fig. 5SS also illustrates detection of an exemplary user input 592 (e.g., a button press) on menu button 5002.
In some embodiments, product page view 572 (e.g., FIG. 5 KK) is displayed in response to user input 592.
Fig. 6A-6C illustrate a flowchart of a method 600 of changing visual characteristics of a user interface in connection with changing audio components corresponding to user interface objects, according to some embodiments. Method 600 is performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) in communication with a display and an audio system.
In some embodiments, the audio system includes a digital-to-analog converter. In some embodiments, the audio system includes a signal amplifier. In some embodiments, the audio system is coupled with one or more speakers. In some embodiments, the audio system is coupled to a plurality of speakers. In some embodiments, the audio system includes one or more speakers. In some embodiments, the audio system is integrated with a display (e.g., a television with audio processing circuitry and speakers). In some embodiments, the audio system is distinct and separate from the display (e.g., the display screen and the separate audio system).
Some of the operations in method 600 are optionally combined and/or the order of some of the operations is optionally changed. In some embodiments, the user interfaces in fig. 5A-5G are used to illustrate the process described with respect to method 600.
As described below, the method 600 includes providing a sound output for a screen saver user interface. The method reduces the cognitive burden on the user when interacting with user interface objects (e.g., control user interface objects), thereby creating a more efficient human-machine interface. By providing additional information (e.g., indicating the status of the screensaver), unnecessary operations (e.g., interacting with the device to check the status of the device) may be avoided or reduced. Providing a sound output helps the user interact with the device more efficiently, and reducing unnecessary operations saves power.
The device provides (602) data for presenting a user interface generated by the device to a display. In some embodiments, the user interface is automatically generated by the device. The user interface includes a first user interface object having a first visual characteristic. The user interface also includes a second user interface object, different from the first user interface object, having a second visual characteristic. For example, a device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) automatically generates a graphical user interface that includes a first user interface object having a first visual characteristic and a second user interface object having a second visual characteristic. The device sends data to a display (e.g., display 450) for use by the display to display, show, or otherwise present the graphical user interface (e.g., user interface 517 with first user interface object 501-a (first bubble) and second user interface object 501-b (second bubble), as shown in fig. 5B). In fig. 5B, the first user interface object 501-a and the second user interface object 501-b have different visual characteristics on the display, such as different sizes and different locations.
In some embodiments, the first visual characteristic includes (604) a size and/or a location of the first user interface object. In some embodiments, the second visual characteristic includes a size and/or a location of the second user interface object. For example, as explained above, the first user interface object 501-a and the second user interface object 501-B in FIG. 5B have different sizes and different locations on the display.
In some embodiments, a first visual characteristic of the first user interface object and a second visual characteristic of the second user interface object are determined (606) independent of the user input. For example, a first visual characteristic of a first user interface object and a second visual characteristic of a second user interface object may be initially determined independent of user input. In some embodiments, the first user interface object and the second user interface object are pseudo-randomly generated. For example, the direction, speed, location, and/or size of movement of the corresponding user interface object is determined pseudo-randomly. In some embodiments, all user interface objects in the user interface are generated independently of user input. In some embodiments, the changes to the first user interface object and the second user interface object are determined pseudo-randomly.
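By way of illustration only (no code appears in the patent itself), the pseudo-random generation described above might be sketched as follows in Swift; the type names, value ranges, and use of the standard-library random API are assumptions made for the example, not part of the described embodiments.

```swift
// Hypothetical, self-contained sketch of generating screen-saver objects
// (such as the bubbles described above) pseudo-randomly, independent of user input.
struct ScreenSaverObject {
    var x: Double, y: Double      // location on the display
    var size: Double              // e.g., bubble diameter
    var dx: Double, dy: Double    // direction and speed of movement
}

func makeRandomObject(displayWidth: Double, displayHeight: Double) -> ScreenSaverObject {
    ScreenSaverObject(
        x: Double.random(in: 0...displayWidth),
        y: Double.random(in: 0...displayHeight),
        size: Double.random(in: 20...120),
        dx: Double.random(in: -1...1),
        dy: Double.random(in: 0.5...2.0)   // e.g., bubbles drift at varying speeds
    )
}
```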
The device provides (608) sound information for providing a sound output to the audio system. The sound output includes a first audio component corresponding to the first user interface object. The sound output further includes a second audio component corresponding to the second user interface object and different from the first audio component. For example, the first audio component may be a first tone and the second audio component may be a second tone, where each tone has one or more auditory properties such as pitch, timbre, volume, attack, sustain, decay, and the like. In some embodiments, the sound output includes a third audio component independent of the first user interface object and the second user interface object. In some embodiments, the third audio component is independent of any user interface object in the user interface.
In some embodiments, a second audio component is selected (610) based at least in part on the first audio component. In some embodiments, the pitch of the second audio component is selected based on the pitch of the first audio component. For example, when the first audio component and any other audio component (if any) that is output concurrently with the first audio component have a pitch (or note) of a particular chord (e.g., a minor chord), the second audio component is selected to have a pitch (or note) of the particular chord.
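A minimal sketch of choosing the pitch of a second audio component so that it shares a chord with the component(s) already sounding is shown below; the chord frequencies and the selection rule are illustrative assumptions, not values from the patent.

```swift
// Pitches (Hz) of a C minor chord: C4, E-flat 4, G4 (illustrative values).
let minorChordHz: [Double] = [261.63, 311.13, 392.00]

func pitchForNewComponent(existingPitches: [Double]) -> Double {
    // Prefer a chord tone that is not already sounding; otherwise double one.
    minorChordHz.first { !existingPitches.contains($0) } ?? minorChordHz[0]
}
```

For example, if the first audio component is already sounding C4 (261.63 Hz), this sketch would assign the second component E-flat 4 (311.13 Hz), keeping both tones within the same minor chord.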
In some embodiments, the user interface includes a plurality of user interface objects, and the sound output has respective audio components corresponding to respective ones of the plurality of user interface objects. In some embodiments, the sound output has at least one audio component (e.g., a reference tone or melody) that is independent of the plurality of user interface objects.
When the user interface is presented on the display and the sound output is provided, the device provides (612 of fig. 6B) data to the display for updating the user interface and provides sound information to the audio system for updating the sound output. Updating the user interface and updating the sound output includes: changing at least one of the first visual characteristics (e.g., size and/or position) of the first user interface object in conjunction with (e.g., concurrently with) changing the first audio component corresponding to the first user interface object; and changing at least one of the second visual characteristics (e.g., size and/or position) of the second user interface object in conjunction with (e.g., concurrently with) changing the second audio component corresponding to the second user interface object. For example, the device sends data to a display (e.g., display 450 of fig. 5A-5G) that is used by the display to update the graphical user interface (e.g., by moving the first user interface object 501-a and the second user interface object 501-b between their respective positions in the user interface 517, as shown in fig. 5B and 5C). Note that the size and location of the first user interface object 501-a have changed between fig. 5B and fig. 5C. Similarly, the size of the second user interface object 501-b has changed between fig. 5B and fig. 5C. The change in the visual characteristics of the first user interface object 501-a occurs in conjunction with a change in the audio component corresponding to the first user interface object 501-a (e.g., as represented by the changed first audio component 503 in fig. 5B and 5C). For example, the sound corresponding to the first user interface object 501-a changes as the first bubble expands and moves on the display. Similarly, a change in the visual characteristics of the second user interface object 501-b occurs in conjunction with a change in the audio component corresponding to the second user interface object 501-b. For example, the sound corresponding to the second user interface object 501-b changes as the second bubble expands on the display.
Providing data for updating the user interface occurs independent of user input. In some embodiments, providing sound information for updating the sound output occurs independent of user input. For example, the displayed user interface and corresponding sound are automatically updated without user input. In some embodiments, the displayed user interface and corresponding sound are updated whenever no user input is detected (e.g., the user interface generated by the device having the first user interface object and the second user interface object is a screen saver user interface, and the screen saver continues to update whenever no button is pressed on the remote control, no contact is detected on the touch sensitive surface of the remote control, etc.). In some embodiments, after updating the displayed user interface and corresponding sound when no user input is detected, the user input is detected and in response the device stops providing data for updating the user interface and stops providing sound information for updating the sound output. Instead, the device provides data to the display to present a second user interface (e.g., a user interface that is displayed just prior to displaying a screensaver user interface (such as screensaver user interface 517 shown in fig. 5A-5F) with the first and second user interface objects generated by the device).
In some embodiments, a first audio component corresponding to the first user interface object is changed (614) in accordance with a change to at least one visual characteristic of the first user interface object. For example, after determining a change to (at least one of) the first visual characteristics of the first user interface object, a change to the first audio component is determined based on the change to the first visual characteristics of the first user interface object. In some embodiments, the second audio component corresponding to the second user interface object is changed in accordance with a change to at least one visual characteristic of the second user interface object. For example, after determining the change to the second visual characteristic of the second user interface object, a change to the second audio component is determined based on the change to the second visual characteristic of the second user interface object.
In some embodiments, the audio component corresponding to the respective user interface object (e.g., the first user interface object) is changed in accordance with the change to the respective user interface object (e.g., the change to the at least one visual characteristic of the first user interface object) independent of the change to the other user interface object (e.g., the change to the at least one visual characteristic of the second user interface object). For example, the audio component corresponding to the respective user interface object changes based only on the change of the respective user interface object.
In some embodiments, the audio components corresponding to the plurality of user interface objects (including the respective user interface object) are changed according to a change to the respective user interface object (e.g., a change to at least one visual characteristic of the first user interface object). For example, when a respective user interface object appears in the user interface, the volume of the audio component corresponding to the plurality of user interface objects (except for the respective user interface object) is reduced.
In some embodiments, at least one visual characteristic of the first user interface object is changed (616) in accordance with the change to the first audio component. For example, after determining the change to the first audio component, a change to a first visual characteristic of the first user interface object is determined. In some embodiments, at least one visual characteristic of the second user interface object is changed in accordance with the change to the second audio component.
In some embodiments, updating the user interface and updating the sound output further includes (618) ceasing to display the first user interface object and ceasing to provide the sound output including the first audio component corresponding to the first user interface object (e.g., the first user interface object expands, fades out, and disappears from the user interface, as shown in fig. 5E); stopping displaying the second user interface object and stopping providing a sound output comprising a second audio component corresponding to the second user interface object (e.g., the second user interface object expands, fades out, and disappears from the user interface); and/or displaying one or more respective user interface objects and providing a sound output including one or more respective audio components corresponding to the one or more respective user interface objects (e.g., displaying a user interface object that is different from the first user interface object and the second user interface object, as shown in fig. 5C).
In some embodiments, updating the sound output includes (620) determining whether a predetermined inactivity criterion is met (e.g., no user input has been received, or the remote control has been set down, for a predetermined period of time). In accordance with a determination that the predetermined inactivity criterion is met, the device changes the volume of the sound output. In some embodiments, changing the volume of the sound output includes increasing or decreasing the volume of the corresponding audio component.
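The inactivity check could be sketched as follows; the five-minute threshold and the halving of the volume are assumed values, and the text above allows either increasing or decreasing the volume once the criterion is met.

```swift
// Minimal sketch of the inactivity criterion described above (assumed values).
func updatedScreenSaverVolume(current: Double, secondsSinceLastInput: Double) -> Double {
    let inactivityThreshold = 300.0   // e.g., five minutes without user input
    return secondsSinceLastInput >= inactivityThreshold ? current * 0.5 : current
}
```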
In some embodiments, the pitch of the respective audio component corresponds to (622 of fig. 6C) the initial size of the corresponding user interface object (e.g., the pitch of the audio component 503 corresponds to the initial size of the user interface object 501-a in fig. 5A), the stereo balance of the respective audio component corresponds to the location of the corresponding user interface object on the display (e.g., the stereo balance of the audio component 503 corresponds to the location of the user interface object 501-a on the display 450 in fig. 5A), and/or the change in the volume of the respective audio component corresponds to a change in the size of the corresponding user interface object (e.g., the change in the volume of the audio component 503 corresponds to a change in the size of the user interface object 501-a in fig. 5B). In some embodiments, the volume of the respective audio component corresponds to the size of the corresponding user interface object (e.g., the volume decreases as the size of the corresponding user interface object increases, as shown in fig. 5A-5F, or alternatively, the volume increases as the size of the corresponding user interface object increases). In some embodiments, the audio component is pseudo-randomly generated. For example, the pitch, volume and/or stereo balance of the respective audio component is determined pseudo-randomly. Thus, the audio component is not part of the predetermined sequence of notes.
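The correspondence just described (pitch from initial size, stereo balance from location, volume from size) might be sketched as below; the struct, function names, and numeric scales are assumptions made for illustration only.

```swift
// Hypothetical mapping from a user interface object's visual characteristics to
// the auditory properties of its corresponding audio component.
struct AudioComponent {
    var pitch: Double    // Hz; derived from the object's initial size
    var pan: Double      // -1 (left) ... +1 (right); derived from horizontal location
    var volume: Double   // 0 ... 1; changes as the object's size changes
}

func audioComponent(forObjectOfSize size: Double, atX x: Double,
                    displayWidth: Double) -> AudioComponent {
    AudioComponent(
        pitch: 880 - 4 * size,               // larger initial size -> lower pitch
        pan: (x / displayWidth) * 2 - 1,     // stereo balance follows screen position
        volume: max(0.1, 1.0 - size / 200)   // volume falls as the bubble grows
    )
}
```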
In some embodiments, the device detects (624) user input (e.g., detects pressing a button or picking up a remote control). In response to detecting the user input, the device provides sound information (e.g., decreases the volume and/or increases the onset of the respective audio component) to the audio system for changing the respective audio component corresponding to the respective user interface object. As used herein, onset refers to how hard a note is hit (e.g., the rate at which the amplitude of sound increases over time to its peak volume, as shown in fig. 5H). In response to detecting the user input, the device further provides data to the display for updating the user interface and displaying one or more control user interface objects (e.g., including (additional) control user interface objects such as buttons, icons, sliders, menus, etc. in the user interface, or replacing the user interface with a second user interface including one or more control user interface objects, as shown in fig. 5G).
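A sketch of the audio adjustment on detecting user input follows; the field names and scale factors are assumptions, and only the audio change (not the display of control user interface objects) is modeled.

```swift
// Sketch: when a button press or remote pick-up is detected, decrease the volume
// and increase the onset of the respective audio component, per the description above.
struct ComponentState {
    var volume: Double      // 0 ... 1
    var onsetRate: Double   // rate at which amplitude rises to its peak volume
}

func respondToUserInput(_ state: inout ComponentState) {
    state.volume *= 0.5     // decrease the volume of the respective audio component
    state.onsetRate *= 1.5  // increase the onset: the note reaches peak amplitude sooner
}
```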
In some embodiments, the sound information provided to the audio system includes (626) information for providing a sound output including audio components that are not harmonious with the respective audio components corresponding to the respective user interface objects. In some embodiments, the audio components that are not harmonious with the corresponding audio components have a preset (e.g., fixed) pitch.
In some embodiments, the first audio component and the second audio component are harmonic. In some embodiments, the respective audio components corresponding to the respective user interface objects are harmonic (e.g., the respective audio components have the pitch of a particular chord).
In some embodiments, prior to detecting a user input (e.g., a user picking up a remote control), the device provides (628) data for displaying the user interface and updating the user interface to the display, without providing sound information for providing a sound output to the audio system. After detecting the user input, the device provides data for displaying the user interface and updating the user interface to the display and provides sound information for providing the sound output and updating the sound output to the audio system (e.g., stops providing sound output, as illustrated by fig. 5G, or alternatively reduces the volume of the sound output, etc.). In some embodiments, the first user interface object and the second user interface object move more slowly before the user input is detected than after the user input is detected.
It should be understood that the particular order in which the operations in fig. 6A-6C have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize various ways to reorder the operations described herein. Further, it should be noted that details regarding other methods described herein (e.g., methods 700, 800, 900, and 1000) are also applicable in a similar manner to method 600 described above with respect to fig. 6A-6C. For example, the user interface objects, user interfaces, and sound outputs described above with reference to method 600 optionally have one or more of the characteristics of the user interface objects, user interfaces, and sound outputs described herein with reference to other methods described herein (e.g., methods 700, 800, 900, and 1000). For the sake of brevity, these details are not repeated here.
Fig. 7A-7D are flowcharts illustrating methods 700 of providing sound information corresponding to a user's interaction with a user interface object, according to some embodiments. Method 700 is performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) in communication with a display and an audio system. In some embodiments, the electronic device communicates with a user input device (e.g., a remote user input device such as a remote control) having a touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. In some embodiments, the user input device is integrated with the electronic device. In some embodiments, the user input device is separate from the electronic device.
In some embodiments, the audio system includes a digital-to-analog converter. In some embodiments, the audio system includes a signal amplifier. In some embodiments, the audio system includes one or more speakers. In some embodiments, the audio system is integrated with a display (e.g., a television with audio processing circuitry and speakers). In some embodiments, the audio system is distinct and separate from the display (e.g., the display screen and the separate audio system). In some embodiments, the device includes a touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface (e.g., the touch-sensitive surface is integrated with a remote control of a television).
Some of the operations in method 700 are optionally combined and/or the order of some of the operations is optionally changed. In some embodiments, the user interfaces in fig. 5I-5S are used to illustrate the process described with respect to method 700.
As described below, the method 700 provides a sound output corresponding to a user's interaction with a user interface object. The method reduces the cognitive burden on the user when interacting with user interface objects (e.g., control user interface objects), thereby creating a more efficient human-machine interface. Providing a sound output helps a user to manipulate user interface objects faster and more efficiently, thereby conserving power.
The device provides (702) data for presenting a user interface having a plurality of user interface objects, including a control user interface object (e.g., a slider of a slider bar, etc.) at a first location on the display. The control user interface object is configured to control a corresponding parameter (e.g., a current position in the navigation slider). In some embodiments, the control user interface object is not an audio control user interface object. For example, the control user interface object is a slider (e.g., a playhead) of a slider bar that controls the current position of a video (e.g., a movie) being displayed on a display, as shown in fig. 5J to 5P and 5R to 5S (slider 504 of slider bar 506).
The device receives (704) a first input (e.g., a drag gesture on a touch-sensitive surface) corresponding to a first interaction (e.g., an interaction for adjusting a position of a slider) with a control user interface object on a display. Upon receiving (706) a first input corresponding to a first interaction with a control user interface object on the display (e.g., concurrent with at least a portion of the first input), the device provides (708) data to the display for moving the control user interface object from a first location on the display to a second location on the display that is different from the first location on the display according to the first input. For example, as shown in fig. 5K-5L, drag gesture 512 drags slider 504 from location 504-1 to location 504-2.
In some embodiments, in response to receiving a first input corresponding to a first interaction with a control user interface object on a display: the device provides (710) data to the display to move the control user interface object from a first location on the display to a second location on the display that is different from the first location on the display according to the first input and to visually distinguish the control user interface object during movement of the control user interface object from the first location on the display to the second location on the display according to the first input (e.g., the device displays a tail of the slider movement and/or lengthens or stretches the slider in a direction from the first location on the display to the second location on the display, as shown in fig. 5K-5L, fig. 5M-5N, and fig. 5R-5S).
Upon receiving the first input corresponding to the first interaction with the control user interface object on the display, the device provides (712) first sound information to the audio system for providing a first sound output having one or more characteristics that are distinct from the respective parameter controlled by the control user interface object and that change in accordance with movement of the control user interface object from the first position on the display to the second position on the display (e.g., the first sound output is audio feedback corresponding to movement of the slider control). In some embodiments, in response to receiving the first input, the data is provided to the display and the first sound information is provided to the audio system. In some embodiments, the first sound output is provided by the audio system for the duration of the first interaction with the control user interface object.
In some embodiments, in accordance with a determination that the first input meets the first input criteria, the first sound output has (714 of fig. 7B) a first set of characteristics (e.g., pitch, volume). In accordance with a determination that the first input meets a second input criterion, the first sound output has a second set of characteristics (e.g., pitch, volume) that is different from the first set of characteristics. For example, if the first input moves faster than the predetermined speed threshold, the volume of the first sound output increases, and if the first input moves slower than the predetermined speed threshold, the volume of the first sound output decreases.
In some embodiments, the one or more characteristics include (716) a pitch of the first sound output, a volume of the first sound output, and/or a distribution of the first sound output (also referred to as "balance") over a plurality of spatial channels. In some embodiments, the one or more characteristics include a timbre of the first sound output and/or one or more audio envelope characteristics (e.g., attack, sustain, decay, and/or release characteristics) of the first sound output. For example, as illustrated in fig. 5I-5R, the device changes the pitch and balance of the sound output in accordance with movement of the control user interface object from a first position to a second position on the display. In some embodiments, only one characteristic of the sound output (e.g., pitch or balance) is based on movement of the control user interface object.
In some embodiments, the audio system is coupled with a plurality of speakers corresponding to a plurality of spatial channels (718). In some embodiments, the plurality of spatial channels includes a left channel and a right channel. In some embodiments, the plurality of spatial channels includes a left channel, a right channel, a front channel, and a rear channel. In some embodiments, the plurality of spatial channels includes a left channel, a right channel, an upper channel, and a lower channel. In some embodiments, the plurality of spatial channels includes a left channel, a right channel, a front channel, a rear channel, an upper channel, and a lower channel. In some embodiments, providing the first sound information for providing the first sound output to the audio system includes determining a distribution (also referred to as a balance) of the first sound output over the plurality of spatial channels based on a direction of movement of the control user interface object from the first location on the display to the second location on the display. In some embodiments, providing the first sound information for providing the first sound output to the audio system includes adjusting the distribution of the first sound output over the plurality of spatial channels in accordance with the direction of movement of the control user interface object from the first location on the display to the second location on the display. For example, leftward movement of the control user interface object results in a leftward shift of the distribution of the first sound output over the plurality of spatial channels; rightward movement of the control user interface object results in a rightward shift of the distribution of the first sound output over the plurality of spatial channels. In some embodiments, the first sound information includes information for providing the first sound output according to the determined distribution of the first sound output over the plurality of spatial channels.
In some embodiments, the audio system is coupled (720) with a plurality of speakers corresponding to a plurality of spatial channels (e.g., as described above). In some embodiments, providing the first sound information for providing the first sound output to the audio system includes determining a distribution of the first sound output over a plurality of spatial channels (e.g., a ratio of an intensity of the first sound output to be output through the left channel and an intensity of the first sound output to be output through the right channel) as a function of a position of the control user interface object on the display during movement of the control user interface object from the second position on the display to the third position on the display. In some embodiments, providing the first sound information for providing the first sound output to the audio system includes adjusting a distribution of the first sound output over the plurality of spatial channels according to a position of the control user interface object on the display during movement of the control user interface object from the second position on the display to the third position on the display. For example, when the slider of the horizontal slider bar is positioned to the left of the midpoint of the slider bar, the distribution of the first sound output across the plurality of spatial channels shifts to the left; when the slider of the horizontal slider bar is positioned to the right of the midpoint of the slider bar, the distribution of the first sound output across the plurality of spatial channels shifts to the right.
In some embodiments, the first sound information includes information for providing a first sound output according to the determined distribution of the first sound output over the plurality of spatial channels. For example, sound output is played with panning values (e.g., stereo panning (left/right) or other multi-channel panning) determined based on the position of the control user interface object.
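The panning behavior described above could be sketched as a simple mapping from slider position to a stereo pan value; the linear mapping and function name are illustrative assumptions.

```swift
// Sketch: derive a left/right pan value from the position of the control user
// interface object (e.g., a playhead) on the slider bar.
func panValue(forPlayheadAt position: Double, trackWidth: Double) -> Double {
    // Map 0...trackWidth to -1 (full left) ... +1 (full right); the midpoint of the
    // slider bar corresponds to a centered balance.
    (position / trackWidth) * 2 - 1
}
```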
In some embodiments, providing the first sound information for providing the first sound output to the audio system includes (722 of fig. 7C) determining a volume of the first sound output based on the speed of movement of the control user interface object from the first location on the display to the second location on the display. In some embodiments, providing the first sound information for providing the first sound output to the audio system includes adjusting the volume of the first sound output in accordance with the speed of movement of the control user interface object from the first location on the display to the second location on the display. In some embodiments, the first sound information includes information for providing the first sound output according to the determined volume of the first sound output. In some embodiments, the speed of movement of the control user interface object from the first position on the display to the second position on the display is higher than the speed of movement of the control user interface object from the second position on the display to the third position on the display (described with reference to operation 728), and the volume of the first sound output is lower than the volume of the second sound output (described with reference to operation 728) (e.g., when the control user interface object moves faster, the volume of the sound output decreases). In some embodiments, the speed of movement of the control user interface object (e.g., the slider of the slider bar) from the first position on the display to the second position on the display is higher than the speed of movement of the control user interface object from the second position on the display to the third position on the display (described with reference to operation 728), and the volume of the first sound output is higher than the volume of the second sound output (described with reference to operation 728) (e.g., when the control user interface object moves faster, the volume of the sound output increases).
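Both speed-to-volume variants above could be sketched as follows; the speed ceiling and volume range are assumed values chosen for the example.

```swift
// Sketch: scale the sound-output volume with the drag speed of the control user
// interface object; the flag selects between the two variants described above.
func volume(forDragSpeed pointsPerSecond: Double, fasterMeansQuieter: Bool) -> Double {
    let normalized = min(pointsPerSecond / 2000, 1.0)   // clamp to a nominal maximum speed
    return fasterMeansQuieter ? 1.0 - 0.7 * normalized
                              : 0.3 + 0.7 * normalized
}
```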
In some embodiments, the control user interface object is a slider (fig. 5J-5S) on a slider bar (724). The pitch of the first sound output varies according to the position of the control user interface object on the slider bar (e.g., the distance of the control user interface object from one end of the slider bar, from the center of the slider bar, or from the nearest end of the slider bar). In some embodiments, the first sound output has a first pitch when the slider is in the first position and a second pitch, lower than the first pitch, when the slider is in the second position to the left of the first position. In some embodiments, the pitch is lower as the slider moves farther to the right.
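A sketch of varying the pitch with the playhead position follows; the frequency range and the direction of the mapping are illustrative assumptions only, since either direction is consistent with some of the embodiments described above.

```swift
// Sketch: pitch of the scrubbing sound as a function of the playhead position
// on the slider bar (here, higher pitch toward the right; values are assumed).
func pitch(forPlayheadAt position: Double, trackWidth: Double) -> Double {
    let lowHz = 220.0, highHz = 880.0
    return lowHz + (position / trackWidth) * (highHz - lowHz)
}
```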
In some embodiments, after responding to the first input, the device receives (726) a second input corresponding to a second interaction with the control user interface object on the display (e.g., an interaction for further adjusting the position of the slider on the slider bar). In response to, and upon, receiving the second input corresponding to the second interaction with the control user interface object on the display: the device provides (728) data to the display for moving the control user interface object from the second location on the display to a third location on the display that is different from the second location on the display in accordance with the second input; and provides second sound information to the audio system for providing a second sound output having one or more characteristics that change in accordance with movement of the control user interface object from the second position on the display to the third position on the display (e.g., the second sound output is audio feedback corresponding to additional movement of the slider control). In some embodiments, the respective sound output has a first pitch and the subsequent sound output has a second pitch different from the first pitch.
In some embodiments, the control user interface object is a slider on a slider bar (730 of fig. 7D). The second position on the display is not an end point of the slider bar. In some embodiments, the third location on the display (or another location that is a previous location on the display) is not an end point of the slider bar. In some embodiments, the device receives (732) an input corresponding to a respective interaction with the control user interface object on the display. In response to receiving the input corresponding to the respective interaction with the control user interface object on the display: the device provides (734) data to the display for moving the control user interface object to a fourth position on the display in accordance with the input, wherein the fourth position on the display is an end point of the slider bar. In some embodiments, the control user interface object moves from the second position on the display. In some embodiments, the control user interface object moves from the third location on the display (or another location that is a previous location on the display). In some embodiments, the fourth location on the display is different from the second location on the display. In some embodiments, the fourth location on the display is different from the third location on the display. The device also provides sound information to the audio system for providing a third sound output to indicate that the control user interface object is located at the end of the slider bar, wherein the third sound output is different from the first sound output. In some embodiments, the third sound output is different from the second sound output. In some embodiments, the third sound output is a bounce sound (e.g., a reverberating sound) that provides audio feedback corresponding to a rubber band effect (e.g., as illustrated in fig. 5O-5P).
It should be understood that the particular order in which the operations in fig. 7A-7D have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize various ways to reorder the operations described herein. Further, it should be noted that details regarding other methods described herein (e.g., methods 600, 800, 900, and 1000) are also applicable in a similar manner to method 700 described above with respect to fig. 7A-7D. For example, the user interface objects, user interfaces, and sound outputs described above with reference to method 700 optionally have one or more of the characteristics of the user interface objects, user interfaces, and sound outputs described herein with reference to other methods described herein (e.g., methods 600, 800, 900, and 1000). For the sake of brevity, these details are not repeated here.
Fig. 8A-8C are flowcharts illustrating a method 800 of providing sound information corresponding to a user's interaction with a user interface object, according to some embodiments. Method 800 is performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) in communication with a display and an audio system. In some embodiments, the electronic device communicates with a user input device (e.g., a remote user input device such as a remote control) having a touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. In some embodiments, the user input device is integrated with the electronic device. In some embodiments, the user input device is separate from the electronic device. Some of the operations in method 800 are optionally combined and/or the order of some of the operations is optionally changed.
In some embodiments, the audio system includes a digital-to-analog converter. In some embodiments, the audio system includes a signal amplifier. In some embodiments, the audio system includes one or more speakers. In some embodiments, the audio system is integrated with a display (e.g., a television with audio processing circuitry and speakers). In some embodiments, the audio system is distinct and separate from the display (e.g., the display screen and the separate audio system). In some embodiments, the device includes a touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface (e.g., the touch-sensitive surface is integrated with a remote control of a television).
Some of the operations in method 800 are optionally combined and/or the order of some of the operations is optionally changed. In some embodiments, the user interfaces in fig. 5T-5 AA are used to illustrate the process described with respect to method 800.
As described below, the method 800 provides a sound output corresponding to a user's interaction with a user interface object. The method reduces the cognitive burden on the user when interacting with the user interface object (e.g., by moving the current focus), thereby creating a more efficient human-machine interface. Providing a sound output helps a user to manipulate user interface objects faster and more efficiently, thereby conserving power.
The device provides (802) data for presenting a first user interface having a plurality of user interface objects to a display, wherein a current focus is on the first user interface object of the plurality of user interface objects. In some embodiments, the first user interface object is visually distinguished from other user interface objects of the plurality of user interface objects when the current focus is on the first user interface object. For example, as shown in fig. 5T, the application icons 532-e are visually distinguished from other application icons 532 by being slightly larger and having highlighted boundaries.
While the display is presenting the first user interface, the device receives (804) an input (e.g., a drag gesture on the touch-sensitive surface) corresponding to a request to change a position of a current focus in the first user interface, the input having a direction and an amplitude (e.g., a speed and/or distance of the input). In some embodiments, the electronic device communicates with a remote control and receives input from the remote control. For example, as shown in fig. 5T, user input 536 is detected on the touch-sensitive surface 451 of the remote control 5001.
In response to receiving an input corresponding to a request to change a position of a current focus in the first user interface, the device provides (806) data to the display for moving the current focus from the first user interface object to the second user interface object, wherein the second user interface object is selected for the current focus according to a direction and/or magnitude of the input. For example, as shown in fig. 5T-5U, the device moves the current focus from application icon 532-e (fig. 5T) to application icon 532-d (fig. 5U) in response to user input 536. In some embodiments, the second user interface object is visually distinguished from other user interface objects of the plurality of user interface objects when the current focus is on the second user interface object. In some embodiments, when the current focus is on a respective user interface object, the respective user interface object is visually distinguished from other user interface objects of the plurality of user interface objects.
Further, in response to receiving an input corresponding to a request to change a position of the current focus in the first user interface, the device provides first sound information to the audio system for providing a first sound output corresponding to movement of the current focus from the first user interface object to the second user interface object, wherein the first sound output is provided concurrently with display of the current focus moving from the first user interface object to the second user interface object, and a pitch of the first sound output is determined based at least in part on: the size of the first user interface object (e.g., low pitch if the first user interface object is large and high pitch if the first user interface object is small), the type of the first user interface object (e.g., low pitch if the first user interface object is a category icon and high pitch if the first user interface object is a movie poster), the size of the second user interface object (low pitch if the second user interface object is large and high pitch if the second user interface object is small), and/or the type of the second user interface object (e.g., low pitch if the second user interface object is a category icon and high pitch if the second user interface object is a movie poster). For example, the sound output 538-1 shown in FIG. 5U for the application icon 532-d is higher in pitch than the sound output 538-9 shown in FIG. 5CC for the movie icon 534-c that is larger than the application icon 532-d.
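The size- and type-dependent pitch selection described above could be sketched as follows; the base pitches, the area scaling, and the enum name are assumptions made for the example, not values from the patent.

```swift
// Hypothetical sketch: pick the pitch of the "move focus" sound from the size
// and type of the destination user interface object.
enum UIObjectType { case categoryIcon, appIcon, moviePoster }

func focusMovePitch(for type: UIObjectType, area: Double) -> Double {
    let base: Double
    switch type {
    case .categoryIcon: base = 300   // category icons -> lower pitch
    case .appIcon:      base = 500
    case .moviePoster:  base = 600   // movie posters -> higher pitch
    }
    // Larger objects lower the pitch, as with the movie icon versus the app icon above.
    let sizeFactor = max(0.5, 1.0 - area / 200_000)
    return base * sizeFactor
}
```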
In some embodiments, the first sound output is characterized by a distribution over a plurality of spatial channels, one or more audio envelope characteristics (e.g., attack, decay, sustain, and/or release), timbre, volume, and/or pitch. In some embodiments, the distribution over a plurality of spatial channels, one or more audio envelope characteristics (e.g., attack, decay, sustain, and/or release), timbre, volume, and/or pitch are determined based on any of: the size of the first user interface object, the type of the first user interface object, the size of the second user interface object, the type of the second user interface object, the magnitude of the input, and/or the direction of the input.
In some embodiments, the pitch of the first sound output is determined based on a characteristic (e.g., size and/or type) of the second user interface object (e.g., and not any characteristic of the first user interface object). In some embodiments, the first sound output is an "enter" sound or a "move to" sound that provides the user with audio feedback indicating the size and/or type of the user interface object she will navigate to. In some embodiments, the pitch of the first sound output is determined based on a characteristic (size and/or type) of the first user interface object (e.g., and not any characteristic of the second user interface object). In some embodiments, the first sound output is an "exit" sound or a "remove" sound that provides the user with audio feedback indicating the size and/or type of the user interface object she will navigate away from.
In some embodiments, the volume of the first sound output is determined (808) based on the magnitude of the input (e.g., the speed and/or distance of the input). For example, in accordance with a determination that the speed and/or distance of the input exceeds a predetermined threshold, the volume of the first sound output is reduced.
In some embodiments, one or more user interface objects are located between the first user interface object and the second user interface object on the display, and the current focus is moved from the first user interface object to the second user interface object via the one or more user interface objects according to the direction and/or magnitude of the input (e.g., the current focus in fig. 5W-5X is moved from application icon 532-b to application icon 532-e via application icon 532-c and application icon 532-d).
In some embodiments, in accordance with a determination that the magnitude of the input meets predetermined input criteria (e.g., speed and/or distance criteria), the volume of the first sound output is reduced (810). For example, the first sound output is a quieter "move to" sound (e.g., as described above) when the second user interface object is farther away from the first user interface object on the display than when it is closer, as shown in fig. 5W-5X. In some embodiments, a respective number (e.g., count) of user interface objects is located between the first user interface object and the second user interface object on the display. The current focus is moved from the first user interface object to the second user interface object via the user interface objects located between the first user interface object and the second user interface object, and the volume of the first sound output is based on the respective number (e.g., count) of user interface objects located between the first user interface object and the second user interface object on the display (e.g., giving the user audio feedback indicating how many user interface objects the current focus moves across).
In some embodiments, in accordance with a determination that the magnitude of the input meets a predetermined input criterion, the release of the first sound output is reduced (812). For example, for navigation across discrete objects (e.g., representations of multiple videos in a video selection user interface such as a television home screen), the first sound output has a shorter release when the magnitude of the input (e.g., its speed and/or distance) meets the predetermined input criterion, and a longer release when it does not (e.g., when the speed of the input is slower), which gives the user more gradual audio feedback about the input.
In some embodiments, the distribution of the first sound output over the plurality of spatial channels is adjusted (814) according to the position of the second user interface object in the first user interface (e.g., the left audio channel increases and/or the right audio channel decreases when the current focus moves to the user interface object located to the left of the first user interface, and the right audio channel increases and/or the left audio channel decreases when the current focus moves to the user interface object located to the right of the first user interface, as shown in fig. 5 CC-5 EE). In some embodiments, the distribution of the first sound output over the plurality of spatial channels is adjusted according to the relative position (e.g., up/down, left or right) of the second user interface object with respect to the first user interface object. In some embodiments, the distribution of the first sound output over the plurality of spatial channels is adjusted according to a movement (e.g., up/down, left or right) of the current focus from the first user interface object to the second user interface object. In some embodiments, the plurality of spatial channels includes a left audio channel, a right audio channel, an upper audio channel, and a lower audio channel. For example, the upper audio channel increases and/or the lower audio channel decreases when the current focus moves to a user interface object located on the upper side of the first user interface, and the lower audio channel increases and/or the upper audio channel decreases when the current focus moves to a user interface object located on the lower side of the first user interface.
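A sketch of distributing the focus-move sound over left/right and upper/lower channels according to the destination object's position follows; the simple linear cross-fade and type names are assumptions made for illustration.

```swift
// Sketch: per-channel gains derived from where the destination object sits on screen.
struct ChannelGains { var left: Double; var right: Double; var upper: Double; var lower: Double }

func channelGains(forObjectAtX x: Double, y: Double,
                  displayWidth: Double, displayHeight: Double) -> ChannelGains {
    let h = x / displayWidth    // 0 = left edge, 1 = right edge
    let v = y / displayHeight   // 0 = top edge,  1 = bottom edge
    return ChannelGains(left: 1 - h, right: h, upper: 1 - v, lower: v)
}
```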
In some embodiments, the pitch of the first sound output is determined (816 of fig. 8B) based on the size of the second user interface object and/or the type of the second user interface object (e.g., and not based on the size of the first user interface object and/or the type of the first user interface object). In response to receiving an input corresponding to a request to change a position of the current focus in the first user interface, the device provides second sound information to the audio system for providing a second sound output corresponding to movement of the current focus from the first user interface object to the second user interface object, wherein a pitch of the second sound output is determined based at least in part on a size of the first user interface object and/or a type of the first user interface object (e.g., and not based on a size of the second user interface object and/or a type of the second user interface object). For example, the first sound output indicates that the current focus is "moved to" the second user interface object (e.g., an incoming sound), and the second sound output indicates that the current focus is "moved away from" (e.g., an outgoing sound) the first user interface object. As shown in fig. 5T-5U, in conjunction with moving the current focus from the application icon 532-e to the application icon 532-d, the sound output 540-1 (exemplary exit sound) and the sound output 538-1 (exemplary entry sound) are sequentially provided. In some embodiments, the second sound output begins before the first sound output begins. In some embodiments, the second sound output terminates before the first sound output terminates. In some embodiments, at least a portion of the second sound output is provided concurrently with the first sound output. In some embodiments, the first sound output begins after the second sound output is terminated (e.g., the first sound output and the second sound output do not overlap).
In some embodiments, the first user interface includes three or more user interface objects having different sizes, and the three or more user interface objects correspond to sound outputs having one or more different sound characteristics (e.g., different pitches).
In some embodiments, in response to receiving one or more inputs corresponding to one or more requests to change a position of a current focus in the first user interface: the device provides (818) the display with data for moving the current focus from the second user interface object to the third user interface object. The device also provides third sound information to the audio system for providing a third sound output corresponding to movement of the current focus from the second user interface object to the third user interface object, wherein the third sound output is provided concurrently with the display of the current focus moving from the second user interface object to the third user interface object. The device also provides data to the display for moving the current focus from the third user interface object to the fourth user interface object, and provides fourth sound information to the audio system for providing a fourth sound output corresponding to the movement of the current focus from the third user interface object to the fourth user interface object. A fourth sound output is provided concurrently with the current focus moving from the third user interface object to the display of the fourth user interface object. For example, the current focus moves to icon 550-d (FIG. 5Y) in the case of sound output 538-7, then to application icon 532-e (FIG. 5Z) in the case of sound output 538-8, and to movie icon 534-c (FIG. 5 CC) in the case of sound output 538-9.
In some embodiments, the sound output corresponding to the movement of the current focus to the largest of the second, third, and fourth user interface objects has a lower pitch than the respective sound outputs corresponding to the movement of the current focus to the remaining two of the second, third, and fourth user interface objects (e.g., when the third user interface object is the largest of the second, third, and fourth user interface objects, the sound output corresponding to the movement of the current focus to the third user interface object has a lower pitch than the pitch of the sound output corresponding to the movement of the current focus to the second user interface object and the pitch of the sound output corresponding to the movement of the current focus to the fourth user interface object). For example, in fig. 5Y-5CC, movie icon 534-c is the largest object of icon 550-d, application icon 532-e, and movie icon 534-c, and the corresponding sound output 538-9 has the lowest pitch of the sound outputs associated with icon 550-d, application icon 532-e, and movie icon 534-c.
In some embodiments, the sound output corresponding to the movement of the current focus to the smallest of the second, third, and fourth user interface objects has a higher pitch than the respective sound outputs corresponding to the movement of the current focus to the remaining two of the second, third, and fourth user interface objects (e.g., when the second user interface object is the smallest of the second, third, and fourth user interface objects, the sound output corresponding to the movement of the current focus to the second user interface object has a higher pitch than the pitch of the sound output corresponding to the movement of the current focus to the third user interface object and the pitch of the sound output corresponding to the movement of the current focus to the fourth user interface object). For example, in fig. 5Y-5CC, the application icon 532-e is the smallest object of icon 550-d, application icon 532-e, and movie icon 534-c, and the corresponding sound output 538-8 has the highest pitch of the sound outputs associated with icon 550-d, application icon 532-e, and movie icon 534-c.
When the display presents a first user interface having a plurality of user interface objects, wherein the first user interface having a plurality of user interface objects is included in the hierarchy of user interfaces, the device receives (820 of fig. 8C) an input (e.g., an input 574 pressing a menu button 5002, as shown in fig. 5GG, or a user input 554 on the touch-sensitive surface 451, as shown in fig. 5Z) corresponding to a request to replace the first user interface with a second user interface in the hierarchy of user interfaces. To describe these and related features, assume that the exemplary hierarchy of user interfaces includes a screensaver user interface (e.g., screensaver user interface 517 in fig. 5HH), a home screen user interface (e.g., home screen user interface 530 in fig. 5GG) below the screensaver user interface, and an application user interface (e.g., game user interface 594 in fig. 5AA) below the home screen user interface (e.g., the hierarchy of user interfaces includes screensaver user interface 517, home screen user interface 530, and game user interface 594 in a top-to-bottom hierarchy order). In response to receiving an input corresponding to a request to replace the first user interface with the second user interface: the device provides (822) data to the display for replacing the first user interface with the second user interface (e.g., screen saver user interface 517 replaces home screen user interface 530 in response to input 574 pressing menu button 5002, as shown in fig. 5GG-5HH, and game user interface 594 replaces home screen user interface 530 in response to user input 554 on touch-sensitive surface 451). In accordance with a determination that the first user interface is located above the second user interface in the hierarchy of user interfaces (e.g., navigating from a higher user interface to a lower user interface in the exemplary hierarchy, such as navigating from home screen user interface 530 to game user interface 594), the device provides fifth sound information to the audio system for providing a fifth sound output (e.g., high pitch sound, such as sound output 556-1 in fig. 5AA). In some embodiments, the fifth sound output is provided concurrently with replacing the first user interface with the second user interface. In some embodiments, the first user interface is located immediately above the second user interface in the hierarchy of user interfaces (e.g., home screen user interface 530 is located immediately above game user interface 594 in the exemplary hierarchy). In accordance with a determination that the first user interface is located below the second user interface in the hierarchy of user interfaces (e.g., navigating from a lower user interface to a higher user interface in the exemplary hierarchy, such as navigating from the home screen user interface 530 to the screensaver user interface 517), the device provides sixth sound information to the audio system for providing a sixth sound output (e.g., a low pitch sound, such as sound output 560-3 in fig. 5HH) that is different from the fifth sound output. In some embodiments, the sixth sound output is provided concurrently with replacing the first user interface with the second user interface. In some embodiments, the first user interface is located immediately below the second user interface in the hierarchy of user interfaces (e.g., home screen user interface 530 is located immediately below screensaver user interface 517 in the exemplary hierarchy).
Thus, the fifth sound output and/or the sixth sound output may be used to indicate whether the user is navigating up or down the hierarchy of user interfaces.
In some embodiments, the fifth sound output is different from the first sound output. In some embodiments, the fifth sound output is different from the second sound output. In some embodiments, the fifth sound output is different from the third sound output. In some embodiments, the fifth sound output is different from the fourth sound output. In some embodiments, the sixth sound output is different from the first sound output. In some embodiments, the sixth sound output is different from the second sound output. In some embodiments, the sixth sound output is different from the third sound output. In some embodiments, the sixth sound output is different from the fourth sound output. In some embodiments, the sixth sound output is different from the fifth sound output.
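One way to realize the direction-dependent sounds described above is sketched below: an assumed three-level hierarchy is modeled as an ordered enum, and navigating downward selects the higher-pitched fifth sound while navigating upward selects the lower-pitched sixth sound. The enum, sound names, and pitch assignments are assumptions for illustration only.

```swift
// Illustrative sketch of the hierarchy-direction sounds described above.
// The enum, the sound names, and the pitch choices are assumed for demonstration only.
enum UIHierarchyLevel: Int, Comparable {
    case screensaver = 0   // top of the example hierarchy
    case homeScreen  = 1
    case application = 2   // bottom of the example hierarchy

    static func < (lhs: UIHierarchyLevel, rhs: UIHierarchyLevel) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

/// Picks a sound when replacing `from` with `to` in the hierarchy:
/// navigating downward plays the higher-pitched "fifth" sound,
/// navigating upward plays the lower-pitched "sixth" sound.
func navigationSoundName(from: UIHierarchyLevel, to: UIHierarchyLevel) -> String {
    // The equal case does not occur when one interface actually replaces another.
    return from < to ? "fifth_sound_high_pitch" : "sixth_sound_low_pitch"
}

// Example: home screen -> game (downward) vs. home screen -> screensaver (upward).
print(navigationSoundName(from: .homeScreen, to: .application))  // fifth_sound_high_pitch
print(navigationSoundName(from: .homeScreen, to: .screensaver))  // sixth_sound_low_pitch
```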
When the display presents the first user interface, the device receives (824) input corresponding to a request to activate a user interface object having a current focus (e.g., the user interface object is overlapped, surrounded, or in the vicinity of the current focus). In response to receiving an input corresponding to a request to activate a user interface object having a current focus, in accordance with a determination that the first user interface object has a current focus, the device provides (826) seventh sound information to the audio system for providing a seventh sound output, the seventh sound output corresponding to activation of the first user interface object. For example, the application icon 532-e (FIG. 5Z) is activated in conjunction with providing the sound output 556-1 (FIG. 5 AA). In accordance with a determination that the second user interface object has a current focus, the device provides eighth sound information to the audio system for providing an eighth sound output, the eighth sound output corresponding to activation of the second user interface object. The eighth sound output is different from the seventh sound output. For example, in conjunction with providing sound output 556-2 (FIG. 5 FF), movie icon 534-a (FIG. 5 EE) is activated. The relationship between the one or more characteristics of the sound output corresponding to the movement of the current focus to the first user interface object and the one or more characteristics of the second sound output corresponds to the relationship between the one or more characteristics of the seventh sound output and the one or more characteristics of the eighth sound output. For example, when the first user interface object is smaller than the second user interface object, the sound output corresponding to the movement of the current focus to the first user interface object has a higher pitch than the sound output corresponding to the movement of the current focus to the second user interface object, and the sound output corresponding to the activation of the first user interface object has a higher pitch than the sound output corresponding to the activation of the second user interface object (e.g., the sound output 538-8 corresponding to the movement of the current focus to the application icon 532-e has a higher pitch than the sound output 538-11 corresponding to the movement of the current focus to the movie icon 534-a in fig. 5EE, and the sound output 556-1 corresponding to the activation of the application icon 532-e has a higher pitch than the sound output 556-2 corresponding to the activation of the movie icon 534-a in fig. 5 FF).
In some embodiments, the respective sound output is a single tone or chord (e.g., a "ding" sound). In some embodiments, the respective sound output is a single tone or chord in a melody (e.g., the "ding" in a short melody "ding-dong," where the melody includes at least two tones and/or chords). In some embodiments, when the sound output is a single tone or chord in a melody, that tone or chord is provided (or determined, modified, etc.) in accordance with the determined characteristics (e.g., the "ding" in the melody "ding-dong" is provided with the determined pitch, so that a corresponding sound output with the determined pitch is provided). In some embodiments, when the sound output is an entire melody, the entire melody is provided (or determined, modified, etc.) in accordance with the determined characteristics (e.g., the sound output is a V-I cadence, where I represents a root chord determined in accordance with the determined pitch and V is a chord a fifth above the root chord I). In some embodiments, the pitch is a perceived pitch.
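The V-I cadence example can be made concrete with simple frequency arithmetic: given a root pitch determined for the I chord, the V chord is rooted a fifth above it (roughly a 3:2 frequency ratio in just intonation). The sketch below builds major triads for V and I from an assumed 440 Hz root; the triad construction and ratios are ordinary music arithmetic, not details of the described embodiments.

```swift
// Sketch: a V-I cadence built from a determined root pitch.
// Just-intonation ratios are used for simplicity; the 440 Hz root is an assumption.
func majorTriad(rootHz: Double) -> [Double] {
    // root, major third (5:4), perfect fifth (3:2)
    return [rootHz, rootHz * 5.0 / 4.0, rootHz * 3.0 / 2.0]
}

func cadenceVtoI(determinedRootHz: Double) -> [[Double]] {
    let iChord = majorTriad(rootHz: determinedRootHz)
    let vChord = majorTriad(rootHz: determinedRootHz * 3.0 / 2.0)  // a fifth above the root
    return [vChord, iChord]  // play V first, then resolve to I
}

// Example: if the determined pitch is 440 Hz, the V chord is rooted at 660 Hz.
print(cadenceVtoI(determinedRootHz: 440.0))
```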
It should be understood that the particular order in which the operations in fig. 8A-8C have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize various ways to reorder the operations described herein. Further, it should be noted that details regarding other methods described herein (e.g., methods 600, 700, 900, and 1000) are also applicable in a similar manner to method 800 described above with respect to fig. 8A-8C. For example, the user interface objects, user interfaces, and sound outputs described above with reference to method 800 optionally have one or more of the characteristics of the user interface objects, user interfaces, and sound outputs described herein with reference to other methods described herein (e.g., methods 600, 700, 900, and 1000). For the sake of brevity, these details are not repeated here.
Fig. 9A-9C are flowcharts illustrating a method 900 of providing sound information for a video information user interface, according to some embodiments. Method 900 is performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) in communication with a display and an audio system. In some embodiments, the electronic device communicates with a user input device (e.g., a remote user input device such as a remote control) having a touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. In some embodiments, the user input device is integrated with the electronic device. In some embodiments, the user input device is separate from the electronic device. Some operations in method 900 may be optionally combined and/or the order of some operations may be optionally changed.
In some embodiments, the audio system includes a digital-to-analog converter. In some embodiments, the audio system includes a signal amplifier. In some embodiments, the audio system includes one or more speakers. In some embodiments, the audio system is integrated with a display (e.g., a television with audio processing circuitry and speakers). In some embodiments, the audio system is distinct and separate from the display (e.g., the display screen and the separate audio system).
The user interfaces in fig. 5 II-5 MM are used to illustrate the process described with respect to method 900.
As described below, pausing playback of the video includes providing data for presenting a plurality of still images from the video while playback of the video is paused. The plurality of still images from the video helps the user understand the context of the video around where playback of the video is paused, even before playback of the video is resumed. Thus, the user can understand the context of the video shortly after playback of the video is resumed.
The device provides (902) data for presenting a first video information user interface comprising descriptive information about the first video to a display. For example, the first video information user interface (e.g., the product page view 572 in fig. 5 II) includes information such as the title, run time, episode summary, and rating of the first video, as well as an affordance for playing the first video.
In some embodiments, prior to the display presenting the first video information user interface: the device provides (904) data for presenting a video selection user interface comprising a representation of a plurality of videos (e.g., an icon with a poster and/or title corresponding to each of the plurality of videos) to a display. The device receives an input corresponding to a selection of a representation of a first video of the plurality of videos, wherein in response to receiving the input corresponding to the selection of the representation of the first video, a first video information user interface for the first video is presented. For example, the user interface in FIG. 5GG is displayed prior to the display of the user interface in FIG. 5II, and the user interface in FIG. 5II is presented in response to the user activating movie icon 534-a (FIG. 5 GG).
The device provides (906) sound information to the audio system for providing, during presentation of the first video information user interface by the display, a first sound output corresponding to (e.g., based on) the first video. In some embodiments, the sound information is audio based on the category of the first video (e.g., dark ambient sound for a drama, bright ambient sound for a comedy, etc.). In some embodiments, the category of the first video is determined using metadata associated with the video (e.g., metadata indicating a classification of the video or one or more categories of the first scene in the video). In some embodiments, the sound information is audio generated from sound or music in the first video itself (e.g., the audio is audio from a sound track of the first video). In some embodiments, the sound information is audio selected as a tone corresponding to a particular scene in the first video. For example, in some embodiments, the device analyzes the color distribution of the first scene of the first video to determine whether the scene is "bright" or "dark" and matches the audio as "bright" or "dark". In some embodiments, the first sound output loops (repeats) while the first video information user interface for the first video is displayed.
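The "bright or dark" scene analysis mentioned above could, for illustration, be approximated by averaging the luminance of sampled pixel colors from the first scene and picking a matching ambient track. The sampling model, the 0.5 threshold, and the track names below are assumptions, not the embodiments' actual analysis.

```swift
// Sketch: classify a scene as "bright" or "dark" from sampled pixel colors,
// then pick a matching ambient track. The threshold and track names are assumed.
struct RGB {
    let r: Double, g: Double, b: Double  // components in 0...1
}

func averageLuminance(of samples: [RGB]) -> Double {
    guard !samples.isEmpty else { return 0 }
    // Rec. 709 luma coefficients
    let total = samples.reduce(0.0) { $0 + (0.2126 * $1.r + 0.7152 * $1.g + 0.0722 * $1.b) }
    return total / Double(samples.count)
}

func ambientTrackName(forSceneSamples samples: [RGB]) -> String {
    let brightnessThreshold = 0.5  // assumed cutoff between "dark" and "bright"
    return averageLuminance(of: samples) >= brightnessThreshold
        ? "ambient_bright"
        : "ambient_dark"
}
```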
In some embodiments, the first video information user interface includes (908) a plurality of user interface objects. For example, as shown in FIG. 5II, the user interface includes a "Watch Now" affordance and a "Trailer/Preview" affordance. A first user interface object of the plurality of user interface objects is configured to, when selected (or activated), initiate provision by the electronic device of sound information to the audio system for providing a sound output corresponding to at least a portion of a first sound track of the first video (e.g., activating a play user interface object in the first video information user interface initiates output of a gunshot sound from the first video). A second user interface object of the plurality of user interface objects is configured to, when selected (or activated), initiate provision by the electronic device of sound information to the audio system for providing a sound output corresponding to at least a portion of a second sound track of the first video that is different from the first sound track of the first video (e.g., activating a trailer user interface object in the first video information user interface initiates output of a horse sound from the first video).
When the display presents a first video information user interface that includes descriptive information about the first video, the device receives (910) an input corresponding to a request for playback of the first video (e.g., receives an input corresponding to activation of a play icon in the video information user interface or activation of a play button on a remote control in communication with the device). In response to receiving an input corresponding to a request for playback of the first video, the device provides (912) data to the display for replacement of the presentation of the first video information user interface with playback of the first video (e.g., video playback view 500 in fig. 5 JJ). For example, the user decides to watch the first video and thus activates playback of the first video.
During playback of the first video, the device receives (914 of fig. 9B) an input corresponding to a request to display a second video information user interface regarding the first video (e.g., receives an input 580 corresponding to activation of a pause or return icon or activation of a pause or return button (such as menu button 5002) on a remote control in communication with the device, as shown in fig. 5 JJ). In some embodiments, the second video information user interface for the first video is different from the first video information user interface for the first video. For example, the second video information user interface is a "pause" screen that is different from the product page view. In some embodiments, the second video information user interface for the first video is the same as the first video information user interface for the first video. In some embodiments, when the user pauses the video, the device returns to the first video information user interface.
In response to receiving an input corresponding to a request for displaying a second video information user interface regarding the first video: the device provides (916) the display with data for replacing playback of the first video with a second video information user interface (e.g., product page view 572 in fig. 5 KK) with respect to the first video. The device also provides sound information to the audio system for providing, during presentation of the second video information user interface by the display, a second sound output that corresponds to (e.g., is based on) the first video and is different from the first sound output. In some embodiments, the second sound output loops (repeats) while the second video information user interface for the first video is displayed.
In some embodiments, the second sound output is (918) a sound track of the first video corresponding to the location in the first video that is being played when the input corresponding to the request to display the second video information user interface is received. In some embodiments, the second sound output is selected from a sound track of the first video corresponding to a chapter of the first video that encompasses the location in the first video that is being played when the input corresponding to the request for displaying the second video information user interface is received.
In some embodiments, in accordance with a determination that the input corresponding to the request to display the second video information user interface (e.g., input 582 while end credits are displayed, as shown in fig. 5 LL) is received within a predetermined duration from the end of the first video, an end credits sound track of the first video is selected (920) for the second sound output. For example, if playback is close enough to the end of the first video, the end credits sound track is played while the video information user interface is displayed.
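A sketch of the position-based selection described in the preceding paragraphs: the audio behind the second video information user interface is drawn from the sound track around the paused position, unless playback is within a threshold of the end of the video, in which case the end credits track is used instead. The types, the segment-lookup closure, and the 30-second threshold are illustrative assumptions.

```swift
import Foundation

// Sketch: pick audio for the second video information user interface.
// `endCreditsWindow` and the track identifiers are assumed values.
struct VideoAudioInfo {
    let duration: TimeInterval
    let endCreditsTrack: String
    /// Assumed helper that returns the sound-track segment covering a playback position.
    let trackSegment: (TimeInterval) -> String
}

func pauseScreenAudio(for video: VideoAudioInfo,
                      playbackPosition: TimeInterval,
                      endCreditsWindow: TimeInterval = 30) -> String {
    if video.duration - playbackPosition <= endCreditsWindow {
        // Close enough to the end: use the end credits sound track.
        return video.endCreditsTrack
    }
    // Otherwise use the portion of the sound track around the paused position.
    return video.trackSegment(playbackPosition)
}
```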
In some embodiments, after initiating playback of the first video, the device receives (922 of fig. 9C) an input corresponding to a request to pause the first video. In response to receiving the input corresponding to the request to pause the first video: the device pauses (924) playback of the first video at a first playback position in a timeline of the first video and provides data to the display for presentation of one or more selected still images from the first video, wherein the one or more selected still images are selected based on the first playback position at which the first video was paused. The device further provides sound information to the audio system for providing a sound output corresponding to a sound track of the first video at the first playback position (e.g., if the input corresponding to the request to pause the first video is received while the audio system is outputting the first sound track of the first video, the audio system continues to output the first sound track of the first video while the first video is paused).
In some embodiments, after initiating playback of the first video, the device receives (926) an input corresponding to a request to pause the first video. In response to receiving an input corresponding to a request to pause the first video: the device pauses (928) playback of the first video at a first playback position in a timeline of the first video; and providing data (e.g., fig. 5 OO-5 SS) for presenting the one or more selected still images from the first video to the display. One or more selected still images are selected based on a first playback position at which the first video is paused. The device also provides sound information to the audio system for providing a sound output corresponding to one or more characteristics of the first video at the first playback position (e.g., beats of the original sound track, chords). In some embodiments, the method includes identifying beats and/or chords of the original sound track at or within a time window covering a predetermined duration of the first playback position, and selecting music different from the original sound track based on the beats and/or chords of the original sound track at the first playback position.
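The beat- and chord-matching selection described above might, for illustration, reduce to picking from a catalog the cue whose tempo is closest to the original sound track's tempo at the pause position, preferring cues in the same chord. The catalog structure and matching rule below are assumptions, not the embodiments' actual selection logic.

```swift
// Sketch: pick alternative music that matches the beat and chord of the original
// sound track around the pause position. The catalog model is assumed for illustration.
struct MusicCue {
    let name: String
    let tempoBPM: Double
    let chord: String      // e.g., "Cmaj", "Am"
}

func matchingCue(originalTempoBPM: Double,
                 originalChord: String,
                 catalog: [MusicCue]) -> MusicCue? {
    // Prefer cues in the same chord; among those, pick the closest tempo.
    let sameChord = catalog.filter { $0.chord == originalChord }
    let candidates = sameChord.isEmpty ? catalog : sameChord
    return candidates.min(by: { abs($0.tempoBPM - originalTempoBPM) < abs($1.tempoBPM - originalTempoBPM) })
}
```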
In some embodiments, a first sound output and/or a second sound output is selected (930) from a sound track of the first video. In some embodiments, the first sound output is theme music of the first video. In some embodiments, the first sound output is independent of a current playback position in the first video. For example, the first sound output is selected even before the first video is played.
In some embodiments, the first sound output and/or the second sound output is selected (932) based on one or more characteristics of the first video (e.g., genre, user rating, comment rating, etc.) (e.g., from a set of sound tracks independent of the first video, such as sound tracks from various genres of movies). For example, electronic music is selected for a science fiction movie, and western music is selected for a western movie (e.g., based on metadata associated with the first video). For example, music starting with a fast beat and/or with a major chord is selected for movies having user ratings and/or comment ratings above a predetermined criterion, and music starting with a slow beat and/or with a minor chord is selected for movies having user ratings and/or comment ratings below a predetermined criterion.
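A rough sketch of metadata-driven selection along the lines described above: a style is chosen from the genre, and a fast/major or slow/minor variant is chosen from a rating threshold. The genre names, rating scale, and 3.5 cutoff are assumed for illustration.

```swift
// Sketch: choose background music from video metadata.
// Genre names, the rating scale, and the 3.5 threshold are assumed values.
struct VideoMetadata {
    let genre: String        // e.g., "sci-fi", "western", "comedy"
    let userRating: Double   // assumed 0...5 scale
}

func backgroundMusic(for metadata: VideoMetadata) -> String {
    let style: String
    switch metadata.genre.lowercased() {
    case "sci-fi":  style = "electronic"
    case "western": style = "western"
    case "comedy":  style = "bright"
    default:        style = "neutral"
    }
    // Higher-rated titles get a fast, major-key variant; lower-rated titles a slow, minor-key variant.
    let mood = metadata.userRating >= 3.5 ? "fast_major" : "slow_minor"
    return "\(style)_\(mood)"
}

// Example: a well-reviewed science fiction movie.
print(backgroundMusic(for: VideoMetadata(genre: "sci-fi", userRating: 4.2)))  // electronic_fast_major
```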
It should be understood that the particular order in which the operations in fig. 9A-9C have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize various ways to reorder the operations described herein. Further, it should be noted that details regarding other methods described herein (e.g., methods 600, 700, 800, and 1000) are also applicable in a similar manner to method 900 described above with respect to fig. 9A-9C. For example, the user interface objects, user interfaces, sound outputs, and still images described above with reference to method 900 may optionally have one or more of the characteristics of the user interface objects, user interfaces, sound outputs, and still images described herein with reference to other methods described herein (e.g., methods 600, 700, 800, and 1000). For the sake of brevity, these details are not repeated here.
Fig. 10A-10B illustrate a flowchart of a method 1000 of providing audiovisual information while a video is in a paused state in accordance with some embodiments. Method 1000 is performed at an electronic device (e.g., device 300 of fig. 3 or portable multifunction device 100 of fig. 1A) in communication with a display (and in some embodiments, a touch-sensitive surface). In some embodiments, the display is a touch screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some of the operations of method 1000 are optionally combined and/or the order of some of the operations is optionally changed.
As described below, the method 1000 provides an intuitive way for providing audiovisual information when a video is in a paused state. The method reduces the cognitive burden on the user when viewing audiovisual information while the video is in a paused state, thereby creating a more efficient human-machine interface. Enabling the user to view audiovisual information while the video is in a paused state also saves power.
The device provides (1002) data for presenting the first video to a display. For example, the device provides data for presenting a movie or television program (e.g., video playback view 500 in fig. 5 NN). While the display is presenting (e.g., playing back) the first video, the device receives (1004) input corresponding to a user request to pause the first video. For example, an input corresponding to activation of a pause icon, a pause gesture on the device or on a touch-sensitive surface on a remote control in communication with the device, or activation of a pause button on a remote control in communication with the device is received (e.g., input 586 on play/pause button 5004 in fig. 5 NN).
In response to receiving an input corresponding to a user request to pause the first video, the device pauses (1006) presentation of the first video at a first playback position in a timeline of the first video. After pausing presentation of the first video at the first playback position in the timeline of the first video and when pausing presentation of the first video, the device provides (1008) data to the display for presenting a plurality of selected still images (e.g., automatically selected still images) from the first video, wherein the plurality of selected still images are selected based on the first playback position at which the first video was paused. For example, the device provides data for presenting the plurality of selected still images to the display, as shown in fig. 5OO through 5 SS.
In some embodiments, the plurality of selected still images are presented sequentially while the first video is paused. In some embodiments, the selected still images are presented in chronological order while the first video is paused. In some embodiments, the selected still images are presented in a random order while the first video is paused. In some embodiments, the selected still images are sequentially provided to the display while the first video is paused.
In some embodiments, a plurality of selected still images are selected (1010) from a range of playback positions for the first video between a first playback position in the timeline and a second playback position in the timeline that precedes the first playback position. In some embodiments, the second playback position in the timeline leads the first playback position by a predetermined time interval (1012). For example, if the still images are selected from a 30-second range and the first playback position is at 0:45:00, the second playback position is at 0:44:30. In some embodiments, the images are selected so as to exclude any images corresponding to playback of the video subsequent to the first playback position. For example, the images are selected to avoid revealing any content about the storyline after the first playback position.
In some embodiments, the second playback position in the timeline leads the first playback position by a time interval determined after receiving the input corresponding to the request to pause the first video (1014). For example, the time interval is determined in response to receiving the input corresponding to the request to pause the first video, or immediately prior to providing data to present the plurality of selected still images from the first video. In some embodiments, a longer time interval is used if the change in frames between the first playback position in the timeline and a second playback position in the timeline that precedes the first playback position is less than a first predetermined frame change criterion. In some embodiments, one of the predetermined frame change criteria is an amount of movement detected in the frames. For example, if there is very little movement within the 30 or 60 seconds preceding the first playback position, the time interval is increased so that the second playback position precedes the first playback position by 2 minutes. In some embodiments, a shorter time interval is used if the change in frames between the first playback position in the timeline and the second playback position in the timeline that precedes the first playback position is greater than a second predetermined frame change criterion. In some embodiments, one of the predetermined frame change criteria is the type of video being displayed. For example, a longer time interval is used if the first video is of a classical music performance, and a shorter time interval is used if the first video is an action movie.
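The adaptive look-back window described in the last two paragraphs can be sketched as follows: start from a default interval before the pause position, widen it when little changes between frames, narrow it when much changes, and sample still-image timestamps only at or before the pause position. The interval values, the frame-change score, and the sample count are assumptions.

```swift
import Foundation

// Sketch: choose still-image timestamps behind a paused playback position.
// The intervals, thresholds, and sample count are assumptions for illustration.
func stillImageTimestamps(pausePosition: TimeInterval,
                          frameChangeScore: Double,   // assumed 0 (static) ... 1 (busy)
                          sampleCount: Int = 6) -> [TimeInterval] {
    // Widen the window for mostly static content, narrow it for fast-changing content.
    let interval: TimeInterval
    switch frameChangeScore {
    case ..<0.2:  interval = 120   // very little movement: look back 2 minutes
    case ..<0.6:  interval = 30    // default look-back
    default:      interval = 10    // lots of movement: stay close to the pause point
    }
    let start = max(0, pausePosition - interval)
    guard sampleCount > 1 else { return [pausePosition] }
    let step = (pausePosition - start) / Double(sampleCount - 1)
    // Never sample past the pause position, to avoid revealing later story content.
    return (0..<sampleCount).map { start + Double($0) * step }
}

// Example: paused at 45:00 with a fairly static scene.
print(stillImageTimestamps(pausePosition: 45 * 60, frameChangeScore: 0.1))
```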
In some embodiments, the plurality of selected still images of the video include (1016 of fig. 10B) still images that are not contiguous, in the video, with any other still image of the plurality of selected still images. For example, each still image is separated from any other selected still image by at least one frame of the video (e.g., one or more frames are located in the video between any two selected still images). In some embodiments, the still images are not played at the video frame rate (e.g., each still image may be displayed for a few seconds). In some embodiments, the plurality of selected still images includes (1018) a representative frame. In some embodiments, the method includes identifying representative frames based on a predetermined representative frame criterion (e.g., frames having characters and/or objects in a center region of the respective frame, frames having movements of objects less than a predetermined movement criterion, etc.).
In some embodiments, after pausing the presentation of the first video at the first playback position in the timeline of the first video and while the presentation of the first video is paused, the device provides (1020) to the display data for presenting an animation indicating a transition to a slideshow mode (e.g., a countdown clock 588 in fig. 5 PP). In some embodiments, the plurality of selected still images are displayed in the slideshow mode while the video is paused. In some embodiments, the animation that indicates the transition to the slideshow mode includes (1022) a countdown clock. In some embodiments, displaying the plurality of images in the slideshow mode includes displaying a time stamp indicating the position in the timeline of the video corresponding to the first playback position (e.g., where the video was paused). In some embodiments, displaying the plurality of images in the slideshow mode includes displaying a clock indicating the current time (e.g., 8:03 pm).
In some embodiments, the device repeatedly (1024) provides data to the display for presenting a plurality of selected still images from the first video. In some embodiments, the sequential display of the plurality of selected still images is repeated (e.g., cycled). In some embodiments, the display of the plurality of selected still images is repeated in a randomized fashion. In some embodiments, the device provides (1026) data for presenting respective still images of the plurality of selected still images with the panning effect and/or the zooming effect to the display. In some embodiments, the device provides data for presenting respective still images of the plurality of selected still images having transparency to the display (e.g., when a next still image is displayed).
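The panning and zooming presentation of a still image could, as a rough illustration, be described by a start crop and an end crop of the image plus a duration, as in the sketch below; the zoom factor, pan direction, and per-image duration are assumed values.

```swift
import Foundation
import CoreGraphics

// Sketch: a simple pan-and-zoom presentation of a still image.
// The zoom factor, pan offset, and per-image duration are assumed values.
struct PanZoom {
    let startRect: CGRect   // visible portion of the image at the start
    let endRect: CGRect     // visible portion of the image at the end
    let duration: TimeInterval
}

func panZoom(forImageOfSize size: CGSize,
             zoom: CGFloat = 1.15,
             duration: TimeInterval = 4.0) -> PanZoom {
    let full = CGRect(origin: .zero, size: size)
    // End on a slightly zoomed-in, right-shifted crop of the same image.
    let cropSize = CGSize(width: size.width / zoom, height: size.height / zoom)
    let cropOrigin = CGPoint(x: size.width - cropSize.width,
                             y: (size.height - cropSize.height) / 2)
    return PanZoom(startRect: full,
                   endRect: CGRect(origin: cropOrigin, size: cropSize),
                   duration: duration)
}
```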
In some embodiments, the device communicates with an audio system, and the device provides (1028) sound information to the audio system for providing a first sound output corresponding to a first video being presented on the display. In some embodiments, the device provides (1030) sound information to the audio system for providing a sound output selected based on a first playback position at which the first video was paused.
It should be understood that the particular order in which the operations in fig. 10A-10B have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize various ways to reorder the operations described herein. Further, it should be noted that details regarding other methods described herein (e.g., methods 600, 700, 800, and 900) are also applicable in a similar manner to method 1000 described above with respect to fig. 10A-10B. For example, the user interface objects, user interfaces, still images, and sound outputs described above with reference to method 1000 may optionally have one or more of the characteristics of the user interface objects, user interfaces, still images, and sound outputs described herein with reference to other methods described herein (e.g., methods 600, 700, 800, and 900). For the sake of brevity, these details are not repeated here.
Fig. 11 illustrates a functional block diagram of an electronic device 1100 configured in accordance with the principles of various described embodiments, according to some embodiments. The functional blocks of the device are optionally implemented by hardware, software, firmware, or a combination thereof to implement the principles of the various described embodiments. Those skilled in the art will appreciate that the functional blocks described in fig. 11 may alternatively be combined or separated into sub-blocks to implement the principles of the various described embodiments. Accordingly, the description herein optionally supports any possible combination, separation, or further definition of the functional blocks described herein.
As shown in fig. 11, the electronic device 1100 includes a processing unit 1106. In some embodiments, the electronic device 1100 communicates with a display unit 1102 (e.g., configured to display a user interface) and an audio unit 1104 (e.g., configured to provide sound output). In some embodiments, processing unit 1106 includes: a display enabling unit 1108, an audio enabling unit 1110 and a detecting unit 1112.
The processing unit 1106 is configured to provide (e.g., with the display enabling unit 1108) the display unit 1102 with data for presenting a user interface generated by the device. The user interface includes a first user interface object having a first visual characteristic. The user interface also includes a second user interface object having a second visual characteristic that is different from the first user interface object.
The processing unit 1106 is configured to provide (e.g., with an audio enabling unit 1110) sound information for providing sound output to the audio unit 1104. The sound output includes a first audio component corresponding to a first user interface object. The sound output also includes a second audio component corresponding to the second user interface object and different from the first audio component.
The processing unit 1106 is configured to: when a user interface is presented on the display unit 1102 and a sound output is provided, data for updating the user interface is provided to the display unit 1102 (e.g., with the display enabling unit 1108), and sound information for updating the sound output is provided to the audio unit 1104 (e.g., with the audio enabling unit 1110). Updating the user interface and updating the sound output includes: at least one of the first visual characteristics of the first user interface object is changed in conjunction with changing the first audio component corresponding to the first user interface object, and at least one of the second visual characteristics of the second user interface object is changed in conjunction with changing the second audio component corresponding to the second user interface object. Providing data for updating the user interface occurs independent of user input.
In some embodiments, the first visual characteristic includes a size and/or a location of the first user interface object.
In some embodiments, updating the user interface and updating the sound output further comprises: stopping displaying the first user interface object and stopping providing a sound output comprising a first audio component corresponding to the first user interface object; stopping displaying the second user interface object and stopping providing a sound output comprising a second audio component corresponding to the second user interface object; and/or displaying one or more respective user interface objects and providing a sound output comprising one or more respective audio components corresponding to the one or more respective user interface objects.
In some embodiments, the first audio component corresponding to the first user interface object is changed in accordance with a change to at least one of the first visual characteristics of the first user interface object.
In some embodiments, at least one of the first visual characteristics of the first user interface object is changed in accordance with the change to the first audio component.
In some embodiments, the pitch of the respective audio component corresponds to the initial size of the corresponding user interface object, the stereo balance of the respective audio component corresponds to the location of the corresponding user interface object on the display unit 1102, and/or the change in volume of the respective audio component corresponds to the change in size of the corresponding user interface object.
In some embodiments, the first visual characteristic of the first user interface object and the second visual characteristic of the second user interface object are determined independently of the user input.
In some embodiments, the second audio component is selected based at least in part on the first audio component.
In some embodiments, updating the sound output includes: it is determined whether a predetermined inactivity criterion is met, and in accordance with the determination that the predetermined inactivity criterion is met, the volume of the sound output is changed.
In some embodiments, the processing unit 1106 is configured to detect (e.g., using the detection unit 1112) user input. The processing unit 1106 is configured to provide (e.g., using the audio enabling unit 1110) to the audio unit 1104, in response to detecting a user input, sound information for changing respective audio components corresponding to respective user interface objects, and to provide (e.g., using the display enabling unit 1108) to the display unit 1102, data for updating the user interface and displaying one or more control user interface objects.
In some embodiments, the sound information provided to the audio unit 1104 includes information for providing a sound output including audio components that are not harmonious with the respective audio components corresponding to the respective user interface objects.
In some embodiments, the processing unit 1106 is configured to provide (e.g., using the display enabling unit 1108) data for displaying a user interface and updating the user interface to the display unit 1102 before detecting a user input, without providing sound information for providing a sound output to the audio unit 1104. The processing unit 1106 is configured to, upon detection of a user input, provide (e.g., using the display enabling unit 1108) data for displaying a user interface and updating the user interface to the display unit 1102, and provide sound information for providing a sound output and updating the sound output to the audio unit 1104.
Fig. 12 illustrates a functional block diagram of an electronic device 1200 configured in accordance with the principles of various described embodiments, according to some embodiments. The functional blocks of the device are optionally implemented by hardware, software, firmware, or a combination thereof to implement the principles of the various described embodiments. Those skilled in the art will appreciate that the functional blocks described in fig. 12 may alternatively be combined or separated into sub-blocks to implement the principles of the various described embodiments. Accordingly, the description herein optionally supports any possible combination, separation, or further definition of the functional blocks described herein.
As shown in fig. 12, the electronic device 1200 communicates with a display unit 1202 (e.g., configured to display a user interface), an audio unit 1216 (e.g., configured to provide sound output), and in some embodiments, a remote control unit 1206, the remote control unit 1206 configured to detect user input and send it to the device 1200. In some embodiments, remote control unit 1206 includes a touch-sensitive surface unit 1204 configured to receive contacts. In some embodiments, processing unit 1208 includes: a display enabling unit 1210, a receiving unit 1212, and an audio enabling unit 1214.
According to some embodiments, the processing unit 1208 is configured to provide data (e.g., with the display enabling unit 1210) for presenting a user interface having a plurality of user interface objects (including a control user interface object at a first location on the display unit 1202) to the display unit 1202. The control user interface object is configured to control a respective parameter. The processing unit 1208 is configured to receive (e.g., with the receiving unit 1212) a first input (e.g., on the touch-sensitive surface unit 1204) corresponding to a first interaction with the control user interface object on the display unit 1202. The processing unit 1208 is configured to, upon receiving a first input corresponding to a first interaction with a control user interface object on the display unit 1202, provide data to the display unit 1202 for moving the control user interface object from a first location on the display unit 1202 to a second location on the display unit 1202 that is different from the first location on the display unit 1202 in accordance with the first input; and provides first sound information to the audio unit 1216 (e.g., with the audio enabling unit 1214) for providing a first sound output having one or more characteristics that are different from corresponding parameters controlled by the control user interface object and that change according to movement of the control user interface object from a first position on the display unit 1202 to a second position on the display unit 1202.
In some embodiments, in accordance with a determination that the first input meets a first input criterion, the first sound output has a first set of characteristics, and in accordance with a determination that the first input meets a second input criterion, the first sound output has a second set of characteristics different from the first set of characteristics.
In some embodiments, the processing unit 1208 is configured to receive (e.g., with the receiving unit 1212) a second input (e.g., on the touch-sensitive surface unit 1204) corresponding to a second interaction with the control user interface object on the display unit 1202 after responding to the first input. The processing unit 1208 is configured to, in response to and upon receiving a second input corresponding to a second interaction with the control user interface object on the display unit 1202, provide (e.g., with the display enabling unit 1210) data to the display unit 1202 for moving the control user interface object from a second location on the display unit 1202 to a third location on the display unit 1202 different from the second location on the display unit 1202 in accordance with the second input. The processing unit 1208 is further configured to provide (e.g., with the audio enabling unit 1214) to the audio unit 1216 second sound information for providing a second sound output having one or more characteristics that change according to controlling movement of the user interface object from the second position on the display unit 1202 to the third position on the display unit 1202 in response to and when the second input is received.
In some embodiments, the one or more characteristics include a pitch of the first sound output, a volume of the first sound output, and/or a distribution of the first sound output over the plurality of spatial channels.
In some embodiments, the audio unit 1216 is coupled with a plurality of speakers corresponding to a plurality of spatial channels. Providing the audio unit 1216 with first sound information for providing a first sound output includes: the distribution of the first sound output over the plurality of spatial channels is determined (e.g., with the audio enabling unit 1214) in accordance with a direction of the movement of the control user interface object from the first position on the display unit 1202 to the second position on the display unit 1202.
In some embodiments, the audio unit 1216 is coupled with a plurality of speakers corresponding to a plurality of spatial channels. Providing the audio unit 1216 with first sound information for providing a first sound output includes: the distribution of the first sound output over the plurality of spatial channels is determined (e.g., with the audio enabling unit 1214) in accordance with the position of the control user interface object on the display unit 1202 during the movement of the control user interface object from the second position on the display unit 1202 to the third position on the display unit 1202.
In some embodiments, providing the audio unit 1216 with first sound information for providing a first sound output includes: the volume of the first sound output is determined (e.g., with the audio enabling unit 1214) in accordance with the speed of the movement of the control user interface object from the first position on the display unit 1202 to the second position on the display unit 1202.
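Taken together, these paragraphs derive the sound output's characteristics from the geometry of the control's movement rather than from the parameter the control adjusts. A minimal sketch, with an assumed display width, pitch range, and speed normalization:

```swift
import Foundation
import CoreGraphics

// Sketch: derive sound characteristics from a control's movement.
// Screen width, pitch range, and speed normalization are assumed values.
struct ControlMoveSound {
    let pitchHz: Double        // follows the control's position along the slider
    let volume: Double         // follows the speed of the movement (0...1)
    let stereoBalance: Double  // -1 (left) ... +1 (right), follows on-screen position
}

func soundForControlMove(from start: CGPoint,
                         to end: CGPoint,
                         duration: TimeInterval,
                         sliderFraction: Double,        // 0...1 position along the slider bar
                         displayWidth: CGFloat = 1920) -> ControlMoveSound {
    let pitch = 200.0 + sliderFraction * 600.0           // assumed 200-800 Hz range
    let dx = Double(end.x - start.x)
    let dy = Double(end.y - start.y)
    let distance = (dx * dx + dy * dy).squareRoot()
    let speed = duration > 0 ? distance / duration : 0
    let volume = min(speed / 2000.0, 1.0)                 // assumed normalization constant
    let balance = Double((end.x / displayWidth) * 2 - 1)  // pan toward the control's position
    return ControlMoveSound(pitchHz: pitch, volume: volume, stereoBalance: balance)
}
```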
In some embodiments, the control user interface object is a slider on a slider bar. The pitch of the first sound output varies in accordance with the position of the control user interface object on the slider bar.
In some embodiments, the control user interface object is a slider on a slider bar. The second position on the display unit 1202 is not the end point of the slider. The processing unit 1208 is configured to receive (e.g., with the receiving unit 1212) input (e.g., on the touch-sensitive surface unit 1204) corresponding to a respective interaction with the control user interface object on the display unit 1202. The processing unit 1208 is configured to, in response to receiving an input corresponding to a respective interaction with the control user interface object on the display unit 1202, provide (e.g., with the display enabling unit 1210) to the display unit 1202 data for moving the control user interface object to a fourth position on the display unit 1202 in accordance with the input, wherein the fourth position on the display unit 1202 is an end point of the slider; and provides (e.g., with the audio enabling unit 1214) to the audio unit 1216 sound information for providing a third sound output to indicate that the control user interface object is at the end of the slider, where the third sound output is different from the first sound output.
In some embodiments, the processing unit 1208 is configured to, in response to receiving a first input corresponding to a first interaction with a control user interface object on the display unit 1202, provide (e.g., with the display enabling unit 1210) to the display unit 1202 data for moving the control user interface object from a first location on the display unit 1202 to a second location on the display unit 1202 that is different from the first location on the display unit 1202 according to the first input, and visually distinguish (e.g., with the display enabling unit 1210) the control user interface object according to the first input during movement of the control user interface object from the first location on the display unit 1202 to the second location on the display unit 1202.
According to some embodiments, the processing unit 1208 is configured to provide (e.g., with the display enabling unit 1210) to the display unit 1202 data for presenting a first user interface having a plurality of user interface objects, wherein the current focus is on a first user interface object of the plurality of user interface objects. The processing unit 1208 is configured to receive (e.g., with the receiving unit 1212) an input (e.g., on the touch-sensitive surface unit 1204) corresponding to a request for changing the position of the current focus in the first user interface when the display unit 1202 presents the first user interface, the input having a direction and a magnitude. The processing unit 1208 is configured to, in response to receiving the input corresponding to the request for changing the position of the current focus in the first user interface, provide (e.g., with the display enabling unit 1210) to the display unit 1202 data for moving the current focus from the first user interface object to a second user interface object, wherein the second user interface object is selected for the current focus in accordance with the direction and/or the magnitude of the input; and provide (e.g., with the audio enabling unit 1214) to the audio unit 1216 first sound information for providing a first sound output corresponding to movement of the current focus from the first user interface object to the second user interface object, wherein the first sound output is provided concurrently with display of the movement of the current focus from the first user interface object to the second user interface object. The pitch of the first sound output is determined (e.g., by the audio enabling unit 1214) based at least in part on the size of the first user interface object, the type of the first user interface object, the size of the second user interface object, and/or the type of the second user interface object.
In some embodiments, the volume of the first sound output is determined (e.g., by the audio enabling unit 1214) based on the magnitude of the input.
In some embodiments, in accordance with a determination that the magnitude of the input meets a predetermined input criterion, the volume of the first sound output is reduced (e.g., by the audio enabling unit 1214).
In some embodiments, the distribution of the first sound output over the plurality of spatial channels is adjusted (e.g., by the audio enabling unit 1214) according to the position of the second user interface object in the first user interface.
In some embodiments, the pitch of the first sound output is determined (e.g., by the audio enabling unit 1214) based on the size of the second user interface object and/or the type of the second user interface object. In response to receiving an input corresponding to a request to change the position of the current focus in the first user interface, the processing unit 1208 is configured to provide (e.g., with the audio enabling unit 1214) to the audio unit 1216 second sound information for providing a second sound output corresponding to movement of the current focus from the first user interface object to the second user interface object, wherein the pitch of the second sound output is determined based at least in part on the size of the first user interface object and/or the type of the first user interface object.
In some embodiments, in accordance with a determination that the magnitude of the input meets a predetermined input criterion, the release of the first sound output is reduced (e.g., by the audio enabling unit 1214).
In some embodiments, the processing unit 1208 is configured to, in response to receiving one or more inputs (e.g., via the receiving unit 1212) corresponding to one or more requests for changing the position of the current focus in the first user interface: provide (e.g., with the display enabling unit 1210) data to the display unit 1202 for moving the current focus from the second user interface object to a third user interface object; provide (e.g., with the audio enabling unit 1214) to the audio unit 1216 third sound information for providing a third sound output corresponding to movement of the current focus from the second user interface object to the third user interface object, wherein the third sound output is provided concurrently with display of the movement of the current focus from the second user interface object to the third user interface object; provide (e.g., with the display enabling unit 1210) data to the display unit 1202 for moving the current focus from the third user interface object to a fourth user interface object; and provide (e.g., with the audio enabling unit 1214) to the audio unit 1216 fourth sound information for providing a fourth sound output corresponding to movement of the current focus from the third user interface object to the fourth user interface object, wherein the fourth sound output is provided concurrently with display of the movement of the current focus from the third user interface object to the fourth user interface object. The sound output corresponding to the movement of the current focus to the largest one of the second, third, and fourth user interface objects has a lower pitch than the respective sound outputs corresponding to the movement of the current focus to the remaining two of the second, third, and fourth user interface objects. The sound output corresponding to the movement of the current focus to the smallest of the second, third, and fourth user interface objects has a higher pitch than the respective sound outputs corresponding to the movement of the current focus to the remaining two of the second, third, and fourth user interface objects.
In some embodiments, a first user interface having a plurality of user interface objects is included in a hierarchy of user interfaces. The processing unit 1208 is configured to receive (e.g., with the receiving unit 1212) an input (e.g., on the touch-sensitive surface unit 1204) corresponding to a request for replacing the first user interface with the second user interface in the hierarchy of user interfaces when the display unit 1202 presents the first user interface having the plurality of user interface objects; and in response to receiving an input corresponding to a request to replace the first user interface with the second user interface, providing (e.g., with display enabling unit 1210) data to display unit 1202 for replacing the first user interface with the second user interface; in accordance with a determination that the first user interface is located above the second user interface in the hierarchy of user interfaces, fifth sound information for providing a fifth sound output is provided to the audio unit 1216 (e.g., with the audio enabling unit 1214); and in accordance with a determination that the first user interface is located below the second user interface in the hierarchy of user interfaces, providing (e.g., with the audio enabling unit 1214) to the audio unit 1216 sixth sound information for providing a sixth sound output different from the fifth sound output.
In some embodiments, the processing unit 1208 is configured to receive (e.g., with the receiving unit 1212) an input (e.g., on the touch-sensitive surface unit 1204) corresponding to a request to activate a user interface object having a current focus when the display unit 1202 presents the first user interface; in response to receiving an input corresponding to a request to activate a user interface object having a current focus: in accordance with a determination that the first user interface object has a current focus, providing (e.g., with the audio enabling unit 1214) to the audio unit 1216 seventh sound information for providing a seventh sound output, the seventh sound output corresponding to activation of the first user interface object; and in accordance with a determination that the second user interface object has a current focus, provide eighth sound information to the audio unit 1216 for providing an eighth sound output, the eighth sound output corresponding to activation of the second user interface object. The eighth sound output is different from the seventh sound output. The relationship between the one or more characteristics of the sound output corresponding to the movement of the current focus to the first user interface object and the one or more characteristics of the second sound output corresponds to the relationship between the one or more characteristics of the seventh sound output and the one or more characteristics of the eighth sound output.
According to some embodiments, the processing unit 1208 is configured to: provide (e.g., with the display enabling unit 1210) the display unit 1202 with data for presenting a first video information user interface comprising descriptive information about the first video; provide (e.g., with the audio enabling unit 1214) to the audio unit 1216 sound information for providing, during presentation of the first video information user interface by the display unit 1202, a first sound output corresponding to the first video; when the display unit 1202 presents the first video information user interface including descriptive information about the first video, receive (e.g., with the receiving unit 1212) an input (e.g., on the touch-sensitive surface unit 1204) corresponding to a request for playback of the first video; in response to receiving the input corresponding to the request for playback of the first video, provide (e.g., with the display enabling unit 1210) to the display unit 1202 data for replacing presentation of the first video information user interface with playback of the first video; during playback of the first video, receive (e.g., with the receiving unit 1212) an input (e.g., on the touch-sensitive surface unit 1204) corresponding to a request for displaying a second video information user interface with respect to the first video; and, in response to receiving the input corresponding to the request for displaying the second video information user interface regarding the first video, provide to the display unit 1202 data for replacing playback of the first video with the second video information user interface regarding the first video, and provide to the audio unit 1216 sound information for providing, during presentation of the second video information user interface by the display unit 1202, a second sound output corresponding to the first video that is different from the first sound output.
In some embodiments, the first sound output and/or the second sound output are selected from a sound track of the first video.
In some embodiments, the second sound output is a sound track of the first video corresponding to the location in the first video that is being played when the input corresponding to the request to display the second video information user interface is received.
In some embodiments, in accordance with a determination that the input corresponding to the request to display the second video information user interface is received within a predetermined duration from the end of the first video, an end credits sound track of the first video is selected (e.g., by the audio enabling unit 1214) for the second sound output.
In some embodiments, the processing unit 1208 is configured to: after initiating playback of the first video, receiving (e.g., with the receiving unit 1212) an input (e.g., on the touch-sensitive surface unit 1204) corresponding to a request to pause the first video; and in response to receiving an input corresponding to a request to pause the first video, pause (e.g., with the display enabling unit 1210) playback of the first video at a first playback position in a timeline of the first video; providing (e.g., with display enabling unit 1210) data to display unit 1202 for presenting one or more selected still images from the first video, wherein the one or more selected still images are selected based on a first playback position at which the first video was paused; and provides (e.g., with the audio enabling unit 1214) to the audio unit 1216 sound information for providing sound output corresponding to the sound track of the first video at the first playback position.
In some embodiments, the processing unit 1208 is configured to: after initiating playback of the first video, receive (e.g., with the receiving unit 1212) an input (e.g., on the touch-sensitive surface unit 1204) corresponding to a request to pause the first video; and in response to receiving the input corresponding to the request to pause the first video: pause (e.g., with the display enabling unit 1210) playback of the first video at a first playback position in a timeline of the first video; provide (e.g., with the display enabling unit 1210) data to the display unit 1202 for presenting one or more selected still images from the first video, wherein the one or more selected still images are selected based on the first playback position at which the first video was paused; and provide (e.g., with the audio enabling unit 1214) sound information to the audio unit 1216 for providing a sound output corresponding to one or more characteristics of the first video at the first playback position.
In some embodiments, the first video information user interface includes a plurality of user interface objects. A first user interface object of the plurality of user interface objects is configured to, when selected, initiate provision by the electronic device 1200 (e.g., with the audio enabling unit 1214) of sound information to the audio unit 1216 for providing a sound output corresponding to at least a portion of a first sound track of the first video. A second user interface object of the plurality of user interface objects is configured to, when selected, initiate provision by the electronic device 1200 (e.g., with the audio enabling unit 1214) of sound information to the audio unit 1216 for providing a sound output corresponding to at least a portion of a second sound track of the first video that is different from the first sound track.
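As an illustration only, a minimal sketch of this per-object preview behavior follows; the object names, track names, and durations are assumptions, not part of the described embodiments.

```swift
// Minimal sketch (assumed names): each user interface object on the first video
// information user interface, when selected, triggers a preview of a portion of
// a different sound track of the first video.
enum InfoScreenObject { case playButton, trailerRow }

struct SoundTrackPortion {
    let trackName: String
    let start: Double       // seconds into the track
    let duration: Double    // length of the previewed portion, in seconds
}

func previewPortion(for object: InfoScreenObject) -> SoundTrackPortion {
    switch object {
    case .playButton:
        // First user interface object: a portion of a first sound track.
        return SoundTrackPortion(trackName: "main theme", start: 0, duration: 15)
    case .trailerRow:
        // Second user interface object: a portion of a second, different sound track.
        return SoundTrackPortion(trackName: "trailer audio", start: 5, duration: 15)
    }
}

print(previewPortion(for: .trailerRow).trackName)
```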
In some embodiments, the first sound output and/or the second sound output are selected based on one or more characteristics of the first video.
In some embodiments, the processing unit 1208 is configured to: before the display unit 1202 presents the first video information user interface, provide (e.g., with the display enabling unit 1210) data to the display unit 1202 for presenting a video selection user interface that includes representations of a plurality of videos; and receive (e.g., with the receiving unit 1212) an input (e.g., on the touch-sensitive surface unit 1204) corresponding to a selection of the representation of the first video of the plurality of videos, wherein the first video information user interface for the first video is presented in response to receiving the input corresponding to the selection of the representation of the first video.
Fig. 13 illustrates a functional block diagram of an electronic device 1300 configured in accordance with the principles of the various described embodiments, in accordance with some embodiments. The functional blocks of the device are, optionally, implemented by hardware, software, firmware, or a combination thereof to carry out the principles of the various described embodiments. It is understood by persons skilled in the art that the functional blocks described in fig. 13 are, optionally, combined or separated into sub-blocks to implement the principles of the various described embodiments. Therefore, the description herein optionally supports any possible combination or separation, or further definition, of the functional blocks described herein.
As shown in fig. 13, an electronic device 1300 is in communication with a display unit 1302. The display unit 1302 is configured to display video playback information. In some embodiments, the electronic device 1300 is in communication with an audio unit 1312. The electronic device 1300 includes a processing unit 1304 in communication with the display unit 1302 and, in some embodiments, with the audio unit 1312. In some embodiments, the processing unit 1304 includes: a data providing unit 1306, an input receiving unit 1308, a pause unit 1310, and a sound information providing unit 1314.
The processing unit 1304 is configured to: provide (e.g., with the data providing unit 1306) data to the display unit 1302 for presenting a first video; while the display unit 1302 presents the first video, receive (e.g., with the input receiving unit 1308) an input corresponding to a user request to pause the first video; in response to receiving the input corresponding to the user request to pause the first video, pause (e.g., with the pause unit 1310) presentation of the first video at a first playback position in a timeline of the first video; and, after pausing the presentation of the first video at the first playback position in the timeline of the first video and while the presentation of the first video is paused, provide (e.g., with the data providing unit 1306) data to the display unit 1302 for presenting a plurality of selected still images from the first video, wherein the plurality of selected still images are selected based on the first playback position at which the first video was paused.
In some embodiments, the plurality of selected still images are selected from a range of playback positions for the first video between the first playback position in the timeline and a second playback position in the timeline that precedes the first playback position.
In some embodiments, the second playback position in the timeline precedes the first playback position by a predetermined time interval.
In some embodiments, the second playback position in the timeline precedes the first playback position by a time interval that is determined after the input corresponding to the request to pause the first video is received.
In some embodiments, the plurality of selected still images from the video includes still images that are not contiguous in the video with any other still image of the plurality of selected still images.
In some embodiments, the plurality of selected still images includes a representative frame.
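The still-image selection described in the preceding paragraphs can be illustrated with a minimal sketch; the look-back interval, the image count, and the even spacing are assumptions, not the described selection logic.

```swift
// Minimal sketch (assumed interval, count, and spacing): choosing still images
// for the paused-state slideshow from a window of the timeline that ends at the
// first playback position, so that no selected frame is contiguous with another.
func selectStillImageTimes(pausePosition: Double,
                           lookBackInterval: Double = 60.0,
                           count: Int = 6) -> [Double] {
    let windowStart = max(0, pausePosition - lookBackInterval)
    guard count > 1 else { return [pausePosition] }
    // Evenly spaced sample times across the window, ending at the position
    // at which the first video was paused.
    let step = (pausePosition - windowStart) / Double(count - 1)
    return (0..<count).map { windowStart + Double($0) * step }
}

// Example: pausing at 600 s selects frames from the preceding minute.
print(selectStillImageTimes(pausePosition: 600))
// [540.0, 552.0, 564.0, 576.0, 588.0, 600.0]
```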
In some embodiments, the device 1300 is in communication with the audio unit 1312, and the processing unit 1304 is further configured to provide (e.g., with the sound information providing unit 1314) sound information to the audio unit 1312 for providing a first sound output corresponding to the first video presented on the display unit 1302.
In some embodiments, the processing unit 1304 is configured to provide (e.g., with the sound information providing unit 1314) sound information to the audio unit 1312 for providing a sound output selected based on the first playback position at which the first video was paused.
In some embodiments, the processing unit 1304 is configured to: after the presentation of the first video is paused at the first playback position in the timeline of the first video and while the presentation of the first video is paused, provide (e.g., with the data providing unit 1306) data to the display unit 1302 for presenting an animation that indicates a transition to a slideshow mode.
In some embodiments, the animation that indicates the transition to the slideshow mode includes a countdown clock.
In some embodiments, the processing unit 1304 is configured to repeatedly provide the display unit 1302 with data for presenting the plurality of selected still images from the first video.
In some embodiments, the processing unit 1304 is configured to provide (e.g., with the data providing unit 1306) data to the display unit 1302 for presenting a respective still image of the plurality of selected still images with a pan effect and/or a zoom effect.
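A minimal sketch of such a pan/zoom presentation follows; the scale factors, offsets, and alternation scheme are illustrative assumptions rather than the described embodiment.

```swift
// Minimal sketch (assumed scale factors and offsets): presenting each selected
// still image with a slow pan and/or zoom while playback remains paused,
// alternating direction from image to image so the slideshow does not repeat.
struct PanZoomEffect {
    let startScale: Double
    let endScale: Double
    let horizontalPan: Double   // normalized offset applied over the image's display time
}

func panZoomEffect(forImageIndex index: Int) -> PanZoomEffect {
    let zoomIn = index % 2 == 0
    return PanZoomEffect(startScale: zoomIn ? 1.0 : 1.15,
                         endScale: zoomIn ? 1.15 : 1.0,
                         horizontalPan: zoomIn ? 0.05 : -0.05)
}

for index in 0..<3 {
    print(panZoomEffect(forImageIndex: index))
}
```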
Optionally, the operations in the information processing methods described above are implemented by running one or more functional modules in an information processing apparatus, such as a general-purpose processor (e.g., as described above with reference to fig. 1A and 3) or an application-specific chip.
Optionally, the operations described above with reference to fig. 6A to 6C, fig. 7A to 7D, fig. 8A to 8C, and fig. 10A to 10B are implemented by the components depicted in fig. 1A to 1B or fig. 10 or fig. 11. For example, receive operation 704, receive operation 804, and receive operation 910 are, optionally, implemented by the event sorter 170, the event recognizer 180, and the event handler 190. The event monitor 171 in the event sorter 170 detects a contact on the touch-sensitive display 112, and the event dispatcher module 174 delivers the event information to the application 136-1. A respective event recognizer 180 of the application 136-1 compares the event information with a respective event definition 186 and determines whether a first contact at a first location on the touch-sensitive surface (or a rotation of the device) corresponds to a predefined event or sub-event, such as selection of an object on the user interface or rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, the event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. The event handler 190 optionally uses or calls the data updater 176 or the object updater 177 to update the application internal state 192. In some embodiments, the event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it will be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in fig. 1A to 1B.
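For illustration, the event-handling flow summarized above can be sketched as follows; the types and the matching rule are invented stand-ins, not the event sorter, event recognizer, or event handler components themselves.

```swift
// Minimal sketch (invented stand-in types): an event monitor detects a contact,
// the dispatcher delivers it to the application, a recognizer compares it with
// an event definition, and the matching handler updates application state.
struct ContactEvent { let x: Double; let y: Double }

struct Recognizer {
    let name: String
    let matches: (ContactEvent) -> Bool        // stand-in for an event definition
    let handle: (ContactEvent) -> Void         // stand-in for an event handler
}

func dispatch(_ event: ContactEvent, to recognizers: [Recognizer]) {
    for recognizer in recognizers where recognizer.matches(event) {
        // A detected predefined event or sub-event activates its handler.
        recognizer.handle(event)
    }
}

let selectObject = Recognizer(
    name: "select object",
    matches: { $0.x > 100 && $0.y > 100 },
    handle: { print("object selected at (\($0.x), \($0.y)); update displayed content") }
)

dispatch(ContactEvent(x: 120, y: 150), to: [selectObject])
```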
The foregoing description has, for purposes of explanation, been given with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain the principles of the invention and its practical applications, and to thereby enable others skilled in the art to best utilize the invention and the various described embodiments, with various modifications, as are suited to the particular use contemplated.

Claims (16)

1. A method, comprising:
at an electronic device having one or more processors and memory, wherein the device is in communication with a display and an audio system:
providing data to the display for presenting a user interface having a plurality of user interface objects including a control user interface object at a first location on the display, wherein the control user interface object is configured to control a respective parameter;
receiving a first input, the first input corresponding to a first interaction with the control user interface object on the display; and
upon receiving the first input corresponding to the first interaction with the control user interface object on the display:
providing data to the display for moving the control user interface object from the first location on the display to a second location on the display in accordance with the first input, the second location on the display being different from the first location on the display; and
providing first sound information to the audio system for providing a first sound output having one or more characteristics that are different from the respective parameter controlled by the control user interface object and that change in accordance with movement of the control user interface object from the first location on the display to the second location on the display.
2. The method according to claim 1, wherein:
in accordance with a determination that the first input meets a first input criterion, the first sound output has a first set of characteristics; and
in accordance with a determination that the first input meets a second input criterion, the first sound output has a second set of characteristics different from the first set of characteristics.
3. The method according to any one of claims 1-2, comprising:
upon responding to the first input, receiving a second input, the second input corresponding to a second interaction with the control user interface object on the display;
responsive to receiving the second input corresponding to the second interaction with the control user interface object on the display and upon receiving the second input:
providing data to the display for moving the control user interface object from the second location on the display to a third location on the display in accordance with the second input, the third location on the display being different from the second location on the display; and
providing second sound information to the audio system for providing a second sound output, the second sound output having one or more characteristics that change in accordance with movement of the control user interface object from the second location on the display to the third location on the display.
4. The method of any of claims 1-2, wherein the one or more characteristics include a pitch of the first sound output, a volume of the first sound output, and/or a distribution of the first sound output over a plurality of spatial channels.
5. The method of any one of claims 1-2, wherein:
the audio system is coupled with a plurality of speakers, the plurality of speakers corresponding to a plurality of spatial channels; and
providing the first sound information for providing the first sound output to the audio system includes: determining a distribution of the first sound output over the plurality of spatial channels according to a direction of the movement of the control user interface object from the first location on the display to the second location on the display.
6. The method according to claim 3, wherein:
the audio system is coupled with a plurality of speakers, the plurality of speakers corresponding to a plurality of spatial channels; and
providing the first sound information for providing the first sound output to the audio system includes: determining a distribution of the first sound output over the plurality of spatial channels based on a position of the control user interface object on the display during the movement of the control user interface object from the second location on the display to the third location on the display.
7. The method of any one of claims 1-2, wherein:
providing the first sound information for providing the first sound output to the audio system includes: determining a volume of the first sound output based on a speed of the movement of the control user interface object from the first location on the display to the second location on the display.
8. The method of any one of claims 1-2, wherein:
the control user interface object is a slider on a slider bar; and
the pitch of the first sound output varies according to the position of the control user interface object on the slider bar.
9. The method of any one of claims 1-2, wherein:
the control user interface object is a slider on a slider bar;
the second location on the display is not an end point of the slider bar; and
the method includes:
receiving input corresponding to a respective interaction with the control user interface object on the display; and
responsive to receiving the input corresponding to the respective interaction with the control user interface object on the display:
providing data to the display for moving the control user interface object to a fourth location on the display in accordance with the input, wherein the fourth location on the display is an end point of the slider bar; and
providing sound information to the audio system for providing a third sound output to indicate that the control user interface object is located at the end point of the slider bar, wherein the third sound output is different from the first sound output.
10. The method of any one of claims 1-2, wherein:
responsive to receiving the first input corresponding to the first interaction with the control user interface object on the display:
providing data to the display for moving the control user interface object from the first location on the display to the second location on the display in accordance with the first input, the second location on the display being different from the first location on the display, and visually distinguishing the control user interface object in accordance with the first input during the movement of the control user interface object from the first location on the display to the second location on the display.
11. The method of any of claims 1-2, wherein the one or more characteristics that are different from the respective parameter controlled by the control user interface object include at least one of: volume, pitch, and stereo balance.
12. The method of claim 11, wherein the respective parameter controlled by the control user interface object is a current location in a media item.
13. The method of any of claims 1-2, wherein a first characteristic of the first sound output changes from a first value to a second value different from the first value when the first input is received, wherein the first characteristic is different from the respective parameter controlled by the control user interface object.
14. The method of any of claims 1-2, wherein the first sound output is provided concurrently and continuously while the first input is received.
15. An electronic device, comprising:
one or more processors; and
a memory storing one or more programs for execution by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-14.
16. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device, cause the device to perform the method of any of claims 1-14.
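For illustration only, the following sketch shows one way the sound-output characteristics recited in claims 4 through 9 could be derived from a slider interaction; the constants, names, and mappings are assumptions rather than the claimed implementation.

```swift
// Minimal sketch (assumed constants and mappings, not the claimed implementation):
// deriving the sound-output characteristics recited in claims 4-9 from a slider
// interaction: pitch from position, volume from movement speed, spatial
// distribution from position/direction, and a distinct sound at an end point.
struct SliderSound {
    let pitch: Double      // Hz; varies with the position on the slider bar
    let volume: Double     // 0...1; varies with the speed of the movement
    let pan: Double        // -1 (left) ... +1 (right); distribution over spatial channels
    let atEndPoint: Bool   // a different, third sound output at an end point
}

func sliderSound(position: Double,    // 0...1 along the slider bar
                 speed: Double) -> SliderSound {
    let clamped = min(max(position, 0), 1)
    return SliderSound(
        pitch: 220 + 440 * clamped,                // higher pitch toward the right end
        volume: min(1.0, 0.2 + 0.8 * abs(speed)),  // faster movement, louder output
        pan: clamped * 2 - 1,                      // pan follows the on-screen position
        atEndPoint: clamped == 0 || clamped == 1
    )
}

print(sliderSound(position: 0.75, speed: 0.5))
```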
CN201910417641.XA 2015-09-08 2016-08-15 Apparatus, method and graphical user interface for providing audiovisual feedback Active CN110109730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910417641.XA CN110109730B (en) 2015-09-08 2016-08-15 Apparatus, method and graphical user interface for providing audiovisual feedback

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201562215244P 2015-09-08 2015-09-08
US62/215,244 2015-09-08
US14/866,570 US9928029B2 (en) 2015-09-08 2015-09-25 Device, method, and graphical user interface for providing audiovisual feedback
US14/866,570 2015-09-25
CN201610670699.1A CN106502638B (en) 2015-09-08 2016-08-15 For providing the equipment, method and graphic user interface of audiovisual feedback
CN201910417641.XA CN110109730B (en) 2015-09-08 2016-08-15 Apparatus, method and graphical user interface for providing audiovisual feedback

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201610670699.1A Division CN106502638B (en) 2015-09-08 2016-08-15 For providing the equipment, method and graphic user interface of audiovisual feedback

Publications (2)

Publication Number Publication Date
CN110109730A CN110109730A (en) 2019-08-09
CN110109730B true CN110109730B (en) 2023-04-28

Family

ID=56799573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910417641.XA Active CN110109730B (en) 2015-09-08 2016-08-15 Apparatus, method and graphical user interface for providing audiovisual feedback

Country Status (2)

Country Link
CN (1) CN110109730B (en)
AU (2) AU2016101424A4 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10691406B2 (en) 2017-02-16 2020-06-23 Microsoft Technology Licensing, Llc. Audio and visual representation of focus movements
CN110351601B (en) * 2019-07-01 2021-09-17 湖南科大天河通信股份有限公司 Civil air defense propaganda education terminal equipment and method
CN115993885A (en) * 2021-10-20 2023-04-21 华为技术有限公司 Touch feedback method and electronic equipment
CN115114475B (en) * 2022-08-29 2022-11-29 成都索贝数码科技股份有限公司 Audio retrieval method for matching short video sounds with live soundtracks of music

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6297818B1 (en) * 1998-05-08 2001-10-02 Apple Computer, Inc. Graphical user interface having sound effects for operating control elements and dragging objects
JP2005322125A (en) * 2004-05-11 2005-11-17 Sony Corp Information processing system, information processing method, and program
CN1956516B (en) * 2005-10-28 2010-05-26 深圳Tcl新技术有限公司 Method for displaying TV function surface by image and combined with voice
US8098856B2 (en) * 2006-06-22 2012-01-17 Sony Ericsson Mobile Communications Ab Wireless communications devices with three dimensional audio systems
US8519964B2 (en) * 2007-01-07 2013-08-27 Apple Inc. Portable multifunction device, method, and graphical user interface supporting user navigations of graphical objects on a touch screen display
KR100842733B1 (en) * 2007-02-05 2008-07-01 삼성전자주식회사 Method for user interface of multimedia playing device with touch screen
US20080229206A1 (en) * 2007-03-14 2008-09-18 Apple Inc. Audibly announcing user interface elements
US20090013254A1 (en) * 2007-06-14 2009-01-08 Georgia Tech Research Corporation Methods and Systems for Auditory Display of Menu Items
US20090125811A1 (en) * 2007-11-12 2009-05-14 Microsoft Corporation User interface providing auditory feedback
KR100867400B1 (en) * 2008-08-14 2008-11-06 (주)펜타비전 Apparatus for broadcasting audio game
US9037468B2 (en) * 2008-10-27 2015-05-19 Sony Computer Entertainment Inc. Sound localization for user in motion
US8190438B1 (en) * 2009-10-14 2012-05-29 Google Inc. Targeted audio in multi-dimensional space
WO2012140525A1 (en) * 2011-04-12 2012-10-18 International Business Machines Corporation Translating user interface sounds into 3d audio space
JP5821307B2 (en) * 2011-06-13 2015-11-24 ソニー株式会社 Information processing apparatus, information processing method, and program
KR101875743B1 (en) * 2012-01-10 2018-07-06 엘지전자 주식회사 Mobile terminal and control method therof
JP2013214192A (en) * 2012-04-02 2013-10-17 Sharp Corp Locator device, method for controlling locator device, control program, and computer-readable recording medium
KR101867513B1 (en) * 2012-05-29 2018-06-15 엘지전자 주식회사 Mobile terminal and control method thereof
KR20130134195A (en) * 2012-05-30 2013-12-10 삼성전자주식회사 Apparatas and method fof high speed visualization of audio stream in a electronic device
JP6154597B2 (en) * 2012-11-16 2017-06-28 任天堂株式会社 Information processing program, information processing apparatus, information processing system, and information processing method
CN103455237A (en) * 2013-08-21 2013-12-18 中兴通讯股份有限公司 Menu processing method and device
US10585486B2 (en) * 2014-01-03 2020-03-10 Harman International Industries, Incorporated Gesture interactive wearable spatial audio system
CN103914303A (en) * 2014-04-10 2014-07-09 福建伊时代信息科技股份有限公司 Method and device for presenting progress bars
CN104618788B (en) * 2014-12-29 2018-08-07 北京奇艺世纪科技有限公司 A kind of method and device of display video information

Also Published As

Publication number Publication date
AU2017100472B4 (en) 2018-02-08
AU2017100472A4 (en) 2017-05-25
AU2016101424A4 (en) 2016-09-15
CN110109730A (en) 2019-08-09

Similar Documents

Publication Publication Date Title
AU2021201361B2 (en) Device, method, and graphical user interface for providing audiovisual feedback
US10613634B2 (en) Devices and methods for controlling media presentation
CN113253844B (en) Apparatus, method and graphical user interface for interacting with user interface objects and providing feedback
US11132120B2 (en) Device, method, and graphical user interface for transitioning between user interfaces
JP6516790B2 (en) Device, method and graphical user interface for adjusting the appearance of a control
CN111427530B (en) Apparatus, method and graphical user interface for dynamically adjusting presentation of audio output
US10120531B2 (en) User interfaces for navigating and playing content
AU2015280257B2 (en) Character recognition on a computing device
AU2024200812A1 (en) Column interface for navigating in a user interface
CN110109730B (en) Apparatus, method and graphical user interface for providing audiovisual feedback
US20240249379A1 (en) Automatic cropping of video content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant